Saturday, December 31, 2011

Perlin Noise in JavaScript

Perlin noise in two dimensions, generated using the code below.
I've been working on an HTML5 canvas-based procedural texture demo (which I'll blog about tomorrow), for which I did a JavaScript port of Ken Perlin's noise() routine (which is in Java). Ahead of tomorrow's blog, I thought I'd briefly discuss Perlin Noise.

Perlin Noise
If you've worked with 3D graphics programs, you're already well familiar with Ken Perlin's famous noise function (which gives rise to so-called Perlin noise). The code for it looks a little scary, but intuitively it's an easy function to understand. Let's take the 2D case (although you can generate Perlin noise for any number of dimensions). Imagine that you have a 256-pixel-square image (blank, all white). Now, imagine that I come along and tell you to mark the canvas off into 32 rows and 32 columns of 8x8-pixel squares. Further imagine that I ask you to assign a random grey value to each square. You've now got a kind of checkerboard pattern of random greys.
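If you want to see that starting point on an actual canvas, here's a minimal sketch (my own illustration, not Perlin's code) that fills a 256x256 canvas, assumed to have the id "myCanvas", with 32x32 blocks of random grey:

// Hypothetical sketch: paint a 256x256 canvas as a 32x32 grid
// of 8x8-pixel squares, each filled with a random grey value.
var canvas = document.getElementById("myCanvas");  // assumed 256x256 canvas
var ctx = canvas.getContext("2d");
for (var row = 0; row < 32; row++) {
   for (var col = 0; col < 32; col++) {
      var grey = Math.floor(Math.random() * 256);
      ctx.fillStyle = "rgb(" + grey + "," + grey + "," + grey + ")";
      ctx.fillRect(col * 8, row * 8, 8, 8);
   }
}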

What differentiates Perlin noise from random checkerboard noise is that in Perlin's case, the color values are interpolated smoothly from the center of each tile outward, in such a way that you don't see an obvious gridlike pattern. In other words, when you cross a tile boundary, you want the slope of the pixel intensity to be continuous (no discontinuities). You can visualize the end result by imagining that you took the 32x32 random checkerboard pattern and passed it through a Gaussian blur a few times. Pretty soon, you wouldn't even be able to tell that gridlines ever existed in the first place. That's the idea with Perlin noise. You want to interpolate colors from one block to the next in such a way that there are no discontinuities at the cell boundaries. It turns out this requirement can be met in quite a variety of ways (by using cubic splines, quartics, or even sine- or cosine-based interpolation between squares, for example; or by using Perlin's gain() function). There's no one "correct" way to do it.
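To make the "smooth at the boundaries" requirement concrete, here's a small sketch (my own, assuming all you want is an ease curve with zero slope at both ends) showing two such curves; the quintic one is the fade() used in Perlin's reference code below:

// Two ease curves with zero slope at t = 0 and t = 1, so adjacent
// cells join without visible seams. The quintic is what the noise()
// code below uses; the cosine version is one possible alternative.
function fadeQuintic(t) { return t * t * t * (t * (t * 6 - 15) + 10); }  // 6t^5 - 15t^4 + 10t^3
function fadeCosine(t)  { return (1 - Math.cos(Math.PI * t)) / 2; }

// Either curve can drive an otherwise ordinary linear interpolation:
function smoothMix(a, b, t) { return a + fadeQuintic(t) * (b - a); }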

I'd love to be able to link to a good Perlin noise tutorial on the Web, but so far I haven't found one that doesn't try to conflate fractal noise, turbulence, and other topics with Perlin noise. The best treatment I've come across, frankly, is (not surprisingly) in Perlin's own Texturing and Modeling book (which is truly a first-rate book, "must reading" for graphics programmers).

Fortunately, Ken Perlin has done all the hard work for us in writing the necessary interpolation (and other) code for noise(), and he has kindly provided a 3D reference implementation of the noise() function in highly optimized Java. I ported his code to JavaScript (see below) and I'm happy to say it works very well in a canvas environment (as we'll see in tomorrow's post, right here). It's reasonably fast, too. In fact, it's so fast that there's no need to fall back to a 2D version for better speed. This is good, because the 3D version gives you added versatility in case you decide you want to animate your noise in the time domain.

Usage of Perlin's function is very straightforward. It takes 3 arguments (in Java, these are double-precision floating point numbers -- which is fine, because in JavaScript all numbers are IEEE-754 double-precision floating point numbers, under the covers). The way the function is usually used, the first two arguments correspond to the x and y coordinate values of a pixel in 2-space. If you're working in 3-space, the third argument is the z-value. In 2-space, you can call the noise() function with the third argument set to whatever you like. If you're doing a 2D animation and want the texture to animate in real time, you can link the third argument (that z-value) to a time-based index, and the texture will animate smoothly, because you are, in effect, sampling closely spaced slices of a 3D noise space.

The return value from noise() is a double-precision floating point number in the range 0..1. Actually, in Perlin's original code, the return value can range from -1.0 to 1.0, but in my JavaScript port (below), I rescale the return value to 0..1. Here's the code:

// This is a port of Ken Perlin's Java code. The
// original Java code is at http://cs.nyu.edu/%7Eperlin/noise/.
// Note that in this version, a number from 0 to 1 is returned.
PerlinNoise = new function() {

this.noise = function(x, y, z) {

   var p = new Array(512)
   var permutation = [ 151,160,137,91,90,15,
   131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
   190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
   88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
   77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
   102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
   135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
   5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
   223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
   129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
   251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
   49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
   138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
   ];
   for (var i = 0; i < 256; i++)
      p[256+i] = p[i] = permutation[i];

      var X = Math.floor(x) & 255,                  // FIND UNIT CUBE THAT
          Y = Math.floor(y) & 255,                  // CONTAINS POINT.
          Z = Math.floor(z) & 255;
      x -= Math.floor(x);                                // FIND RELATIVE X,Y,Z
      y -= Math.floor(y);                                // OF POINT IN CUBE.
      z -= Math.floor(z);
      var    u = fade(x),                                // COMPUTE FADE CURVES
             v = fade(y),                                // FOR EACH OF X,Y,Z.
             w = fade(z);
      var A = p[X  ]+Y, AA = p[A]+Z, AB = p[A+1]+Z,      // HASH COORDINATES OF
          B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z;      // THE 8 CUBE CORNERS,

      return scale(lerp(w, lerp(v, lerp(u, grad(p[AA  ], x  , y  , z   ),  // AND ADD
                                     grad(p[BA  ], x-1, y  , z   )), // BLENDED
                             lerp(u, grad(p[AB  ], x  , y-1, z   ),  // RESULTS
                                     grad(p[BB  ], x-1, y-1, z   ))),// FROM  8
                     lerp(v, lerp(u, grad(p[AA+1], x  , y  , z-1 ),  // CORNERS
                                     grad(p[BA+1], x-1, y  , z-1 )), // OF CUBE
                             lerp(u, grad(p[AB+1], x  , y-1, z-1 ),
                                     grad(p[BB+1], x-1, y-1, z-1 )))));
   }
   function fade(t) { return t * t * t * (t * (t * 6 - 15) + 10); }
   function lerp( t, a, b) { return a + t * (b - a); }
   function grad(hash, x, y, z) {
      var h = hash & 15;                      // CONVERT LO 4 BITS OF HASH CODE
      var u = h<8 ? x : y,                 // INTO 12 GRADIENT DIRECTIONS.
             v = h<4 ? y : h==12||h==14 ? x : z;
      return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? v : -v);
   } 
   function scale(n) { return (1 + n)/2; }
}

So let's say you have a function that marches through all the pixel values in an image, and you want to use this code. You need the x and y coordinates of the pixel, the width of the image (as w), and the height (as h). Then you could do something like:

x /= w; y /= h; // normalize
size = 10;  // pick a scaling value
n = PerlinNoise.noise( size*x, size*y, .8 );
r = g = b = Math.round( 255 * n );

Here, the z-argument is arbitrarily set to .8, but it could just as well be set to zero or whatever you like. You can fiddle with size to get a result that's visually pleasing (it will vary considerably, depending on the effect that you're trying to achieve). If you're animating the texture, the next time-step might set the z-arg to 0.9, say, instead of 0.8.

In the example given above, we're setting r = g = b, which of course gives a grey pixel. The overall result looks like the picture at the top of this post. In fact, that image was generated using the code shown above.
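And if you do want to animate the texture, a minimal sketch of an animation loop might look like the following (this assumes the PerlinNoise object above and a canvas element with the id "myCanvas"; the step size and frame interval are arbitrary):

// Hypothetical animation loop: the z argument advances with time,
// so successive frames sample nearby slices of the 3D noise space.
var canvas = document.getElementById("myCanvas");   // assumed canvas element
var ctx = canvas.getContext("2d");
var size = 10, t = 0;

function drawFrame() {
   var w = canvas.width, h = canvas.height;
   var imageData = ctx.getImageData(0, 0, w, h);
   for (var y = 0; y < h; y++) {
      for (var x = 0; x < w; x++) {
         var n = PerlinNoise.noise(size * x / w, size * y / h, t);
         var idx = (x + y * w) * 4;
         imageData.data[idx] = imageData.data[idx + 1] = imageData.data[idx + 2] = Math.round(255 * n);
         imageData.data[idx + 3] = 255;   // fully opaque
      }
   }
   ctx.putImageData(imageData, 0, 0);
   t += 0.1;                  // step through the z (time) dimension
   setTimeout(drawFrame, 50); // roughly 20 frames per second
}

drawFrame();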

Perlin's justly famous noise function is enormously versatile (and a ton of fun to play with). As I say, the most authoritative, in-depth discussion of it occurs in Perlin's Texturing and Modeling book. We'll see more colorful uses of the noise() function in tomorrow's blog. Don't miss it!

Friday, December 30, 2011

Convolution Kernels in HTML5 Canvas

Convolution is a straightforward mathematical process that is fundamental to many image processing effects. If you've played around with the Filter > Other > Custom dialog in Photoshop, you're already familiar with what convolutions can do.
A sharpening convolution applied to Lena.

A convolution applies a matrix (often called a kernel) against each pixel in an image. For any given pixel in the image, a new pixel value is calculated by multiplying the various values in the kernel by corresponding (underlying) pixel values, then summing the result (and rescaling to the applicable pixel bandwidth, usually 0..255). If you imagine a 3x3 kernel in which all values are equal to one, applying this as a convolution is the same as multiplying the center pixel and its eight nearest neighbors by one, then adding them all up (and dividing by 9 to rescale the pixel). In other words, it's tantamount to averaging 9 pixel values, which essentially blurs the image slightly if you do this to every pixel, in turn.
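To make the arithmetic concrete, here's a minimal sketch (mine, not the extension code shown below) that applies that all-ones 3x3 kernel to a single pixel of an ImageData array; data is ImageData.data, w is the image width in pixels, and (x, y) is assumed not to lie on the image border:

// Hypothetical example: average a pixel with its 8 nearest neighbors.
var kernel = [ 1, 1, 1,
               1, 1, 1,
               1, 1, 1 ];      // all ones: a simple blur
var divisor = 9;               // sum of the kernel values (normalization)

function convolvePixel(data, w, x, y) {
   var sum = 0, k = 0;
   for (var dy = -1; dy <= 1; dy++) {
      for (var dx = -1; dx <= 1; dx++, k++) {
         var idx = ((x + dx) + (y + dy) * w) * 4;   // red channel of the neighbor
         sum += data[idx] * kernel[k];
      }
   }
   return Math.round(sum / divisor);   // new red value for pixel (x, y)
}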

The application of convolutions to an HTML5 canvas image is straightforward. I've created an example Chrome extension that is active whenever you visit a URL ending in ".jpg" or ".png" (from any website). The extension provides a 3x3 convolution kernel (as text fields). You can enter any values you want (positive or negative) in the kernel columns and rows. Behind the scenes, the kernel will be normalized for you automatically. (That simply means each value is divided by the sum of all the values, except when the values sum to zero or less, in which case the normalization step is skipped.)

Some convolutions, such as the Sobel kernel, have kernel values that add up to zero. In this case, you end up with a mostly dark image that you'll probably want to invert. My Chrome extension provides an Invert Image button, for just that occasion.
A modified Sobel kernel, plus image inversion.
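For reference, one standard form of the Sobel kernel (the x-gradient variant) looks like this; its values sum to zero, which is why the output stays mostly dark until you invert it:

// The Sobel x-gradient kernel (responds to intensity changes in the
// horizontal direction). Because the entries sum to zero, flat regions
// of the image come out black.
var sobelX = [ -1, 0, 1,
               -2, 0, 2,
               -1, 0, 1 ];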

The UI also includes a Reset button (which reloads the original image and sets the kernel to an identity kernel) and a button that opens the newly modified image in a new window as a PNG that can be saved to disk.

The code for the Chrome extension is shown below. To use it, do this:

1. Copy and paste all of the code into a new file. Call it Kernel.user.js (or whatever you want, but be sure the name ends with .user.js).
2. Save the file (text-only) to any convenient folder.
3. Launch Chrome. Use Control-O to bring up the file-open dialog. Navigate to the file you just saved. Open it.
4. Notice at the very bottom of the Chrome window, there'll be a status warning (saying that extensions can harm your health, etc.) with two buttons, Continue and Discard. Click Continue.
5. In the Confirm Installation dialog that pops up, click the Install button. After you do this, the extension is installed and running. Test it by navigating to any convenient URL that ends in ".jpg" or ".png" (but do note, the extension may fail due to security restrictions if you are loading images from disk, via a "file:" scheme). For best results, navigate to an image on the web using http.


// ==UserScript==
// @name           KernelTool
// @namespace      ktKernelTool
// @description    Canvas Image Kernel Tool
// @include        *
// ==/UserScript==



// A demo script by Kas Thomas.
// Use as you will, at your own risk.


// The stuff under loadCode() will be injected
// into a <script> element in the page.

function loadCode() {


window.KERNEL_SIZE = 3; // 3 x 3 square kernel

window.transformImage = function( x1,y1,w,h ) {
      
 var canvasData = context.getImageData(x1,y1,w,h);

 var kernel = getKernelValues( );
 normalizeKernel( kernel );
  
 for (var x = 1; x < w-1; x++) {
      for (var y = 1; y < h-1; y++) {

   // get the 3x3 neighborhood around this pixel
   // (read from the on-screen context; it isn't
   // overwritten until putImageData runs, below)
   var area = 
    context.getImageData(x-1,y-1,
     KERNEL_SIZE,KERNEL_SIZE);
   
          // Index of the current pixel in the array
          var idx = (x + y * w) * 4;

   // apply kernel to current index
   var rgb = applyKernel( kernel, area, canvasData, idx );

   canvasData.data[ idx ] = rgb[0];
   canvasData.data[idx+1] = rgb[1];
     canvasData.data[idx+2] = rgb[2];
       }
 } 

 // inner function that applies the kernel
 function applyKernel( k, localData, imageData, pixelIndex ) {

  var sumR = 0; var sumG = 0; var sumB = 0;
   var n = 0;

  for ( var i = 0; i < k.length; i++,n+=4 ) {
   sumR += localData.data[n]  *  k[i];
   sumG += localData.data[n+1] * k[i];
   sumB += localData.data[n+2] * k[i];
  }

  if (sumR < 0)  sumR *= -1; 
  if (sumG < 0)  sumG *= -1; 
  if (sumB < 0)  sumB *= -1; 

  return [Math.round( sumR ),Math.round( sumG ),Math.round( sumB )]; 
 }

 context.putImageData( canvasData,x1,y1 );
};

window.invertImage = function( ) {

  var w = canvas.width;
  var h = canvas.height;
  var canvasData = 
   context.getImageData(0,0,w,h);
  for (var i = 0; i < w*h*4; i+=4)  {
   canvasData.data[i] = 255 - canvasData.data[i];
   canvasData.data[i+1] = 255 - canvasData.data[i+1];
   canvasData.data[i+2] = 255 - canvasData.data[i+2];  
  }
  context.putImageData( canvasData,0,0 ); 
 }

// get an offscreen drawing context for the image
window.getOffscreenContext = function( w,h ) {
   
 var offscreenCanvas = document.createElement("canvas");
 offscreenCanvas.width = w;
 offscreenCanvas.height = h;
 return offscreenCanvas.getContext("2d");
};

window.getKernelValues = function( ) {

 var kernel = document.getElementsByClassName("kernel");
 var kernelValues = new Array(9);
 for (var i = 0; i < kernelValues.length; i++)
  kernelValues[i] = 1. * kernel[i].value;
 return kernelValues;
}

window.setKernelValues = function( values ) {

 var kernel = document.getElementsByClassName("kernel");
 for (var i = 0; i < kernel.length; i++)
  kernel[i].value = values[i];
}

window.normalizeKernel = function( k ) {
 
 var sum = 0;

 for (var i = 0; i < k.length; i++)
  sum += k[i];

 if (sum > 0)
  for (var i = 0; i < k.length; i++)
   k[i] /= sum;
}

window.setupGlobals = function() {

 window.canvas = document.getElementById("myCanvas");
 window.context = canvas.getContext("2d");
 var imageData = context.getImageData(0,0,canvas.width,canvas.height);
 window.offscreenContext = getOffscreenContext( canvas.width,canvas.height );
 window.offscreenContext.putImageData( imageData,0,0 );
};

setupGlobals();  // actually call it

// enable the buttons now that code is loaded
document.getElementById("reset").disabled = false;
document.getElementById("invert").disabled = false;
document.getElementById("PNG").disabled = false;

} // end loadCode()



/* * * * * * * * * main() * * * * * * * * */

(function main( ) {

 // are we really on an image URL?
 var ext = location.href.split(".").pop();
 if (ext.match(/jpg|jpeg|png/) == null )
    return;

 // ditch the original image
 img = document.getElementsByTagName("img")[0];
 img.parentNode.removeChild(img); 

 // put scripts into the page scope in 
 // a <script> elem with id = "myCode"
 // (we will eval() it in an event later...)
 var code = document.createElement("script");
 code.setAttribute("id","myCode");
 document.body.appendChild(code);  
 code.innerHTML += loadCode.toString() + "\n";

 // set up canvas
 canvas = document.createElement("canvas");
 canvas.setAttribute("id","myCanvas");
 document.body.appendChild( canvas );

 context = canvas.getContext("2d");

 image = new Image();

 image.onload = function() {

  canvas.width = image.width;
  canvas.height = image.height;
       context.drawImage(image,0, 0,canvas.width,canvas.height ); 
 };

 // This line must come after, not before, onload!
 image.src = location.href;

 createKernelUI( );
      createApplyButton( );
 createResetButton( );  
 createInvertImageButton( );
 createPNGButton( ); // create UI for Save As PNG

 function createPNGButton( ) {

  var button = document.createElement("input");
  button.setAttribute("type","button");
  button.setAttribute("value","Open as PNG...");
  button.setAttribute("id","PNG");
  button.setAttribute("disabled","true");
  button.setAttribute("onclick",
   "window.open(canvas.toDataURL('image/png'))" );
  document.body.appendChild( button );
 }

 function createInvertImageButton( ) {

  var button = document.createElement("input");
  button.setAttribute("type","button");
  button.setAttribute("value","Invert Image");
  button.setAttribute("id","invert");
  button.setAttribute("disabled","true");
  button.setAttribute("onclick",
   "invertImage()" );
  document.body.appendChild( button );
 }

 function createResetButton( ) {

  var button = document.createElement("input");
  button.setAttribute("type","button");
  button.setAttribute("value","Reset");
  button.setAttribute("id","reset");
  button.setAttribute("disabled","true");
  button.setAttribute("onclick",
   "var data = offscreenContext.getImageData(0,0,canvas.width,canvas.height);" +
   "context.putImageData(data,0, 0 );" + 
   "setKernelValues([0,0,0,0,1,0,0,0,0]);" );
  document.body.appendChild( button );
 }

 // This will load code if it hasn't been loaded yet.
 function createApplyButton( ) {

  var button = document.createElement("input");
  button.setAttribute("type","button");
  button.setAttribute("value","Apply");
  button.setAttribute("onclick","if (typeof codeLoaded == 'undefined')" +
   "{ codeLoaded=1; " + 
   "code=document.getElementById(\"myCode\").innerHTML;" +
   "eval(code); loadCode(); }" +
   "transformImage(0,0,canvas.width,canvas.height);" );
  document.body.appendChild( button );
 }

 function createKernelUI( ) { 

  var kdiv = document.createElement("div");
  var elem = new Array(9);

  for ( var i = 0; i < 9; i++ ) {
   elem[i] = document.createElement("input");
   elem[i].setAttribute("type","text");
   elem[i].setAttribute("value","1");
   elem[i].setAttribute("class","kernel");
   elem[i].setAttribute("style","width:24px");
   elem[i].setAttribute("id","k" + i);
  }
  for ( var i = 0; i < 9; i++ ) {
   kdiv.appendChild( elem[i] );
   if (i == 2 || i == 5 || i == 8)
    kdiv.innerHTML += "<br/>";
  }

  document.body.appendChild( kdiv );
 } 
})();

It can be fun and educational to experiment with new kernel values (and to apply more than one convolution sequentially to achieve new effects). With the right choice of values, you can easily achieve blurring, sharpening, embossing, and edge detection/enhancement, among other effects. 

Incidentally, for more information about the Lena test image (in case you're not familiar with the interesting backstory), check out http://en.wikipedia.org/wiki/Lenna.

Sunday, December 25, 2011

How to Save a Canvas as PNG

In yesterday's post, I showed how to render an image into an HTML5 canvas element (and then operate on it with canvas API calls). When you've made changes to a canvas image, the time may come when you want to save the canvas as a regular PNG image. As it turns out, doing that isn't hard at all.

The key is to use canvas.toDataURL('image/png') to serialize the image as a data URI, which you can (of course) open in a new window with window.open( uri ). Once the image is open in a new window (note: you may have to instruct your browser to allow popups), you can right-click on the image to get the browser's Save Image As... command in a context menu. From there, you just save the image as you normally would.

The following code can be added to yesterday's example in order to create a button on the page called Open as PNG...

function createPNGButton( ) {

 var button = document.createElement("input");
 button.setAttribute("type","button");
 button.setAttribute("value","Open as PNG...");
 button.setAttribute("onclick",
  "window.open(canvas.toDataURL('image/png'))" );
 document.body.appendChild( button );
}

As you can see, there's no rocket science involved. Just a little HTML5 magic.

Saturday, December 24, 2011

Gamma Adjustment in an HTML5 Canvas

I came up with kind of a neat trick I'd like to share. If you're an HTML canvas programmer, listen up. You just might get a kick out of this.

You know how, when you're loading images in canvas (and then fiddling with the pixels using canvas API code), you have to load your scripts and images from the same server? (For security reasons.) That's no problem for the hard-core geeks among us, of course. Many web developers keep a local instance of Apache or other web server running in the background for just such occasions. (As an Adobe employee, I'm fortunate to be able to run Adobe WEM, aka Day CQ, on my machine.) But overall, it sucks. What I'd like to be able to do is fiddle with any image, taken from any website I choose, any time I want, without having to run a web server on my local machine.

So, what I've done is create a Chrome extension that comes into action whenever my browser is pointed at any URL that ends in ".png" or ".jpg" or ".jpeg". The instant the image in question loads, my extension puts it into a canvas element, re-renders it, and exposes its 2D context for scripts to work against.

For demo purposes, I've included some code for making gamma adjustments to the image via canvas-API calls (which I'll talk more about later).

The code for the Chrome extension is shown below. To use it, do this:

1. Copy and paste all of the code into a new file. Call it BrightnessTest.user.js. Actually, call it whatever you want, but be sure the name ends with .user.js.

2. Save the file (text-only) to any convenient folder.

3. Launch Chrome. (I did all my testing in Chrome. The extension should be Greasemonkey-compatible, but I have not tested it in Firefox.) Use Control-O to bring up the file-open dialog. Navigate to the file you just saved. Open it.

4. Notice at the very bottom of the Chrome window, there'll be a status warning (saying that extensions can harm your loved ones, etc.) with two buttons, Continue and Discard. Click Continue.

5. In the Confirm Installation dialog that pops up, click the Install button. After you do this, the extension is installed and running.

Test the extension by navigating to http://goo.gl/UQpRA (the penguin image shown in the above screenshots). Please try a small image (like the penguin) first, for performance reasons. Note: Due to security restrictions, you can't load images from disk (no file: scheme in the URL; only http: and/or https: are allowed). Any PNG or JPG on the web should work.

When the image loads, you should see a small slider underneath it. This is an HTML5 input element. If you are using an obsolete version of Chrome, you might see a text box instead.

If you move the slider to the right, you'll (in effect) do a gamma adjustment on the image, tending to make the image lighter. Move the slider to the left, and you'll darken the image. No actual image processing takes place until you lift your finger off the mouse (the slider just slides around until a mouseup occurs, then the program logic kicks in). After the image repaints, you should see a gamma curve appear under the slider, as in the examples above.

Here's the code for the Chrome extension:

// ==UserScript==
// @name           ImageBrightnessTool
// @namespace      ktBrightnessTool
// @description    Canvas Image Brightness Tool
// @include        *
// ==/UserScript==



// A demo script by Kas Thomas.
// Use as you will, at your own risk.


// The stuff under loadCode() will be injected
// into a <script> element in the page.

function loadCode() {

window.LUT = null;

// Ken Perlin's bias function
window.bias = function( a, b)  {
 return Math.pow(a, Math.log(b) / Math.log(0.5));
};

window.createLUT = function( biasValue ) {
 // create global lookup table for colors
 LUT = createBiasColorTable( biasValue );
};

window.createBiasColorTable = function( b ) {

 var table = new Array(256);
 for (var i = 0; i < 256; i++)
  table[i] = applyBias( i, b );
 return table;
};

window.applyBias = function( colorValue, b ) {
   
 var normalizedColorValue = colorValue/255;
 var biasedValue = bias( normalizedColorValue, b );
 return Math.round( biasedValue * 255 );
};

window.transformImage = function( x,y,w,h ) {
      
 var canvasData = offscreenContext.getImageData(x,y,w,h);
 var limit = w*h*4; 
  
 for (i = 0; i < limit; i++)  
  canvasData.data[i] = LUT[ canvasData.data[i] ]; 
  
 context.putImageData( canvasData,x,y );
};


// get an offscreen drawing context for the image
window.getOffscreenContext = function( w,h ) {
   
 var offscreenCanvas = document.createElement("canvas");
 offscreenCanvas.width = w;
 offscreenCanvas.height = h;
 return offscreenCanvas.getContext("2d");
};

window.getChartURL = function() {
 
 var url = "http://chart.apis.google.com/chart?";
 url += "chf=bg,lg,0,EFEFEF,0,BBBBBB,1&chs=100x100&";
 url += "cht=lc&chco=FF0000&&chds=0,255&chd=t:"
 url += LUT.join(",");
 url += "&chls=1&chm=B,EFEFEF,0,0,0";
 return url;
}

setupGlobals = function() {

 window.canvas = document.getElementById("myCanvas");
 window.context = canvas.getContext("2d");
 var imageData = context.getImageData(0,0,canvas.width,canvas.height);
 window.offscreenContext = getOffscreenContext( canvas.width,canvas.height );
 window.offscreenContext.putImageData( imageData,0,0 );
};

setupGlobals();  // actually call it

} // end loadCode()



/* * * * * * * * * main() * * * * * * * * */

(function main( ) {

 // are we really on an image URL?
 var ext = location.href.split(".").pop();
 if (ext.match(/jpg|jpeg|png/) == null )
    return;

 // ditch the original image
 img = document.getElementsByTagName("img")[0];
 img.parentNode.removeChild(img); 

 // put scripts into the page scope in 
 // a <script> elem with id = "myCode"
 // (we will eval() it in an event later...)
 var code = document.createElement("script");
 code.setAttribute("id","myCode");
 document.body.appendChild(code);  
 code.innerHTML += loadCode.toString() + "\n";

 // set up canvas
 canvas = document.createElement("canvas");
 canvas.setAttribute("id","myCanvas");
 document.body.appendChild( canvas );

 context = canvas.getContext("2d");

 image = new Image();

 image.onload = function() {

  canvas.width = image.width;
  canvas.height = image.height;
       context.drawImage(image,0, 0,canvas.width,canvas.height );            
 };

 // This line must come after, not before, onload!
 image.src = location.href;

 createSliderUI( );  // create the slider UI
 createGoogleChartUI( );  // create chart UI


 function createGoogleChartUI( ) {
  // set up iframe for Google Chart
  var container = document.createElement("div");
  var iframe = document.createElement("iframe");
  iframe.setAttribute("id","iframe");
  iframe.setAttribute("style","padding-left:14px");
  iframe.setAttribute("frameborder","0");
  iframe.setAttribute("border","0");
  iframe.setAttribute("width","101");
  iframe.setAttribute("height","101");
  container.appendChild(iframe); 
  document.body.appendChild(container);
 }
 

 // Create the HTML5 slider UI
 function createSliderUI( ) {

  var div = document.body.appendChild( document.createElement("div") );
       var slider = document.createElement("input");
  slider.setAttribute("type","range");
  slider.setAttribute("min","0");
  slider.setAttribute("max","100");
  slider.setAttribute("value","50");
  slider.setAttribute("step","1");

  // if code hasn't been loaded already, then load it now
  // (one time only!); update the slider range indicator;
  // create a color lookup table
  var actionCode = "if (typeof codeLoaded == 'undefined')" +
   "{ codeLoaded=1; " + 
   "code=document.getElementById(\"myCode\").innerHTML;" +
   "eval(code); loadCode(); }" +
   "document.getElementById(\"range\").innerHTML=" +
   "String(this.value*.01).substring(0,4);" +
   "createLUT( Number(document.getElementById('range').innerHTML) );" 
   
      
  slider.setAttribute("onchange",actionCode);


  // The following operation is too timeconsuming to attach to
  // the onchange event. We attach it to onmouseup instead.
  slider.setAttribute("onmouseup",
   "document.getElementById('iframe').src=getChartURL();"+
   "transformImage(0,0,canvas.width,canvas.height);");
  
  div.appendChild( slider );
  div.innerHTML += '<span id="range">0.5</span>';
  
 }

 
})();

This code is a little less elegant in Chrome than it would have been in Firefox (which, unlike Chrome, supports E4X and exposes a usable unsafeWindow object). The code does, however, illustrate a number of useful techniques. To wit:

1. How to swap out an <img> for a canvas image.
2. How to draw to an offscreen context.
3. How to inject script code into page scope, from extension (gmonkey) scope.
4. How to use the HTML5 slider input element.
5. How to change the gamma (or, colloquially and somewhat incorrectly, "brightness") of an image's pixels via a color lookup table.
6. How to use Ken Perlin's bias( ) function to remap pixel values in the range 0..255.
7. How to display the resulting gamma curve (actually, bias curve) in a Google Chart in real time.

That's a fair amount of stuff, actually. Discussing it could take a long time. The code's not long, though, so you should be able to grok most of it from a quick read-through.

The most important concept here, from an image processing standpoint, is the notion of remapping pixel values using a pre-calculated lookup table. The naive (and very slow) approach would simply be to parse pixels and do a separate bias() call on each red, green, or blue value in the image. But that would mean calling bias() hundreds of thousands of times (maybe millions of times, in a sizable image). Instead, we create a table (an array of size 256) and remap the values there once, then look up the appropriate substitution value for each color in each pixel, rather than laboriously calling bias() on each color in each pixel.
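Here's a minimal sketch of that lookup-table idea in isolation (it restates what the extension code above already does; bias() is the function shown earlier, and canvasData is assumed to come from getImageData):

// Build the 256-entry lookup table once (0.7 is an arbitrary bias setting)...
var LUT = new Array(256);
for (var i = 0; i < 256; i++)
   LUT[i] = Math.round(255 * bias(i / 255, 0.7));

// ...then remapping the whole image is one array lookup per color channel.
for (var j = 0; j < canvasData.data.length; j += 4) {
   canvasData.data[j]     = LUT[canvasData.data[j]];      // red
   canvasData.data[j + 1] = LUT[canvasData.data[j + 1]];  // green
   canvasData.data[j + 2] = LUT[canvasData.data[j + 2]];  // blue
}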

If this is the first time you've encountered Ken Perlin's bias() function, it's actually a very important class of function to understand. Fundamentally, it remaps the unit interval (that is, real numbers in the range 0..1) to itself. With a bias value of 0.5, all real numbers from 0..1 map to their original values. With a bias value less than 0.5, the remapping is swayed in the manner shown in the screenshot above, on the right. A bias value greater than 0.5 bends the curve in exactly the opposite direction. But in any case, 0 always ends up mapping to zero and 1 always maps to one, no matter what the bias knob is set to. The function is, in that sense, nicely normalized.
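A handy way to summarize bias(a, b) is that it always sends 0.5 to b while pinning the endpoints in place. A few quick sanity checks, using the bias() function from the code above:

// bias(a, b) = Math.pow(a, Math.log(b) / Math.log(0.5))
bias(0.5, 0.5);   // 0.5  -- a bias of 0.5 is the identity mapping
bias(0.5, 0.25);  // 0.25 -- the midpoint is pulled to the bias value
bias(0.5, 0.75);  // 0.75
bias(0.0, 0.25);  // 0    -- the endpoints never move
bias(1.0, 0.25);  // 1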

Bias is technically quite a bit different from a true "gamma" adjustment. Gamma curves come from a different formula, and they don't have bias's desirable property of behaving intuitively with respect to the 0.5 midpoint. Nevertheless, because "gamma" is more familiar to graphic artists, I've (ab)used that word throughout this post, and even in the headline. (Shame on me.)

The performance of the bias code is surprisingly poor in this particular usage (as a Chrome extension). On my Dell laptop, I see processing at a rate of just under 50,000 pixels per second. The same bias-lookup code running in a normal web page (not a Chrome extension that injects it into page scope) goes about ten times faster. Yes, an order of magnitude faster. In a native web page, I can link the image transformation call to an onchange handler (so that the image -- even a large one -- updates continuously, in real time, as you drag the slider) -- that's how fast the code is in some of my other projects. But in this particular context (as a Chrome extension) it seems to be dreadfully slow, so I've hooked the main processing routine to an onmouseup handler on the slider. Otherwise the slider sticks.

Anyway, I hope the techniques in this post have whetted your appetite for more HTML5 canvas explorations. There are some great canvas demos out there, and I'll be delving into some more canvas scripting techniques in the not-so-distant future.

Happy pixel-poking!




Friday, December 16, 2011

How to tell if a corporation is evil

I made up this checklist as a quick way to gauge whether a company is (or might be considered) evil. It's not a definitive list, by any means, and there is no right or wrong way to score a company against this list. In the end, you have to decide whether you think a given company is evil or not. This list is meant only as a guide, a starting point for discussion.

1. Does your company value profits over people? This is easy to check. When earnings are threatened, does the company simply lay off workers? Most companies follow this kneejerk policy, since "people costs" (salaries and benefits) constitute the single biggest cost for most companies.

2. Does your company engage in deceptive, monopolistic, or anti-competitive business practices?

3. Does your company routinely lie to customers in its advertising?

4. Does your company engage in unfair or deceptive HR practices, such as billing a 50-hr/week job as being 40 hours per week, or hiring two part-time workers (without benefits) in lieu of one full-time worker with benefits, or failing to promote women or minorities? (E.g., are executives mostly men?)

5. Does your company pay zero taxes? Many large companies (e.g., General Electric) manage to escape paying taxes altogether even in profitable years (http://www.reuters.com/article/2011/11/03/us-usa-tax-corporate-idUSTRE7A261C20111103). Whether this is done legally or illegally doesn't matter: If (as some say) corporations are "persons," not paying your fair share in taxes makes you a bad "person" regardless of what the law says.

6. Does your company accept corporate welfare (subsidies or bailout money from government)? According to the Cato Institute, the U.S. federal government spent $92 billion on corporate welfare during fiscal year 2006 (a year in which there was no recession). Recipients that year included Boeing, Xerox, IBM, Motorola, Dow Chemical, and General Electric.

7. Does your company spend money on lobbying? Check your company's lobbying track record at http://www.opensecrets.org/. Lobbying, by definition, is an attempt to exert influence on lawmakers by circumvention of normal democratic processes. Which is inherently unethical.

8. Does your company endorse political candidates or contribute to their campaigns? It should be obvious that corporations have no legitimate role in politics. They should not tell workers how to vote and shouldn't spend shareholder money on politicians' (re)election campaigns.

9. Does your company pollute the environment? (What is your company's green agenda? Does it even have one?) Does your company have overseas subsidiaries or contractors who pollute the environment?

10. Does your company routinely outsource jobs to countries where labor is cheap and labor laws are lax? Suppose your company hires workers from North America, Europe, and Asia during good times but tends to lay off workers in North America or Europe preferentially during bad times (allowing Asians to remain on the payroll). This is the same as exporting jobs to Asia. It's a very common tactic, and it escapes notice because in good times, the company does not appear to be favoring any one geo.

11. Does your company use layoffs as a way to tailor HR ratios? This is a very common tactic. For example, in technology, companies often go out of their way to try to obtain more female employees since women traditionally have not been drawn to technology, and the ratio of male to female workers in tech is (consequently) high. Big companies like to beef up their female-to-male percentages as much as possible to avoid lawsuits (so that when a woman is fired, she can't claim sexual discrimination). When layoffs occur, many companies will preferentially lay off male workers to "balance the ratios." I have personally seen this practice in action, at one large company, where layoffs were viewed as an opportunity to balance all kinds of HR ratios. I believe it is one reason (but certainly not the only reason) why the most recent recession was particularly hard on men (see for example http://www.spiegel.de/international/germany/0,1518,622373,00.html, and http://mjperry.blogspot.com/2008/12/2008-male-recession-gender-jobs-gap.html, and http://healthland.time.com/2011/03/01/why-the-recession-may-trigger-more-depression-among-men/).

12. Do the company's top people earn more than 20 times the median of all workers? The Economic Policy Institute and the Institute for Policy Studies have both studied CEO pay ratios across a variety of industries in a variety of countries. In most of the industrialized world, the CEO pay ratio is 20 or less. In the U.S., it tends to be well over 100:1 (and in some years it has been 250:1 or higher; see http://articles.moneycentral.msn.com/Investing/Extra/CEOsNearRecordPayRatios.aspx and http://www.aflcio.org/corporatewatch/paywatch/). There is no reason for any one human being in any one company to make 100 times what another human being makes, and in fact most companies could find talented volunteers to fill CxO positions for far less money than 20:1. Certainly anything in excess of 20:1 is just that -- needless excess. Steve Jobs paid himself a dollar a year at Apple. (He did own a lot of stock, of course.) More CEOs should follow his example.

Thursday, December 15, 2011

Firefox adoption just keeps going down


Browser usage data from Q4 2008 to Q4 2011, for visitors of this blog.

I was somewhat surprised (as many people were) to learn, earlier this month, that in terms of market share, Google's Chrome browser has recently surpassed Firefox in overall adoption. I shouldn't have been surprised at all: If I had taken the time to look carefully at my own blog's analytics, I would've seen this very result nine months ago.

Readers of my blog tend to be developers, techies, and early adopters, and so Internet Explorer usage has never been high for people who visit this blog, whereas Firefox usage has always been high (68% in Q4 of 2008, for example). Trends like Chrome overtaking Firefox tend to show up early in my analytics; my readers are trendsetters. I didn't see the Firefox death spiral coming at all, however.

I decided, finally, to sit down and sift through my analytics to find out exactly how browser usage has varied over time for visitors to this blog. In the graph above, I've plotted browser statistics quarter-by-quarter for the 13 quarters going back to Q4 2008 (the earliest date for which analytics were available). Note that data for the most recent quarter run only to mid-December (obviously). For the total data set going back to Q4 2008, we're talking slightly more than half a million total visits.

The vertical scale tops out at 100 percent. Firefox (in blue) starts at 68 percent and ends, most recently, at 32 percent. Chrome starts at 8 percent and finishes at an impressive 38 percent. The curves crossed, for visitors to this blog, around nine months ago (in early 2011).

What can I say? I don't think these sorts of trends bode well for Firefox. In fact I think the future is looking pretty bad for Firefox (for a variety of reasons), and there's probably not time to reinvent the product at this point.

In the meantime, if you work for a software (or other) company that currently supports Firefox but not Chrome, you've got your priorities backwards! Get to work supporting Chrome, ASAP, lest the market leave you behind.

Sunday, December 11, 2011

6 Tips for Beginning Canvas Programmers

Lately I've spent some time programming against the <canvas> API. Predictably, I encountered all the common beginner's mistakes, and had to work through them. Along the way, I learned a number of useful things about canvas programming, some basic, some not-so-basic. Here's a quick summary:

1. To avoid security errors, always serve your HTML (and scripts) from the same server as any images you're going to be working with. (Corollary: Don't "serve" your HTML and images from the local filesystem. That's a sure way to get security errors.) Install a local instance of Apache web server (or some other web server) and serve content to your browser from localhost, if need be.

2. If you're modifying pixels using context.getImageData( ), use putImageData( ) to draw back to the image, and be sure to supply all 3 arguments to putImageData( )! Here is a common pattern:

function doSomething() {

var canvasData =
context.getImageData(0, 0, imageObj.width, imageObj.height);

for (var x = 0; x < w; x++) {
for (var y = 0; y < h; y++) {
var idx = (x + y * w) * 4;
var r = canvasData.data[idx + 0];
var g = canvasData.data[idx + 1];
var b = canvasData.data[idx + 2];

// do something to r,g,b here

canvasData.data[idx + 0] = r;
canvasData.data[idx + 1] = g;
canvasData.data[idx + 2] = b;
}
}

// draw it back out to the screen:
context.putImageData(canvasData, 0, 0);
}

Notice the three arguments to putImageData(). The final two args are the x and y position at which to draw the image. If you forget those two args, expect errors.


3. You can draw offscreen by simply creating a canvas element programmatically. Like this:
    imageObj = new Image();
imageObj.src = "http://localhost:4502/content/lena.png";

function getOffscreenContext(imageObj) {
var offscreenCanvas = document.createElement("canvas");
offscreenCanvas.width = imageObj.width;
offscreenCanvas.height = imageObj.height;
return offscreenCanvas.getContext("2d");
}

If you use this function (or one like it), you can keep an offscreen copy of your image around, which can be extremely handy.

4. You can save programmatically created/modified images offline. The trick is to slurp the canvas into a data URL and then open or display that URL in a new frame or window where you can right-click it to get the usual image-save options from the browser. Something like this:

myImage = canvas.toDataURL("image/png"); 
window.open( myImage ); // opens in new window as a PNG

This serializes the image as a (big, huge) data URL, then opens the image in a new window. The new window contains a PNG image, plain and simple.

5. Any time you assign a value to canvas.width or canvas.height, you will wipe the canvas clean! This is both weird and handy. Just doing canvas.width = canvas.width will instantly erase the canvas.


6. When all else fails, consult the HTML 5 Canvas Cheatsheet.

Tuesday, December 06, 2011

How Google is quietly killing Firefox



After a certain period of time spent programming, you develop a kind of sixth sense about what programs are doing. Surprisingly often, this sixth sense turns out to be right. I don't know if what I'm about to say is right. I do know that my sixth sense is telling me something.

I've been a Firefox user for years, and I still like Firefox, the way I still like my 1998 Jeep Grand Cherokee (with the cast-iron six-cylinder engine) even though it's not the latest-and-greatest model. Lately, though, Firefox has been freezing and/or quitting unexpectedly with greater and greater frequency, even though my browsing habits haven't changed.

What's changed over the past couple of years? The Web. Web content has become more and more dynamic, more AJAX-driven, more JavaScript-intensive.

JavaScript is a great language, but like a lot of languages these days it relies on programmers being careful about how they manage runtime objects. It's surprisingly easy, if you manipulate the DOM a lot, to generate code that leaks memory. See this nice writeup for more info (also see https://developer.mozilla.org/en/Debugging_memory_leaks).

My contention is that most AJAX code leaks memory like a sieve, and this is why more and more users are seeing their browsers (not just Firefox, but IE, Safari, and Chrome as well) freeze up and die in normal operation these days. The various browsers differ in how they manage memory and how they do garbage collection. Chrome, in particular, has undergone significant changes recently in how it handles garbage collection. This is no accident. It's in response to the greater challenges imposed on all browsers today by dynamic web pages.

When I leave AJAX-intensive web pages open all day in Firefox, I eventually find that Firefox is using 1.3 gigabytes or more of RAM. Usually, by the time it reaches 1.4 gigabytes, the browser freezes (goes white) and then either dies outright, or unfreezes again after 30 or 40 seconds. If it manages to unfreeze, usually about 60 megabytes of RAM have been freed up (according to Task Manager). But then memory usage marches upwards again and it freezes (goes white) again, within seconds. The freeze/unfreeze cycle continues until Firefox unceremoniously crashes.

My programmer's sixth sense tells me that when memory usage exceeds a certain level, Firefox lacks sufficient headroom to carry out a proper garbage collection cycle. Partway through the cycle, it runs out of memory and initiates another GC cycle. This repeats until the program is in what one might call a GC panic. Uncommanded program termination is the inevitable result.

Firefox is often roundly criticized for its tendency to "leak memory," but I would caution that it is not really the core program that is leaking memory. It's really the AJAX code running in JavaScript-intensive pages that's causing the memory leakage.

So ironically, Firefox's reputation is suffering not because of anything Mozilla's programmers are doing, but because of web developers who are using jQuery and tons of other popular libraries indiscriminately, without regard to memory leakage. (Be sure to have a look at this blog post about jQuery's role in memory leakage.)

I haven't done careful testing, but I can tell you (from daily experience) that if I leave Gmail open in Firefox all night, Firefox will run out of memory by morning. If I leave Facebook and Twitter open as well, I can count on running out of memory in just a few hours.

Now here's where it starts to get really troubling.

Mozilla's greatest revenue source today (accounting for more than 80 percent of annual income) is Google. Mozilla is deeply dependent on Google for operating revenue. And yet, it is in direct competition with Google for browser market-share. Recently, the Firefox and Chrome adoption curves crossed, with Firefox now lagging behind Chrome for the first time.

There's a huge conflict of interest here. If you buy the theory that most people who abandon Firefox do so because it crashes (runs out of memory) unpredictably, it stands to reason that all Google has to do to pick up market share in the browser world is publish AJAX-intensive web pages (Google Search, Gmail, Google Maps, etc.) of a kind that Firefox's garbage-collection algorithms choke on — and in the meantime, improve Chrome's own GC algorithms to better handle just those sorts of pages.

That's exactly what seems to be going on. Or at least that's what my gut says.

And my gut is sometimes right.

Saturday, November 05, 2011

This Is What Democracy Looks Like


Yesterday, I submitted the following op-ed piece to my local newspaper. I don't know yet if it will be published there. But it is published here. ;)

When my 17-year-old son visited me in Jacksonville last month over Columbus Day weekend, I had several "activities" lined up for him on his visit. The first and most important was to accompany me to the inaugural General Assembly of the first Occupy Jacksonville meetup in Hemming Plaza.

I was (and am) proud to have had my son by my side that day. I wanted him to see democracy in action. And that's exactly what we experienced, together, along with upwards of 300 other concerned Jacksonville residents.

Some might suggest that if I wanted my son to see democracy in action, I should perhaps take him to Washington, DC. But unfortunately, "democracy in action" is not the order of the day any more in Washington (to the extent it ever was). Mostly what happens in Washington these days are closed-door meetings between government officials and their corporate sponsors, the big-dollar lobbyists.

To call the Occupy Movement "democracy in action" might seem premature, or even delusional, depending on your political bent. After all, protesters in a public park don't sign laws into existence. Protesters balance no budgets, create no legislation, add no earmarks.

They do something far more important, though. They change the nature of political discourse in this country, one conversation at a time.

Naysayers are fond of criticizing the Occupy Movement as having no demands and (more to the point) being utterly powerless to bring about concrete change in Washington. And yet, on November 1, seven U.S. Senators (Tom Udall of New Mexico, Michael Bennet of Colorado, Tom Harkin of Iowa, Dick Durbin of Illinois, Chuck Schumer of New York, Sheldon Whitehouse of Rhode Island, and Jeff Merkley of Oregon) introduced a Constitutional amendment that would effectively overturn the Supreme Court's Citizens United v. Federal Election Commission decision, thereby restoring the ability of Congress (and the states) to properly regulate the campaign finance system.

Getting the Citizens United decision nullified has been one of the most avidly advocated reforms of the Occupy Movement, not just in New York but in all venues where the movement exists. Placards and signs denouncing the 2010 Supreme Court decision (a decision that effectively opened the door to unlimited corporate and special-interest spending in elections, treating corporations as ordinary citizens) can be found at any Occupy rally, including the rallies that have been occurring in Jacksonville since October 8.

Following the Supreme Court's January 21, 2010 decision, there was a national outcry. President Barack Obama himself said that the decision "gives the special interests and their lobbyists even more power in Washington — while undermining the influence of average Americans who make small contributions to support their preferred candidates." Obama would eventually say in his weekly radio address that "this ruling strikes at our democracy itself" and "I can't think of anything more devastating to the public interest."

The controversy over the ruling quickly died down, however, and from January 2010 to September 2011, there was no further action on the matter.

But then a curious thing happened.

Following on the heels of the Arab Spring uprisings, a protest called Occupy Wall Street began in lower Manhattan. With startling alacrity, what began as a modest public gathering in an obscure park in New York spread to become a nationwide phenomenon involving (by some counts) as many as a thousand cities, here and abroad.

Much uncertainty lies ahead. What can be said with certainty, however, is that the Occupy movement, for all its supposed disorganization and lack of goals, has (in a few short weeks) fundamentally changed the nature of political debate in the United States. What was previously unthinkable (e.g., a Constitutional amendment overthrowing the Supreme Court's Citizens United decision) is now not only thinkable, but front-and-center in Congress.

I'm proud to have brought my son to Occupy Jacksonville. I'm proud of what the Occupy movement has done, and will do. As we're fond of chanting in our marches: this is what democracy looks like.

Thursday, October 13, 2011

What does it take to get a job at Google?

It takes patience, that's for sure. The process, from start to finish, can take six weeks or more. And although the infographic doesn't specifically say so, the candidate can be blackballed by one "no" vote along the way. Hiring decisions have to be unanimous. (Click on graphic for larger version.)


What does it take to get a job at Google


Infographic by Jobvine Recruitment Network

Tuesday, October 04, 2011

Occupation spreading


For CNN's well-paid talking heads (who represent the 1% wealthiest Americans) it seems incomprehensible that a political movement with no clear agenda and no leadership hierarchy is spreading across the U.S. like wildfire. But it's true. The above map (current as of early October 2011) shows some of the planned locations for Occupation events.

Meanwhile, reporters like CNN's cutesy-faced Erin Burnett (who worked for several years on Wall St. and has a personal net worth of $12 million) are aghast at the impromptu street protests, wondering empty-headedly "what their agenda is" and snickering at the youthful innocence of the participants, going so far as to ask one of them (on her Oct. 3rd show) "Do you know that the bank bailouts benefitted the American taxpayer?"

Burnett is obviously not anxious to burn bridges with her Wall Street contacts by acting like a real journalist. Because that would require that she do actual research, ask probing questions, and get at the truth, which is that the people protesting in the streets represent the 99% of Americans who have been defrauded by Wall Street and by their government representatives (who answer to big business). The Erin Burnetts of the world are in the lucky 1%. She doesn't have to care about the 99% who are struggling. She has her cake, and eats it too, while 15% of her viewership lives on food stamps.


Sunday, September 11, 2011

Remembering the Fallen



It's time to remember the roughly 200 brave individuals (many of them still nameless) who chose to plunge to their deaths on 9/11 rather than face immolation or massive smoke inhalation (or the imminent collapse of the towers themselves). According to a story in the Daily Mail, "sometimes the fallers were separated by an interval of just a second. At one point nine people fell in six seconds from five adjacent windows; at another, 13 people fell in two minutes." Twenty minutes after the first building was struck, two people fell simultaneously from the same window on the 95th floor. Some individuals (above) chose to hold hands with a partner on the way down.

Most jumpers were in free-fall for about 10 seconds before hitting the ground at between 120 and 200 miles per hour.

Wednesday, June 22, 2011

Oracle endorses Adobe strategy with FatWire purchase

The Content Management ecoverse is abuzz today with the news of Oracle's decision to acquire Web Experience Management vendor FatWire (for an undisclosed sum). While I suspect Oracle didn't have to spend much more than $200 million to acquire the Mineola, NY-based FatWire, it would not surprise me if cash-flush Larry & Co. dumped as much as $400 million (in stock and cash) into the acquisition. Recall that back in 2006 (when the U.S. dollar was actually worth something) Oracle cheerfully coughed up $440 million for Stellent.

What does the FatWire acquisition say about Oracle's previous foray into WCM (via Stellent)? What does it say about the current marketplace?

The second question is easier. Oracle has clearly bought into the Customer Experience Management Weltanschauung. Oracle's Kumar Vora says: "As more companies rely on their web sites as their most important channel for communication, marketing, customer engagement and commerce, they realize that optimizing the online experience requires best-in-class capabilities delivered on an integrated platform with complete back-office and data integration." This is exactly the message of Adobe with respect to CEM, and I (for one) find it only slightly ironic (and not at all coincidental) that the FatWire acquisition announcement follows Adobe's announcement of its new Digital Enterprise Platform by only a day.

Bottom line, it's clear from Oracle's own rhetoric surrounding the purchase of FatWire that the FatWire acquisition is nothing if not a hand-over-fist endorsement of the Adobe CEM vision.

Does this mean Oracle will be able to compete effectively against Adobe in the CEM-platform space? For insight on this, perhaps we should return to the question I posed earlier about FatWire vs. Stellent. With Stellent, Oracle acquired a capable Web CMS of modest complexity that offered good value for the money. They folded it into the Universal Content Management suite, did a little rearchitecting (but without ditching the proprietary Idoc Script language), and came up with an ECM-flavored document-management suite that could also do WCM. Meanwhile, Oracle invested relatively little R&D in personalization, optimization, campaign management, social collaboration, multi-device output, or any of the other disciplines that companies like Day Software (now Adobe) and FatWire continued to innovate in. In short, Oracle chose not to innovate (at a pace that would ensure continued relevance for UCM). As a result, it is today having to buy (rather than build) its way into the WEM/CEM space.

It's expected to take another 6 to 9 months for the FatWire deal to close (during which time the two companies, and their respective WCM offerings, will operate separately). After the deal closes, expect another year or so for any meaningful product integration to occur. (In the meantime, you can expect to hear a lot of happy talk about CMIS being the holy grail of interoperability.) Bottom line, while the well-oiled Oracle sales machine will no doubt have a nice-sounding story in place well before then, I don't think you can expect a credible integration of FatWire technology with the UCM/Fusionware morass in less than 18 months; and by that time, who knows what other technologies Oracle will have to have bought in order to maintain credibility in the WEM/CEM/WCM space?

My hat's off to Yogesh Gupta for doing a fine job of fattening the FatWire calf just in time for slaughter. Had he waited another six months to clinch a buyout deal, things might have looked quite a bit less rosy for the company formerly known as Divine, née OpenMarket (née Future Tense).

Saturday, May 28, 2011

A perfect storm for XMP?



Like many others in this biz, I've been following the development of Adobe's Extensible Metadata Platform (XMP) for quite some time, and for at least three years I've been saying that it would be in Adobe's best interest to hand oversight of this ostensibly open standard to a bona fide standards body (rather than let adoption languish as people continue to associate XMP with "Adobe-proprietary"). Happily, Adobe is now doing the right thing: XMP is in the process of becoming ISO-16684-1, via an effort led by my colleague Frank Biederich.

This effort couldn't have come at a better time. The content world is in desperate need of an industry-standard way to represent rich-content metadata, and I strongly believe XMP is the right technology at the right time.

One can quibble over whether embedding XMP in a host file is the correct thing to do (as opposed to placing it in the file's resource fork, or simply creating XMP as a separate sidecar file and managing it separately). There are good arguments pro and con. But packaging issues aside, there's not much question, in my mind, that nearly every form of content benefits from having easily-parsed metadata associated with it. This is particularly true of enterprise content (content that's managed in some kind of access-controlled repository). The availability of metadata makes a given content bit easier to repurpose, easier to track, easier to search -- easier to work with all the way around.

At Day Software (now a part of Adobe), we've long had a saying that "everything is content." I'm fond of saying that once metadata is attached, "everything is an asset."

I think XMP is poised to become a huge success, comparable to, say, Atom or RSS. First of all, the specification itself is short and easily understood (thus easily implemented) -- always a Good Thing where XML standards are concerned. It's also semantically flexible and highly extensible -- again, two very good things. The fact that it leverages RDF also bodes well for XMP as we trundle ever closer to the Ontological Web. The social dimensions of an asset, for example, could easily be accommodated by XMP via RDF triples. Let your imagination dwell on that for a minute.
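
To make that concrete, here's a rough sketch -- in plain JavaScript, with invented property names (ex:likedBy and friends are not part of the XMP spec) -- of the kind of RDF-style triples I have in mind. Each one is a statement about an asset that an XMP packet could carry:

// Hypothetical RDF-style triples describing the "social dimensions"
// of an asset. The ex: property names are made up for illustration.
var assetTriples = [
  { subject: "urn:asset:brochure.pdf", predicate: "ex:likedBy",   object: "urn:user:jdoe" },
  { subject: "urn:asset:brochure.pdf", predicate: "ex:comment",   object: "Great shot -- use it on the home page." },
  { subject: "urn:asset:brochure.pdf", predicate: "ex:sharedVia", object: "urn:channel:twitter" }
];

// Serialized as RDF inside an XMP packet, each triple becomes a
// statement that any XMP-aware tool could read back out.
assetTriples.forEach(function (t) {
  console.log(t.subject + " " + t.predicate + " " + t.object);
});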

But I also think the timing for XMP is quite propitious simply in terms of where it is in its lifecycle. I gave a talk last week at Adobe Research in Basel (on the subject of XMP) in which I mentioned a certain lifecycle theory (whose it is, I can't remember) that sees all technologies as going through three phases, each lasting about six years. Phase One is Acceptance: it takes roughly six years for people to change the way they think about anything truly new. (During this period, only alpha geeks actually adopt the new technology.) Phase Two is Adoption: it takes another six years for the (at last understood) technology to enter the mainstream in earnest. Phase Three is Ubiquity: this is when adoption becomes universal (or as close to universal as it's going to get) and the market is saturated.

Not everything goes through an 18-year cycle, obviously. This is just a rough conceptual model, but I find that it applies in a surprising number of cases. If we look at XMP (which dates to 2001) through this model, we see that it is a little more than halfway through the Adoption phase. I think that the ratification of ISO-16684-1 will kick off a Ubiquity phase in which we see XMP used in more ways and in more places than anyone would ever have thought possible.

My talk last week in Basel was in front of a roomful of developers. Everyone there was familiar with aspect-oriented programming, so I made the (admittedly imperfect) analogy with XMP, saying "Imagine if resources could have aspects. What would that look like? It would look a lot like XMP." The info that gets packaged up into XMP often has to do with crosscutting concerns, like access control, DRM, version history, and what might be called "serving suggestions" (mimetype, compatibility hints). It's not that far different from an advice stack in JBossAOP. Even the packaging concerns are familiar from AOP. A classic problem in AOP, after all, is where to put aspects: in the source code itself (as annotations), or in separate descriptors (as in JBossAOP)? The same concerns (and tradeoffs) arise with XMP.

In any case, I think the pressing need for more and better metadata (particularly for enterprise content), plus the built-in support for XMP in many cell-phone cameras, plus the need for ontology-friendly web formats going forward, plus many other factors (including the opening up of XMP under ISO-16684), spells a perfect storm for XMP as we hurtle toward Web.Next. All I can say is: it's about time.

Saturday, April 30, 2011

A script for putting page numbers on PDF pages

We had a project in the office recently in which a large amount of web documentation was converted to a single PDF file using the built-in Web Capture capability of Acrobat X (otherwise known as Control-Shift-O). When we were done, we wanted a page number to appear on each page. It turns out it's fairly easy to apply page numbers to any PDF file using a bit of JavaScript.

Here is a script that will add a page number (as a read-only text field) to the upper left corner of every page of a PDF document. (Obviously, you can adjust the script to place the page number in any position you want, if you don't like the upper left corner.) The best way to play with this code is to run it in the JS console in Acrobat Pro (Control-J to make the console appear). Paste the code into the console, select all of it, then type Control-Enter to execute it.



var inch = 72; // PDF user space: 72 units per inch
for (var p = 0; p < this.numPages; p++) {
  // Build a half-inch-square rectangle half an inch in from the
  // upper left corner of the page.
  var aRect = this.getPageBox({ nPage: p }); // [left, top, right, bottom]
  aRect[0] += 0.5 * inch;           // move left edge in by half an inch
  aRect[2] = aRect[0] + 0.5 * inch; // make the field half an inch wide
  aRect[1] -= 0.5 * inch;           // move top edge down by half an inch
  aRect[3] = aRect[1] - 0.5 * inch; // make the field half an inch high
  // Add a text field named "p.<page index>" on page p.
  var f = this.addField("p." + p, "text", p, aRect);
  f.textSize = 20;            // 20-pt type
  f.textColor = color.blue;   // use whatever color you want
  f.strokeColor = color.white;
  f.textFont = font.Helv;
  f.value = String(p + 1);    // p is zero-based, so display p + 1
  f.readonly = true;
}

When you're done, you'll have a page number (as a read-only text field) on each page. If you like, you can Flatten the PDF programmatically using flattenPages() in the console afterwards, to convert the text fields to static objects on the pages (making them no longer editable as text fields).
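
For what it's worth, the flattening itself is a one-liner in the same console (flattenPages() is a standard Acrobat JavaScript method on the Doc object; save a copy first, since the fields can't be recovered afterwards):

// Flatten every page, converting the page-number fields (and any other
// form fields or annotations) into ordinary, non-editable page content.
this.flattenPages();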

Thursday, April 21, 2011

Configurable Web Capture in Acrobat

Today I put in a feature request for the next dot-release of Adobe Acrobat X. What I requested is a URL white-list/black-list capability for Web Capture.

You may already know about Acrobat's incredibly useful Control-Shift-O (Open URL) functionality, which does just what you think it should: It captures a web page as a PDF document. The built-in functionality is already plenty powerful. It walks all the links in a web page and captures all linked-to pages (and their linked-to pages, etc., however many levels deep you want), creating appropriate links and Bookmarks inside the finished PDF document. And you can specify "Stay on the same server" if you want, to be sure the web-capture session doesn't inadvertently pull in content from a partner's (or competitor's) site, say. Which is all pretty neat.

I ran into a situation the other day, though, where I wanted to capture all the web content from a site, but I didn't want to pull down any content from URLs containing /javadoc/. It would have been neat if Acrobat's Ctrl-Shift-O feature had an Advanced Configuration dialog in which I could specify certain URLs that either MUST always (white list) or MUST NOT (black list) be followed in the course of a traversal. Neater still would be if you could supply the white-listed or black-listed URLs as regular expressions. (Follow this pattern; don't follow that pattern.)
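
Just to illustrate what I mean (none of this exists in Acrobat; it's purely a hypothetical sketch of the filtering logic I'd like to see), the follow-or-skip decision for each link could be as simple as:

// Hypothetical URL filter: skip any URL matching a blacklist pattern;
// if a whitelist is supplied, additionally require a whitelist match.
var blacklist = [/\/javadoc\//, /\.zip$/];
var whitelist = []; // empty means "no whitelist restriction"

function shouldFollow(url) {
  var blocked = blacklist.some(function (re) { return re.test(url); });
  if (blocked) return false;
  if (whitelist.length === 0) return true;
  return whitelist.some(function (re) { return re.test(url); });
}

shouldFollow("http://example.com/docs/intro.html");    // true
shouldFollow("http://example.com/javadoc/index.html"); // false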

I don't hold out much hope that this kind of feature will make it into a dot release, but I figured I would submit it anyway. As they say, no squeaky, no greasy.