
Querying randomly from uneven distributions

December 20, 2009


Not long ago I was looking for a way to select random elements from an unevenly distributed set.

Solutions out there

First thing, I googled up every possible combination of keywords that made sense. Yet all I found on the subject was a single forum thread. They came up with the following.

  • Increase row ID by element weight on each element insert.
  • Query a random element by generating a random ID between the first and latest and return the row with the nearest lower ID.
  • Update weights by rebuilding the table from the row in question.
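
A minimal sketch of one way to read that scheme; the table and column names are mine, and the id column here is the running total of the weights that come before the row, so each element owns the ID range from its id up to its id plus its weight (MySQL):

-- Hypothetical layout for illustration.
CREATE TABLE Items (
    id     INT UNSIGNED NOT NULL PRIMARY KEY,  -- total weight of all earlier rows
    weight INT UNSIGNED NOT NULL,
    name   VARCHAR(64)  NOT NULL
);

-- Insert an element of weight 5: its id is the previous running total.
INSERT INTO Items (id, weight, name)
SELECT COALESCE(MAX(id + weight), 0), 5, 'apple' FROM Items;

-- Pick a random point below the total weight and return the row with the
-- nearest lower (or equal) id.
SELECT i.id, i.weight, i.name
FROM Items AS i
JOIN (SELECT FLOOR(RAND() * MAX(id + weight)) AS pick FROM Items) AS p
  ON i.id <= p.pick
ORDER BY i.id DESC
LIMIT 1;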

Needless to say, this solution scales poorly at high element counts, since every weight change means rebuilding the table from the changed row onward, and it is outright unacceptable in real-time web applications. So I was forced to put together a solution of my own.

Scraping the surface

The first idea that popped into my mind was an SQL approach with two tables: ‘Elements’ and ‘Multiplicity’. In ‘Elements’ go all the properties of the actual elements of the distribution, one row for each. In ‘Multiplicity’, each element gets one row per unit of weight, and rows are inserted or deleted there each time an element’s weight is changed. The query is as simple as this (MySQL):


SELECT * FROM Elements NATURAL JOIN Multiplicity ORDER BY RAND() LIMIT 1

(The join is there only to show that it’s the information in Elements that carries value for us. I could just as well select from Multiplicity and then look up the corresponding element by key. Also, I looked past the inefficiency of ORDER BY RAND() for the sake of compactness.)
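
To make this concrete, here is a sketch of how the two tables might look (the column names are mine); giving an element a weight of three simply means inserting three rows for it into Multiplicity:

-- An element of weight N gets N rows in Multiplicity, so the NATURAL JOIN
-- above (on the shared element_id column) repeats it N times.
CREATE TABLE Elements (
    element_id INT UNSIGNED NOT NULL PRIMARY KEY,
    name       VARCHAR(64)  NOT NULL
    -- ...plus whatever other properties the elements have
);

CREATE TABLE Multiplicity (
    row_id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    element_id INT UNSIGNED NOT NULL,
    FOREIGN KEY (element_id) REFERENCES Elements (element_id)
);

-- 'apple' with weight 3: one row in Elements, three rows in Multiplicity.
INSERT INTO Elements VALUES (1, 'apple');
INSERT INTO Multiplicity (element_id) VALUES (1), (1), (1);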

While the above does work, there are still problems. For one, only integer weights can be handled. But I could live with that. The real problem is that every single unit of weight is another row in Multiplicity. Imagine just 1 million rows in Elements, each with an average weight of 1000: I’ve just added three orders of magnitude to my data storage needs. No doubt this concept also fails at large scale, albeit for a different reason.

One that works

So, what’s the solution that’s fast and doesn’t consume the entire Internet? Right after the SQL approach I started experimenting with something similar to the first solution (inserting and querying work the same as there), but I already knew that updating a row’s weight was not an option. And that was the key: I cannot update the weight in a row, but I can update the element properties.

So what I do is insert the same element again with the new weight, and update the original row by removing all element properties and leaving only the weight, thus creating an empty but weighted row. This empty row becomes a placeholder for a future element with the same weight. Before inserting an element I first check for such a placeholder and insert a new row only if there is none; otherwise I just update the placeholder. This way the IDs and weights remain consistent, and the method is applicable to real-time applications even with huge datasets.
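
A sketch of that bookkeeping, built on the same cumulative-id layout as the first scheme (the table and column names are mine; a NULL name marks an empty placeholder row). Changing an element’s weight blanks its current row, then either recycles a placeholder of the new weight or appends a fresh row:

CREATE TABLE WeightedItems (
    id     INT UNSIGNED NOT NULL PRIMARY KEY,  -- running total of earlier weights
    weight INT UNSIGNED NOT NULL,
    name   VARCHAR(64)  NULL                   -- NULL marks a placeholder
);

-- Changing the weight of 'apple' to 8:
-- 1. Turn its current row into a placeholder, keeping only the weight.
UPDATE WeightedItems SET name = NULL WHERE name = 'apple';

-- 2a. Recycle an existing placeholder of weight 8 if there is one...
UPDATE WeightedItems SET name = 'apple'
WHERE name IS NULL AND weight = 8
LIMIT 1;

-- 2b. ...or, only if step 2a affected no rows, append the element at the end.
INSERT INTO WeightedItems (id, weight, name)
SELECT COALESCE(MAX(id + weight), 0), 8, 'apple' FROM WeightedItems;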

There are only two minor problems. First, to keep the number of placeholders low, only a predefined set of weights may be used: powers of two, Fibonacci numbers, et cetera. Secondly, the empty placeholder rows are also counted in the querying process, distorting the relative weights of the real elements. But this is easily resolved by re-running the query until it returns a real element.
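
The lookup itself stays the same as in the first scheme; the application simply repeats it whenever the draw lands on a placeholder. Against the hypothetical WeightedItems table above:

-- Re-run this from the application until name comes back non-NULL, i.e.
-- until the draw lands on a real element instead of a placeholder.
SELECT w.id, w.weight, w.name
FROM WeightedItems AS w
JOIN (SELECT FLOOR(RAND() * MAX(id + weight)) AS pick FROM WeightedItems) AS p
  ON w.id <= p.pick
ORDER BY w.id DESC
LIMIT 1;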