
John Von Neumann

January 2, 2013

By chance, I have run across multiple references to John Von Neumann material over the last few weeks. Von Neumann’s was an astoundingly broad and vigorous intellect, and I have been intrigued by his life, creativity and contributions since first hearing about him. He is not one of the most famous 20th-century scientists, though he shows up in close proximity to nearly every major character and contribution you have heard about–computability, game theory, economics, quantum mechanics, the Manhattan Project… Amazing!

There is a 45-year-old documentary on YouTube that is fascinating for a number of reasons and gives a good overview of a few of Von Neumann’s contributions.

And don’t miss part 2 in which Paul Halmos says Johnny could have made a contribution if he had only applied himself…

Download (and read!) Von Neumann’s and Morgenstern’s classic work on game theory: Theory of Games and Economic Behavior.

Hat tips: Interesting post from Carson at Science Clearing House, MathJesus’ tweet of math history link on Von Neumann’s birthday.


Age visualization

November 11, 2012

At Visualized in NYC last week, one of the presenters (Sha Hwang) showed a visualization of his age.  I found this striking as a measure of one’s place in life and a lovely graphic as well. I decided to create a similar graphic for myself.

Age in months. The green line marks the median life expectancy of 78 years.

My Processing code is available on Github in case you want to make your own.
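If you would rather sketch the idea without Processing, here is a rough Python/matplotlib version of the same concept. It is not the code from the repository; the birth date, colors, and layout are placeholders.

#!/usr/bin/env python
# Rough sketch of an age-in-months grid, one cell per month.
# Not the Processing code from the repository; the birth date is a placeholder.
from datetime import date

import matplotlib.pyplot as plt

BIRTH = date(1980, 1, 1)          # placeholder birth date
LIFE_EXPECTANCY_YEARS = 78        # median life expectancy used in the post
MONTHS_PER_ROW = 36               # three years per row

today = date.today()
months_lived = (today.year - BIRTH.year) * 12 + (today.month - BIRTH.month)
total_months = LIFE_EXPECTANCY_YEARS * 12
rows = total_months // MONTHS_PER_ROW

fig, ax = plt.subplots(figsize=(8, 6))
for m in range(total_months):
    row, col = divmod(m, MONTHS_PER_ROW)
    color = "#e56d25" if m < months_lived else "#d0d0d0"   # lived vs. remaining
    ax.add_patch(plt.Rectangle((col, -row), 0.9, 0.9, color=color))

# Green line marking the 78-year point at the bottom of the grid.
ax.axhline(-rows, color="green")
ax.set_xlim(0, MONTHS_PER_ROW)
ax.set_ylim(-rows - 2, 2)
ax.axis("off")
plt.savefig("age_in_months.png")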


R, e.g.: Year-over-year comparisons with ggplot and facet_wrap

October 31, 2012

The following image appeared on the Gnip blog last week. It compares tweets containing “SXSW” since 2007. For comparing timing across the years it is useful to align the plots by year but let the y-scales float; otherwise it isn’t possible to see the features in the early years due to the growth of Twitter.

The tricks here are pretty straightforward:

  • Set scales=”free_y” in facet_wrap
  • Create a dummy year, doy (the one below is for 2000, a year not shown)
  • Label each year with a dummy scale using scale_x_date with custom breaks
  • Give metric abbreviated y-labels (see format_si function)
  • Add space between the plots as the date labels can be misleading when the plots have standard spacing (use panel.margin)

While the x-labels are correct, I don’t really like the look of how Jan 1 of the next year in each plot is hanging off to the right unlabeled.

#!/usr/bin/env Rscript
library(ggplot2)
library(stringr)
library(gridExtra)
library(scales)

args <- commandArgs(trailingOnly = TRUE)

format_si <- function(...) {
  # Format a vector of numeric values according
  # to the International System of Units.
  # http://en.wikipedia.org/wiki/SI_prefix
  #
  # Based on code by Ben Tupper
  # https://stat.ethz.ch/pipermail/r-help/2012-January/299804.html
  # Args:
  #   ...: Args passed to format()
  #
  # Returns:
  #   A function to format a vector of strings using
  #   SI prefix notation
  #
  # Usage:
  #   scale_y_continuous(labels=format_si()) +
  #
  function(x) {
    limits <- c(1e-24, 1e-21, 1e-18, 1e-15, 1e-12,
                1e-9,  1e-6,  1e-3,  1e0,   1e3,
                1e6,   1e9,   1e12,  1e15,  1e18,
                1e21,  1e24)
    prefix <- c("y",   "z",   "a",   "f",   "p",
                "n",   "µ",   "m",   " ",   "k",
                "M",   "G",   "T",   "P",   "E",
                "Z",   "Y")

    # Vector with array indices according to position in intervals
    i <- findInterval(abs(x), limits)

    # Set prefix to " " for very small values < 1e-24
    i <- ifelse(i==0, which(limits == 1e0), i)

    paste(format(round(x/limits[i], 1),
                 trim=TRUE, scientific=FALSE, ...),
          prefix[i])
  }
}

Y = read.delim(args[1], sep=",", header=TRUE)
Y$date <- as.POSIXct(Y$time)

png(filename = paste(sep="", args[1], ".png"), width = 550, height = 300, units = 'px')
  print(
    ggplot(data=Y) +
    geom_line(aes(date, count), color="#e56d25") +
    scale_y_continuous(labels=format_si()) +
    scale_x_datetime(limits=c(as.POSIXct("2007-01-01"), as.POSIXct("2012-09-01"))) +
    xlab("Date") +
    ylab("Tweets per Day") +
    ggtitle(args[2]) +
    theme( legend.position = 'none',
           panel.background = element_rect(fill = "#545454"),
           panel.grid.major = element_line(colour = "#757575"),
           panel.grid.minor = element_line(colour = "#757575")
        )
    )
dev.off()

##
# year over year comparison with facet wrap
#
# simulate dates in single year (2000 in this case),
# but give them yr factors for facet
# use custom formatting
#

Y$Yr <- as.factor(as.POSIXlt(Y$time)$year + 1900)
Y$Mn <- as.factor(1 + as.POSIXlt(Y$time)$mon)
Y$Dy <- as.factor(as.POSIXlt(Y$time)$mday)
# use dates for easier plotting
Y <- transform(Y, doy = as.Date(paste("2000", Y$Mn, Y$Dy, sep="/")))

png(filename = paste(sep="", args[1], ".year.png"), width = 550, height = 800, units = 'px')
  print(
    ggplot(data=Y) +
    geom_line(aes(doy, count), color="#e56d25") +
    facet_wrap( ~ Yr, ncol = 1, scales="free_y" ) +
    scale_y_continuous(labels=format_si()) +
    scale_x_date(labels=date_format("%b"), breaks = seq(min(Y$doy),max(Y$doy),"month")) +
    xlab("Date") + ylab("Tweets per Day") +
    labs( title = args[2] ) +
    theme( legend.position = 'none',
           panel.margin = unit(1.5, 'line'),
           strip.text.x = element_text(size=12, face="bold"),
           panel.background = element_rect(fill = "#545454"),
           panel.grid.major = element_line(colour = "#757575"),
           panel.grid.minor = element_line(colour = "#757575")
    )
  )
dev.off()

Decisions: data, bias and blame

October 28, 2012

This Strata (NY, 2012) talk caught my attention more than any other talk at the conference. Ms. Ravich made a request for developers to create better decision tools. (Did she mistake this group for a mythical Software Engineer/Game Theory conference?)

Ms. Ravich started with “I am not a big fan of the information revolution.” That’s a gutsy start given the crowd, but fortunately we were all drowsy and no one reacted. Technically, she was one of the best speakers–she spoke clearly and slowly, her argument was logically organized, she told a good story, and she used a powerful myth as a supporting metaphor for her point.

The form of the request was shaped by the idea of fast and slow thinking. Fast thinking at its best synthesizes and sorts quickly. You need fast thinking to sort out what to think slowly about. Then she delivered a couple of assertions. “I think strategic decision makers are in real danger of the information revolution swamping our ability to do fast thinking. And that’s the very attribute we need to do to make the hard policy choices.”

What does “information revolution” mean? Apparently it is a movement or -ism or evolution or situation that can change basic human psychology and erode the ability to do fast thinking. And what is the case for more fast thinking in policy making? Heuristics for decision making are so natural we barely realize we are using them. They are great because they are fast and we feel certain about them. They can also create huge liabilities when used to make decisions about long-term policy. That feeling of certainty is associated with confirmation bias, attention bias, willful framing naivete, unconscious anchoring biases, …

Ravich goes on to explain the assertions above with an example from the Bush (43) administration dealing with the challenges of nation building in Afghanistan. Afghanistan was growing a lot (most) of the world’s opium poppies. I am sure this caused many economic, border, organized-crime, monetary, etc. problems. But Ms. Ravich’s explanation for why this was bad was that it offended our national pride. So, we decided to destroy the poppies. This did not endear us to the farmers, nor did it stop them from growing poppies.

Ravich explains that the poor process of making the decision was due to the inability of decision makers to “rack and stack the importance of each bit of information to see how it aligned with our goal.”

Following this explanation was the request: “If strategic decision makers in the situation room are going to win the information revolution, developers need a better insight into the thought process of how the policy decision makers reason and think, how we assemble and prioritize information.”

I am afraid I heard something a little like this… Look, we are good at making gut decisions. We can make them fast. We feel and act confidently about them. But you guys didn’t make the proper context for our heuristics and biases so they didn’t reflect reality. Do better next time.

On one hand, fair enough. That’s the job I signed up for. But it also seems there is room here for more responsible accounting for biases on the part of the decision makers? And that sometimes means wading through boring data and trying to understand something you don’t already understand.


Python JSON or C++ JSON Parsing

October 27, 2012

At Gnip, we parse about half a billion JSON activities from our firehoses of social media every day. Until recently, I believed that the time I would save parsing social activities with a C++ command-line tool would more than justify the additional time it takes to develop in C++. This turns out to be wrong.

Comparing the native JSON parser in Python 2.7 and the UltraJSON parser to a C++ implementation linked against jsoncpp indicates that UltraJSON is by far the best choice, achieving about twice the parsing rate of the C++ version for Gnip’s normalized JSON Activity Stream format. UltraJSON parsed Twitter activities at nearly 20 MB/second.


Plot of elapsed time to parse increasingly large JSON files.  (Lower numbers are better.)

Additional details, scripts, data and code are available on github.
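For reference, here is a minimal sketch of the Python side of this kind of comparison. It is not the benchmark code from the repository; the input filename is illustrative, and the ujson package must be installed.

#!/usr/bin/env python
# Minimal sketch of timing json vs. ujson on newline-delimited JSON activities.
# Not the benchmark code from the repository; the filename is illustrative only.
import json
import time

import ujson  # UltraJSON


def time_parser(loads, lines):
    # Parse every line with the supplied loads() function; return elapsed seconds.
    start = time.time()
    for line in lines:
        loads(line)
    return time.time() - start


if __name__ == "__main__":
    with open("activities.json") as f:  # hypothetical input file
        lines = f.readlines()
    for name, loads in (("json", json.loads), ("ujson", ujson.loads)):
        print("%-6s %.2f s" % (name, time_parser(loads, lines)))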

Dp-means: Optimizing to get the number of clusters

July 19, 2012

In my last post I compared dp-means and k-means error functions and run times.  John Myles White pointed to some opportunities that come from \lambda being a continuous variable.

Evolving the test code I posted on github, I developed a quick-and-dirty proof of concept.

First, below is the parameter vs. error graph in its latest incarnation.  There are a few important changes from the analogous graph in the last post:

  • Instead of using the k-means cost function for both the timing and error comparisons as I did before, I am now plotting the traditional k-means cost function for k-means and, for dp-means, the cost function

\text{Cost(K-means)} + \lambda k

  • I am not plotting against \text{data range}/\lambda for comparison
  • I am plotting errors for a data set not used in training (called cross-validation in the code).

The cost function for dp-means shows a clear minimum. This graph is slightly confusing because the parameter for k-means, k, the number of clusters, increases left-to-right, while the number of clusters in dp-means goes down with increasing parameter \lambda.

I wrote a small script that leverages SciPy to optimize the dp-means cost function in order to determine the optimal value of \lambda, and therefore the number of clusters.
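The actual script is in the repository linked above; the sketch below only illustrates the approach. It fits a bare-bones dp-means at a given \lambda (using the squared-distance convention from the Kulis and Jordan paper), evaluates the cost function above on the training data (the post evaluates error on held-out data), and hands that objective to SciPy’s bounded scalar minimizer. The synthetic data and all names are illustrative.

#!/usr/bin/env python
# A minimal, self-contained sketch of the idea, not the script from the
# repository: fit a simple dp-means at a given lambda, evaluate the dp-means
# cost (k-means error plus lambda times the number of clusters), and let
# SciPy search for the lambda that minimizes it.
import numpy as np
from scipy.optimize import minimize_scalar


def dp_means(data, lam, max_iter=100, tol=1e-6):
    # Simple dp-means: open a new cluster when the squared distance to every
    # existing center exceeds lambda (the Kulis/Jordan convention).
    centers = [data.mean(axis=0)]
    assignments = np.zeros(len(data), dtype=int)
    for _ in range(max_iter):
        for i, x in enumerate(data):
            d2 = [np.sum((x - c) ** 2) for c in centers]
            j = int(np.argmin(d2))
            if d2[j] > lam:
                centers.append(x.copy())
                j = len(centers) - 1
            assignments[i] = j
        new_centers = []
        for j in range(len(centers)):
            members = data[assignments == j]
            new_centers.append(members.mean(axis=0) if len(members) else centers[j])
        shift = max(np.sum((a - b) ** 2) for a, b in zip(centers, new_centers))
        centers = new_centers
        if shift < tol:
            break
    return np.array(centers), assignments


def dp_means_cost(lam, data):
    # dp-means objective: total squared error plus a lambda penalty per cluster.
    centers, assignments = dp_means(data, lam)
    sq_err = sum(np.sum((data[assignments == j] - c) ** 2)
                 for j, c in enumerate(centers))
    return sq_err + lam * len(centers)


if __name__ == "__main__":
    rng = np.random.RandomState(0)
    data = np.vstack([rng.normal(m, 0.5, size=(50, 2)) for m in (0, 5, 10)])
    result = minimize_scalar(dp_means_cost, bounds=(0.5, 50.0),
                             args=(data,), method="bounded")
    print("lambda: %.3f  cost: %.3f" % (result.x, result.fun))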

Here is an example on one of the data sets included in the example “input” directory.  This code runs slowly, but converges to a minimum at,

lambda: 5.488
with error: 14.2624

Here is a sample training run at the optimal value with only the data as input (the code determines everything it needs from the data).

Figure shows training iterations for centers, training data membership, and cross-validation data membership.

The code is rough and inefficient, but the method seems robust enough to proceed to work on smoothing things out and run more tests. Neat.

Comparing k-means and dp-means clustering

July 6, 2012

A recently published paper explains a Bayesian approach to clustering. Revisiting k-means: New Algorithms via Bayesian Nonparametrics motivates and explores the idea of using a scale parameter \lambda to control the creation of new clusters during clustering, rather than requiring the Data Scientist to set the number of clusters, k, as in k-means.  John Myles White coded this in R and shows some example clusterings with varying \lambda, but doesn’t dig into quantitative comparisons. (BTW, subscribe to his blog.)

After looking at John’s plots, you may ask if there is any better motivation for choosing a scale parameter than the number of clusters–both seem ad hoc and to require experienced judgement to get the “best” result.  (I get fidgety when people say they just used k-means and it worked great–k-means always gives an answer, so “success” in the simple sense doesn’t mean much.)

In what ways could dp-means be an improvement over k-means?

  • Parameter choice.  The scale of the upper and lower bounds can be calculated from the data.  In general, we can bound \lambda at the high end with the range of the sample data and at the low end with some measure of the nearness of the nearest data points, or possibly the smallest expected cluster size (a rough version of this calculation is sketched after this list).
  • Time cost to minimize error.  The time cost of k-means is approximately linear in k, and at first glance the time-scaling of dp-means with the number of clusters (not with \lambda) appears to be linear as well (with dp-means, smaller \lambda corresponds to more clusters), but it is not clear whether this is better or worse than k-means in practical cases.
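Here is the rough bound calculation mentioned in the first bullet, as a sketch (illustrative only, not taken from the posted code):

import numpy as np
from scipy.spatial.distance import pdist


def lambda_bounds(data):
    # Upper bound: the overall spread (range) of the sample data.
    upper = np.linalg.norm(data.max(axis=0) - data.min(axis=0))
    # Lower bound: the distance between the two nearest data points.
    # (pdist computes all pairwise distances, so this is O(n^2) -- fine for a sketch.)
    lower = float(pdist(data).min())
    return lower, upper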

There are no proofs here, just numerical exploration and intuition building.  I coded dp-means in Python in a way that keeps as much code as possible common between k-means and dp-means and produces lots of diagnostics.  These implementations aren’t optimized.  The code and examples shown here are available on github.

First, a 2-d version with three nicely separated clusters.  Here’s the original data,

A couple of the clusters are spread along one dimension to make it a little more interesting. Here’s an example of training dp-means:

And the mean-squared error per sample point:

In my versions of k-means and dp-means, the algorithm stops when the change in error between iterations drops below a pre-defined tolerance. Download the code and play with training dp-means with a varying scale parameter to see how decreasing \lambda increases the number of clusters.

Ok, let’s get on with the comparison…in this example, we have 3 features and 5 overlapping clusters.

The error vs. parameter plots for dp-means and k-means are shown below.

The parameter for k-means is the number of clusters, while the parameter I am plotting for dp-means is 1/\lambda (approximately the reciprocal of the minimum cluster size), so they cannot really be compared directly. However, in this graph it is easy to see where the number of clusters found by dp-means matches the number found by k-means.

For dp-means, there are no changes in the error if we set the parameter \lambda > \sqrt{3}(x_{max}-x_{min}), because no cluster will be larger than a cube containing the data.  Both k-means and dp-means continue to improve with additional clusters until each sample point is at the center of its own cluster.

The classic “elbow” at  k = 3  indicates separated clusters found by k-means.  Around 1 / \lambda \approx 3 there is a “lap” (continuing with the body metaphors) for dp-means.  Is this a reliable heuristic for training dp-means?  It results in 11 clusters.

The dp-means algorithm achieves 4 clusters (fairly consistently) around  \lambda \approx 1.7.

With respect to parameter choice, dp-means may come out slightly ahead, but it may be a matter of taste.

How about the time cost of lower error? In both algorithms, we have to search the space of parameter values and cluster centers for the minimum error. This means running the algorithm repeatedly, reaching a range of local minima across runs, and choosing the lowest value as our best fit.  So one way to look at the time cost of each algorithm is to compare the minimum error achieved for a range of total search iterations.
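To make that search strategy concrete, here is a tiny sketch of the restart loop (the fit function is a stand-in for a single k-means or dp-means run; none of this is from the posted code):

import numpy as np


def best_of_n(fit, data, n_restarts=10, seed=0):
    # Run a clustering fit several times from random starts and keep the
    # lowest-error result. fit(data, rng) stands in for one k-means or
    # dp-means run and is assumed to return (centers, error).
    rng = np.random.RandomState(seed)
    best_centers, best_error = None, np.inf
    for _ in range(n_restarts):
        centers, error = fit(data, rng)
        if error < best_error:
            best_centers, best_error = centers, error
    return best_centers, best_error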

The graphs below plot  \log(error) vs.  \log(time) for dp-means and k-means for 4, 10, and 20 search iterations.  Dp-means is slightly more efficient in each case, but only slightly.

Conclusion. I will continue to explore dp-means because of the parameter advantages, but the time advantages seem negligible.  What do you think?
