Brainstorming Company Names Revisited

I’ve been gone for a bit, working on a research paper and then attending conferences, but now it is time to get back to business.

This experiment is an extension of the previous blog post about brainstorming company names. In that post, it seemed like iteration wasn’t making a difference, except to encourage fewer responses. This time, we decided to require that each worker contribute the maximum number of responses. We also reduced that maximum from 10 to 5, since forcing people to come up with 10 names felt daunting. This also reduced the number of names we needed to rate, which is the most expensive part of this experiment.

Finally, we decided to show all the names suggested so far in the iterative condition. Previously, we showed only the best 10 names, but this required rating the names, which seemed bad for a number of reasons. Most notably, it seemed like an awkward blend of using the ratings both as part of the iterative process, and also as the evaluation metric between the iterative and non-iterative (or parallel) conditions.

The new iterative HIT looks like this:

Example iterative HIT

The parallel version doesn’t have the “Names suggested so far:” section.

We also changed the rating scale from 1-5 to 1-10, since 1-10 felt more intuitive and provided a bit more granularity. It would be nice to run experiments concentrating on rating scales to verify that this was a good choice (anyone?). Here is the new scale:

Rating scale from 1 to 10

We brainstormed names for 6 new fake companies (we had 4 in the previous study). You can read the descriptions for each company in the “Raw Results” link below.


Raw Results

Average rating of names in each iteration of iterative processes.

This graph shows the average rating of names generated in each iteration of the iterative processes (blue), along with the average rating of all names generated in the parallel processes (red). Error bars show standard error.
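Those error bars can be recomputed from the raw ratings. Here is a minimal sketch of the standard-error calculation; the rating values below are invented for illustration, not taken from the actual data:

```python
from math import sqrt
from statistics import stdev

def standard_error(ratings):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return stdev(ratings) / sqrt(len(ratings))

# Hypothetical ratings for one iteration:
ratings = [1, 2, 3, 4, 5]
print(round(standard_error(ratings), 4))  # prints "0.7071"
```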


Names generated in the iterative processes averaged 6.38 compared with 6.23 in the parallel process. This is not quite significant (two-sample t(357) = 1.56, p = 0.12). However, it does appear that iteration is having an effect. Names generated in the last two iterations of the iterative processes averaged 6.57, which is significantly greater than the parallel process (two-sample t(237) = 2.48, p = 0.014) — at least in the statistical sense; the actual difference is relatively small: 0.34.
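The comparisons above use two-sample t tests. As a rough sketch of that calculation (the pooled, equal-variance form; the ratings below are made up for illustration, not drawn from the study):

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled two-sample t statistic (equal-variance form),
    with degrees of freedom = len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    se = sqrt(pooled * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se, na + nb - 2

# Hypothetical ratings for the two conditions:
iterative = [7, 8, 9, 7, 8]
parallel = [5, 6, 5, 6, 5]
t, df = two_sample_t(iterative, parallel)
print(round(t, 2), df)  # prints "5.37 8"
```

The p-value then comes from the t distribution with that many degrees of freedom (e.g., via a statistics package).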

There is also the issue of iteration 4. Why is it so low? This appears to be a coincidence—3 of the contributions in this iteration were considerably below average. Two of these contributions were made by the same turker (for different companies). A number of their suggestions appear to have been marked down for being grammatically awkward: “How to Work Computer”, and “Shop Headphone”. The other turker suggested names that could be considered offensive: “the galloping coed” and “stick a fork in me”.

These results suggest that iteration may be good after all, in some cases, if we do it right, maybe. Naturally, we will continue to investigate. We have already run a couple of studies with results similar to this one, suggesting that iteration does have an effect. After posting those studies on the blog (soon), the hope is to start studying more complicated iterative tasks.




  • Ben Dalton says:

    The rating phase feels to me like it’s not picking the most usable company names, just the clearest. I see more intriguing or unique and inventive names in the parallel process (possibly where people aren’t being influenced by the normality of previous suggestions). If those names could be pulled out of the list somehow (a memory test? or rating on funny/unique/etc.?), that might generate the most useful company name?

  • glittle says:

    That is an interesting observation. I like the idea of a different test for rating names. I especially like the idea of a memory test. In particular, I wonder how well people can predict which are the most memorable names (since we could verify their predictions).

  • I like the idea of reducing the number of names each worker can contribute. This likely prompts workers to think more carefully and more creatively when choosing a name, which benefits the end result.

  • Hmm, I agree with Ben here: the names that got the highest rankings were those with a clearer meaning, not the ones most interesting to the human mind. The memory test sounds like a great idea; I’ve seen a few HITs like that on MTurk, and it might help get better results.

  • Hey, you are joking, right? You mean to tell me there’s actual scientific research on choosing company names? I know it’s important to choose a good name, but turning it into a science? Don’t let me judge, though; you could have something really good here.

    Thanks and God bless

  • If anyone really cares about having a great company name, wouldn’t you want to do everything possible to make sure you had the best?