With this exhibition, we designed the evaluation interface so that individuals would consider each work of photography on its own – we did not expect people to finish, and we discussed this aspect at length with our consultants. We were following the principles laid out in the book The Wisdom of Crowds, and we purposely designed the interface to reflect those theories. The stats are published for your own interpretation and you may disagree, but the exhibition stands as it is.
I do want to thank you for posting – the blog is a great place for a discussion like this.
The process of selection was painstaking and even if it was slightly flawed, there is NO reason to denigrate the Museum, Shelley or anyone involved. Name me another museum anywhere in the world where this would even be considered.
And always remember, this museum has 1.5 million holdings, one of the greatest collections of Egyptian Art in the world…repeat in the world, a Luce Study Center for American Art – there are only 4 in the entire country.
My resume, which centers on business analysis and technical writing, will be updated tomorrow.
Thank you Brooklyn Museum for the honor of giving me 5 x 7 inches of space on your fabulous walls.
Some of the “Facts/Statistics” offered are inaccurate and/or of little value.
For example: “Each of the 389 images was seen approximately 1,054 times”. That is false. It is a useless statistic, obtained by merely dividing the total number of evaluations by the number of entries – and it therefore serves as misleading, invalid pseudo-data.
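For clarity, here is the arithmetic that figure appears to rest on, using the totals published in the “Facts” section (my own back-of-the-envelope check, not the Museum’s stated method):

```python
# Back-of-the-envelope check of where "approximately 1,054 times" appears to come from,
# using the totals published in the "Facts" section.
total_evaluations = 410_089      # evaluations cast by all evaluators
total_works = 389                # works submitted

average_views_per_work = total_evaluations / total_works
print(round(average_views_per_work, 1))   # ~1054.2, an average, not a per-image count
```

An average like this says nothing about how evenly or unevenly the views were actually spread across the 389 works.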
I am very disappointed that the Brooklyn Museum has bought into the whole “crowd self-correction” idea as if it were a tested and verified concept. It is not.
Thanks for writing. One of the things we really wanted to do with this show is be as transparent as possible, so much of the data is there for you to analyze and interpret as you wish. You bring up an interesting point here, and it is one that our consultants actually talked to us about at length. We were cautioned to think about this process not as individuals seeing everything, but rather that the “crowd” was curating and the “crowd” would self-correct over time. As long as we could get enough people to participate, there would be enough sets of eyes seeing photos at differing times (owing to the randomization) to allow for the crowd’s self-correction. (This notion of self-correction is something that James Surowiecki brings up in The Wisdom of Crowds quite a bit.)

The consultants explained that this very idea was the difference between an individual curating and a crowd curating, and we designed the entire process around it. For instance, we didn’t offer a way for evaluators to return to works to change their responses, and we didn’t invalidate data if an entire evaluation wasn’t completed. But I’m no expert :) James Surowiecki is going to be writing about the data for the blog a bit later in the run. I don’t know if he’ll cover either of these issues, but I’m pretty interested in what he has to say.
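If it helps to picture how the randomization spreads views around, here is a rough sketch of that kind of setup – an illustration only, not the code we actually ran:

```python
import random

# Toy illustration of randomized presentation: each evaluator sees the works in
# their own random order and may stop whenever they like, so coverage of any one
# work comes from the crowd as a whole rather than from any single person.
WORKS = list(range(1, 390))                      # 389 submitted works

def evaluation_session(works):
    order = random.sample(works, len(works))     # a fresh random order per evaluator
    stop = random.randint(1, len(order))         # the evaluator quits at some point
    return order[:stop]                          # the works this evaluator rated

views = {w: 0 for w in WORKS}
for _ in range(3344):                            # roughly the number of evaluators
    for w in evaluation_session(WORKS):
        views[w] += 1

print(min(views.values()), max(views.values()))  # every work still gathers many views
```

Because the order differs for every evaluator, even people who stop early contribute views that land on different works, which is what the consultants meant by the crowd covering the field over time.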
In the “Facts” section it is reported that “on average, each evaluator looked at 135 works”. That means that, on average, each evaluator left more than half of the contributions unevaluated.
In my opinion, the results were tainted. The playing field was not level for every contributor.
There were “3,344 evaluators (who) cast 410,089 evaluations”. Out of the 3,344 evaluators, only “575 people evaluated all 389 of the submitted works, completing the evaluation.”
The bottom line is that the bulk of the entries never had a chance and the efforts of those photographers were wasted and exploited for the sake of statistics and numbers.
I would like to know what the results would have been if you had only looked at the evaluations of the 575 people who evaluated all 389 entries.
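Since the data is published, that comparison should be straightforward to run. Something along these lines would do it, assuming the evaluations can be exported with an evaluator ID, a work ID, and a score (my guesses at the column names, not the Museum’s actual export format):

```python
import csv
from collections import defaultdict

# Sketch of the comparison I am asking for: rank the works using only the
# evaluators who rated all 389 entries. Column names and file name are
# hypothetical placeholders for whatever the real export looks like.
TOTAL_WORKS = 389

rows = []
works_seen = defaultdict(set)
with open("evaluations.csv", newline="") as f:
    for row in csv.DictReader(f):
        rows.append(row)
        works_seen[row["evaluator_id"]].add(row["work_id"])

# Evaluators who completed the whole evaluation (the "Facts" say there were 575).
completers = {e for e, seen in works_seen.items() if len(seen) == TOTAL_WORKS}

scores = defaultdict(list)
for row in rows:
    if row["evaluator_id"] in completers:
        scores[row["work_id"]].append(float(row["score"]))

ranking = sorted(scores, key=lambda w: sum(scores[w]) / len(scores[w]), reverse=True)
print(len(completers), "complete evaluators")
print("Top 10 works by completers only:", ranking[:10])
```

Comparing that ranking against the one on the walls would show how much the partial evaluations actually changed the outcome.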