What does it take to Photosynth the Brooklyn Museum?

After seeing this demonstration video, we couldn't wait to answer this question. Before I get too deep into it, it's important to keep in mind that Photosynth is a new application; Microsoft released it in the early stages so anyone could work with it while development continues. I certainly appreciate that, so here are the lessons we've learned putting it into production.

The idea was simple: we queried three of our Flickr peeps (Amy Dreher, Trish Mayo, Stephen Sandoval) to see if Photosynth interested them as much as it did us. Turns out they were game and couldn't wait to get their hands on it, so we started planning to shoot at Target First Saturday. Before the date, we did some early trials with existing photography; all were shots of the outside of the building, taken at different times of day and in different periods of our history. Photosynth couldn't make heads or tails of it, and I can hardly blame it; we were not trying very hard at this point either :) It was clear Photosynth was going to demand new photography, and a lot of it, so we made a plan. You can see our early trials here and here.

Oh wait, can you see those trials? No? Well, you are not alone. This was our biggest challenge with this application and the most important thing to talk about at this stage of the game. In order to view or make a synth, you have to download a client, and that client only runs on a PC. Supposedly the client will run in Firefox, but I could only get it to run in Internet Explorer (not something I ever like to use). Installing was a bear: you have to be an admin to do it, and it's a typical Microsoft installer package (i.e., heavy-handed). So how much of a pain is this really? Well, it turned into a bit of a circus for us. Amy and Trish are both on Macs (so am I). Stephen had a PC, so he was testing on his own account and could see progress when we were testing at work, but Amy and Trish couldn't. Amy came over one day to check it out on a Brooklyn Museum computer, but other than that it was impossible to do more testing. To top it off, no one at the Brooklyn Museum has admin access to their computers, so they can't even see the result of this work! Inside folks reading this: don't call helpdesk; we shot some video to help you out. It's hard to explain how silly I find it that I had to shoot this lo-fi video to show this off, but so be it. You do what you have to in order to make content accessible, and I'll get off my high horse for now; Photosynth is early in the dev cycle, so fair enough.

Video and more after the jump (this is going to get long, sorry folks)…

So, if you have the client installed, you can get to our Brooklyn Museum profile, where we've been playing, and Stephen has a couple of his Brooklyn Museum synths on his own profile. I'm going to look at two of these synths in this post.

Photosynth the Brooklyn Museum from Brooklyn Museum on Vimeo.

The first is the synth from Trish Mayo's photos. It's 100% synthy and contains 17 photographs. It's beautiful, and because there are fewer photographs it loads quickly and sharpens quickly. This was shot from one vantage point on the plaza. The second synth combines the photography Amy Dreher was shooting with Trish's shots. Amy took hundreds of captures from multiple vantage points, so you can really move around the plaza. I combined them with Trish's shots, and you can see layers of different people coming and going, which is cool. At 407 photographs, it takes a while to load, and each image takes a bit longer than I'd expect to sharpen. The two synths make an interesting contrast.

But all this has us thinking. Here are a few thoughts from Amy: How is this better or worse than a normal video cam? Could this be used as a team tool? Could this work as a community project (like the spring video)? How do they maximize the qualities that make this different and not just imitate what already exists? Although I'm glad it worked, I'm wondering if it's less exciting because most of the photos (mine) were shot mechanically.

Our experience with Photosynth indicated that it needed structured photography (see the how-to guide for an idea), and this does mean a lot of the spontaneity is gone. It could be an awesome tool if used in a collaborative way (as the demo of Notre Dame suggests), but with the current structure that is really difficult. Our original idea was to create an account so each photographer could upload their own photos into one synth, but that got canned when we found out you couldn't add photos to an existing synth. It's a one-shot upload for now, which makes collaborative processing difficult. So given our experience, I think these are the things we'd like to see:

1. A more universal player, so more people can view the synths. If it has to be its own thing, it would be great if it worked in most browsers; a light-weight plug-in that non-admin users could install would be optimal.

2. A more universal client. If the heavy-weight client is needed to process uploads, working cross-platform would be good.

3. A way to add photos to existing synths and a versioning system so you could easily see what works and what doesn’t without having to re-upload.

4. A way to work collaboratively with others within the Photosynth site. If #3 can be solved, could you have Photosynth groups where people could join and submit their photos for collaborative synths? Of course, #1 and #2 would have to be solved to make this a viable and accessible option.

5. A way to work with historical materials and make them synthy (even if it means allowing manual mapping). One of the really interesting things this could provide is a way to see how a building changes over time. Just like Graffiti Archeology has been doing, what if you could do the same sort of thing within Photosynth to show layers of a gallery changing, or changes to our facade over time, as you walked through a space? If you were really dedicated and had a lot of photography at your disposal, this could probably be done within the existing structure, but manual mapping could help connect the dots when the automated process can't make heads or tails of something (the sketch after this list illustrates why that happens). We've seen our Flickr friends trying to synth some of our material in The Commons, but without much success (see example 1 or example 2). Sometimes a manual connect-the-dots can help piece together the puzzle, like in this example created by one of our Flickr friends from the same source material.
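For anyone curious why the automation chokes on historical shots, here's a minimal sketch of the kind of feature matching that photo-stitching tools like Photosynth depend on. To be clear, this is not Photosynth's actual code or pipeline; it uses OpenCV's SIFT detector as a rough stand-in, and the filenames are made up for illustration. The point is the match count: two photos of the same facade taken decades apart usually yield very few confident matches, which is exactly the gap a manual connect-the-dots pass could fill.

```python
import cv2

# Two photos of the same facade, decades apart (hypothetical filenames, for illustration only).
old_photo = cv2.imread("facade_1920s.jpg", cv2.IMREAD_GRAYSCALE)
new_photo = cv2.imread("facade_2008.jpg", cv2.IMREAD_GRAYSCALE)

# Detect local features in each image. Photosynth has its own pipeline;
# SIFT is a comparable, widely available stand-in.
sift = cv2.SIFT_create()
kp_old, desc_old = sift.detectAndCompute(old_photo, None)
kp_new, desc_new = sift.detectAndCompute(new_photo, None)

# Match features between the two images and keep only the distinctive matches
# (Lowe's ratio test). Film grain, lighting, and changes to the building itself
# all erode the local texture these matches depend on.
matcher = cv2.BFMatcher()
candidates = matcher.knnMatch(desc_old, desc_new, k=2)
good = [pair[0] for pair in candidates
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

print(f"{len(good)} confident matches out of {len(candidates)} candidate pairs")
# A very low count means an automated stitcher has nothing to hang the two
# photos together with; a handful of manually marked correspondences could
# supply the links it can't find on its own.
```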

A lot of this is already under discussion in the forums, so I'm optimistic that it's only a matter of time. It was a fun experiment, and we're happy to have been able to take it for a spin and learn from it in these early stages. Bonus: we always have a great time whenever Trish, Amy and Stephen come by; many thanks for helping us out with this project and providing the wonderful shots to make it happen!