Measuring Success
Wed, 19 Aug 2015

We all struggle with how to measure success. We’re thinking a lot about this right now as we begin to put the pieces together from what we’ve learned over the last ten weeks since ASK went on the floor. Three components help us determine the health of ASK: engagement goals, use rates, and (eventually) institutional knowledge gained from the incoming data.

When we look at engagement goals, Sara and I are really going for a gold standard. If someone gets a question asked and answered, is satisfied, and the conversation ends, that’s great, but we’ve already seen much deeper engagement with users and that’s what we’re shooting for. Our set of metrics can show us whether those deeper exchanges are happening. Our engagement goals include:

  • Does the conversation encourage people to look more closely at works of art?
  • Is the engagement personal and conversational?
  • Does the conversation offer visitors a deeper understanding of the works on view?
  • How thoroughly is the app used during someone’s visit?

We’re doing pretty well when it comes to engagement. We regularly audit chats to ensure that the conversation is leading people to look at art and that it has a conversational tone and feels personal. The ASK team is also constantly learning more about the collection and thinking about, experimenting with, and learning what kinds of information and conversation via the app open the door for deeper engagement with and understanding of the works. In September, we’ll begin the process of curatorial review of the content, too, which will add another series of checks and balances to ensure we hit this mark of quality control.

Right now the metrics show us conversations are fairly deep: 13 messages on average through this soft launch period (from June 10 to the date of this post). The team is getting a feel for how much the app is used throughout a person’s visit; they’ve been having conversations spanning multiple exhibitions over the course of hours (likely an entire visit). Soon we’ll be adding a metric that pairs the use rate with the average number of exhibitions per conversation, so we’ll be able to quantify this more fully. Of course, there are plenty of conversations that don’t go nearly as deep and don’t meet the goals above (we’ll be reporting more about this as we go), but we are pretty confident in saying the engagement level is on the higher end of the “success” matrix. The key to this success has been the ASK team, who’ve worked long and hard to study our collection and refine interaction with the public through the app.

Use rate is on the lower end of the matrix and this is where our focus is right now. We define our use rate as the share of our visitors who are actually using the app to ask questions. From our mobile use survey results, we know that 89% of visitors have a smartphone, and we know from web analytics that 83% of our mobile traffic comes from iOS devices. So, we’ve roughly determined that, overall, 74% of the visitors coming through the doors have iOS devices and are therefore potential users. To get our use rate, we take 74% of attendance (eligible iOS-device-wielding visitors) and compare that against the number of conversations we see in the app, giving us a percentage of overall use.
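
To make that arithmetic concrete, here’s a minimal sketch of the calculation in Python. The smartphone and iOS percentages are the figures above; the attendance and conversation counts are hypothetical placeholders, not our actual numbers.

```python
# Use-rate calculation described above; attendance and conversation counts
# below are hypothetical placeholders, not actual figures.
SMARTPHONE_RATE = 0.89  # visitors with a smartphone (mobile use survey)
IOS_SHARE = 0.83        # share of our mobile web traffic from iOS devices

def use_rate(attendance: int, conversations: int) -> float:
    """Conversations as a percentage of visitors who could plausibly use the app."""
    eligible = attendance * SMARTPHONE_RATE * IOS_SHARE  # roughly 74% of attendance
    return conversations / eligible * 100

# Example: 5,000 visitors and 50 conversations works out to about 1.35%.
print(f"{use_rate(5000, 50):.2f}%")
```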

Use rate during soft launch has been bouncing around a bit, from 0.90% to 1.96%, mostly averaging in the lower 1% range. All kinds of things affect this number: the placement of the team, how consistently the front desk staff pitches the app as a first point of contact, the total number of visitors in the building, and the effectiveness of messaging. As we continue to test and refine, the numbers shift accordingly, and we won’t really know our use rate until we “launch” in fall with messaging throughout the building, a home for our ASK team, and a fully tested process for the front desk pitch and greeting.

Our actual download rate doesn’t mean much, especially given the app only works in the building. Instead, the “use rate” is the key metric, defined as actual conversations compared to iPhone-wielding visitors. The one thing the download stats do show us is that the pattern of downloads runs in direct parallel with our open hours. Mondays and Tuesdays are the valleys in this chart, and that’s also when we are closed to the public; Saturdays and Sundays are the peaks, when attendance is higher.

Still, even with these things in flux, our use rate is concerning because one trend we are seeing is a very low conversion on special exhibition traffic. As it stands, ASK is being used mostly by people who are in our permanent collection galleries. Don’t get me wrong, this is EXCELLENT: we’ve worked for years on various projects (comment kiosks, mobile tagging, QR codes, etc.) that would activate our permanent collections; none has seen this kind of use rate and/or depth of interaction. However, the clear trend is that ASK is not being taken advantage of in our special exhibitions, and that is where our traffic resides. We are starting by getting effective messaging up more prominently in these areas. Once we get the visibility up, we’ll start testing assumptions about audience behavior. It may be that special exhibition visitors are here to see exactly what they came for, with little desire for distraction; if ASK isn’t on the agenda, it may be an uphill battle to convert this group of users. Working on this bit is tricky, and it will likely be a few exhibition cycles before we can see trends, test, and (hopefully) better convert this traffic to ASK.

There’s a balance to be found between ensuring visibility is up so people know the app is available (something we don’t yet have) and respecting the audience’s decision about whether to use it. Another thing we are keeping in mind is that the ASK team is in the galleries answering questions in person; this may or may not convert into app use, but having this staff accessible is important, and it’s an experience we can offer because of this project. Simply put, converting traffic directly may not be an end goal if the project is working in other ways.

The last bit of determining success, institutional knowledge gained from the incoming data, is something that we can’t quantify just yet. We do know that during the soft launch period the larger conversations have been broken down into 1,241 snippets of reusable content (in the form of questions and answers), all tagged with object identification. Snippets are integrated back into the dashboard so the ASK team has previous question/answer pairings at their fingertips when looking at objects. Snippets also tell us which objects are getting asked about and what people are asking, and they will likely be used for content integration in later years of the project. The big step for us will come in September, when we send snippet reports to curatorial so this content can be reviewed. We hope these reports and meetings help us continue to train the ASK team, work on quality control as a dynamic process, and learn from the incoming engagement we are seeing.
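
As a rough illustration of how a snippet might be structured as reusable data, here’s a minimal sketch; the field names and lookup are assumptions for illustration, not the actual dashboard schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Snippet:
    """One reusable question/answer pair pulled from a longer ASK conversation.

    Field names here are illustrative assumptions, not the real dashboard schema.
    """
    question: str
    answer: str
    object_ids: List[str] = field(default_factory=list)  # accession numbers discussed

def snippets_for_object(snippets: List[Snippet], accession_number: str) -> List[Snippet]:
    """Surface previous Q&A pairings for an object, the way the dashboard does for the ASK team."""
    return [s for s in snippets if accession_number in s.object_ids]
```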

Is ASK successful?  We’re leaving you with the picture that we have right now. We’re pretty happy with the overall depth of engagement, but we believe we need to increase use. It will be a while before we can quantify the institutional knowledge bit, so measuring the overall success of ASK is going to be an ongoing dialog. One thing we do know is the success of the project has nothing to do with the download rate.

Local Matters
Thu, 25 Sep 2014

If you’ve been reading the blog lately you know we’ve been taking stock of our digital efforts and making considerable changes. I’ve been discussing what’s not working, but it’s also worth reporting on the trends we’ve been seeing and some of the new directions we are headed as a result. Did you know that our most engaged users on the web are locals? Over many years of projects, metrics have been showing us that the closer someone is physically to the Museum, the more likely they are to be invested with us digitally.

In 2008, we saw this with Click! A Crowd-Curated Exhibition: 3,344 people cast 410,089 evaluations using a web-based activity that would determine a resulting exhibition. Participants in more than 40 countries took part in the activity, but 64.5% were local to the extended tri-state area around New York. A deeper look shows the bulk of the participation came from a local audience: 74.1% of the evaluations were cast by those in the tri-state area, with 45.7% cast by those within Brooklyn. At the time, we figured this was because of the thematic nature of the exhibition, which depicted the “changing faces of Brooklyn.”

Google Analytics (along with zip code metrics) showed the majority of participation in Click! and Split Second was coming from local sources.

In 2011, we launched another web-driven project to produce an exhibition. Split Second: Indian Paintings began with an online activity that analyzed people’s split-second reactions to our collection of Indian paintings. The resulting exhibition was anything but local in theme, so we figured a much broader audience would find it of interest. In total, 4,617 participants created 176,394 ratings and spent an average of 7 minutes and 32 seconds per session. Participants took part from 59 countries, but it was the ones in the New York City area who were the most invested; their sessions averaged 15 minutes, more than double the overall average.

It’s not only these two projects that demonstrate this trend; we see similar things happening in our general website statistics and on our social media. Though we’ve disbanded the Posse and the tagging games, it’s worth noting that, though small in number, the most engaged users there were also locals, many of whom had strong, long-term relationships with the Museum.

We’ve started to see a clearer picture here about how much local participation matters, and if we are going for “engagement” as a strategy, we’re finding these users should be at the forefront of our minds. After all, GO was conceived to address this trend and, as a result, saw participation that I’d describe as incredibly deep. 1,708 artists opened their studios to 18,000 visitors who made 147,000 studio visits over the course of a weekend (full stats). In order to nominate artists for the resulting exhibition, we asked voters to see at least five studios, but the average turned out to be eight. More than the metrics, though, it was the comments that so clearly demonstrated how invested people were.

As we head into our project for Bloomberg Connects, engagement is the goal, and we see our local users as central to both its creation and success.

 

A Response to Rothstein’s “From Picassos to Sarcophagi, Guided Along by Phone Apps”
Tue, 05 Oct 2010

Many of you may have seen Edward Rothstein’s assessment of mobile technology in museums, but if you haven’t, it is certainly worth a read and a bit of discussion. The article looks at our mobile application along with those of the Museum of Modern Art and the American Museum of Natural History, and Rothstein pretty much dislikes the state of the union across the board.

I had mixed feelings about the article—I mostly agree that these apps all leave much to be desired, but I disagree that we shouldn’t be trying. Experimentation without perfection is a good thing. You may remember that I have my own issues with the use of technology in museums, that I had a less than stellar experience using the AMNH Explorer app, and that we’ve had to rework our own mobile app once already. Now is a good time to look at what the author is saying and discuss the current state of our mobile application.

One of the things Rothstein brings up is the lack of geolocation in our app. He wants the device to automatically locate where he’s standing and magically deliver content—don’t we all? We have GPS and AMNH’s Explorer app to thank for setting the bar so high, but in terms of what we can do here in Brooklyn, it’s just not possible yet. While we do have a museum-wide wireless system, it was put in during 2004 and we don’t have the meshing technology required to triangulate signal (something that would require replacing the existing wireless network in its entirety), so we rely on people’s use of accession numbers to look up information about objects. This is not perfect by any means, but it’s the simplest, clearest, and most sustainable way we’ve come up with to deal with the nearly 6,000 objects on view. We tried other methods in version 1 of our app to no avail, and we’ve considered switching to QR codes or short numeric codes, but that’s not realistic for this many objects. Given that every object has a unique number published on its label and we need a system that works for every object on view, accession number lookup is the way to do it…at least for now.
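
To give a sense of how simple the mechanism is, here’s a minimal sketch of an accession-number lookup against a hypothetical collection endpoint; the URL, parameters, and response shape are assumptions for illustration, not the app’s actual internals.

```python
from typing import Optional
import requests

# Hypothetical collection endpoint; the real app's API and response format differ.
COLLECTION_API = "https://example.org/api/collection/objects"

def lookup_by_accession(accession_number: str) -> Optional[dict]:
    """Return the object record matching an accession number, or None if nothing matches."""
    resp = requests.get(COLLECTION_API, params={"accession_number": accession_number}, timeout=10)
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return results[0] if results else None

# The visitor types the number printed on the object label.
record = lookup_by_accession("00.000")  # hypothetical accession number for illustration
```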

Rothstein makes an assumption about low usage of our app, and this is true in some ways, but not in others. First and foremost, we don’t have a large audience for our app. In the galleries on any given day (especially Target First Saturday), you’ll see very few visitors pulling out smartphones. Eventually that will change, and it’s important to have a system in place as we start to see this turn around, but for now we consistently see clamshell phones on cheaper monthly plans. Beyond this, our app has suffered from poor visibility throughout the building. I will admit that I’m really jealous of the visibility the AMNH app enjoys: big signage everywhere, staff in Explorer t-shirts, and ads seemingly all over the place. As simply as I can put this: I want.

Just recently, we managed to get the directory signage better positioned, and our designers are helping us by including a picture of the iPhone. We saw a slight rise in usage when the signage went in, so that’s helping a bit.

So, let’s take a look at what’s really happening when people use the app. The statistics indicate that they are using it for pre-visit information (directions, hours, exhibitions, calendar), which closely mirrors our general website traffic patterns. It’s not that visitors are trying to use BklynMuse (our collection search) and failing, or trying to play Gallery Tag! (our gallery game) and giving up; they are not getting that far. This could indicate two things: 1) visitors want to use the application pre-visit, but they don’t want it to be part of their in-gallery experience, and/or 2) our app’s home screen is not clear enough to explain all the choices available. For our next round of changes, we are going to concentrate on the latter and see if that changes the metrics.

What in the world do I get behind doors labeled BklynMuse and Gallery Tag?  It’s just not clear.

Where Rothstein’s assumption really falls short is what happens when people use BklynMuse. The statistics indicate that when people are using it, they are using it in an interactive way. When you compare visitors’ use of the “Like This” feature in the gallery to its use on the collection online, what you see is that, on the whole, people in the gallery are using the feature to recommend objects to other visitors. So, in theory, this kind of recommendation layer, where we directly ask people to help guide others, is working; we just need to do a better job getting people to the feature.
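
One simple way to quantify that comparison is to normalize “Like This” taps by sessions in each context. A sketch, assuming hypothetical counts rather than our actual analytics:

```python
# Hypothetical session and tap counts, purely to illustrate the comparison;
# these are not our actual analytics figures.
def like_rate(likes: int, sessions: int) -> float:
    """'Like This' uses per 100 sessions in a given context."""
    return likes / sessions * 100

gallery_rate = like_rate(likes=120, sessions=800)   # in-gallery BklynMuse sessions
web_rate = like_rate(likes=150, sessions=20000)     # collection online sessions

print(f"gallery: {gallery_rate:.1f} per 100 sessions, web: {web_rate:.1f} per 100 sessions")
```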

Low usage overall? Yes, but the “Like This” feature is being utilized in the gallery more than on our website.

Rothstein goes on at length about why none of these apps measure up to the experience he wants in the gallery, and there’s a point to that. Each and every visitor walking in our doors is likely to expect something different from an app, and every visitor is going to respond differently to what we provide. My point is that it is our responsibility, collectively, to try new approaches and provide as many entry points into content and the museum as possible. In terms of Brooklyn’s people-focused mission, we believe a people-focused application is the way to go. The curated content is already on the walls in the form of object installation, labels and didactics, in-gallery multimedia, and gallery design. The power of the device means we can provide something else, something more unique. We believe leveraging the power of our visitors’ voices in combination with our own is a worthy goal. Are we there yet? No. Should we try, discuss, learn from our visitors, and continue to iterate? Yes, yes, yes.

I’d love to discuss more via the comments.  There’s a lot to cover on this subject, that’s for sure.

Brooklyn Museum Collection Labs
Tue, 23 Feb 2010

Today, we are taking a page from Google and releasing a labs environment for our collection online. Having the collection online for 18 months has taught us a lot, and there’s plenty of data we can explore, but we need a place to do it!

Edison labs, Henry Ford Museum, Detroit.  Via gruntzooki on Flickr.

Creating a labs area of the collection online gives us a chance to play around with some ideas, look at trends we are starting to see, and present projects in an informal way for discussion and visitor testing. Some labs projects will only take us a few days to put together, while others might take a bit longer. Depending on what we find out and how we see things used, we may integrate some of these projects into the collection’s main layout.

To start labs, we thought we’d explore love—hey, it is February after all! We’ve been sitting on a bunch of data that shows how people are reacting to certain objects online and in the galleries. This first project, What is Love?, displays top-ranked objects broken down by the ways in which people are showing their adoration. There’s active love: online Posse members selecting objects as favorites during their web sessions, or visitors coming to the museum and using our interactive gallery guide, BklynMuse, to favorite objects they like on view in the gallery. There’s also passive love: stats generated from the Google Analytics API showing additional metrics, such as which objects are most viewed, where folks spend the most time on an object’s page, or which objects are getting the most link love on the internet. All of these things, shown together, start to paint a picture of the love going on around objects in our collection.
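
The “passive love” side is mostly a ranking problem once the Google Analytics numbers are in hand. Here’s a minimal sketch, assuming object pages live at a path like /opencollection/objects/<id> and that pageview and time-on-page rows have already been exported from Analytics; the field names and figures are illustrative, not our actual data.

```python
from collections import namedtuple

# Rows as they might come out of a Google Analytics export for object pages.
# The path pattern, field names, and figures are assumptions for illustration.
PageStat = namedtuple("PageStat", ["path", "pageviews", "avg_time_on_page"])

def object_id_from_path(path: str) -> str:
    """Pull the object id off the end of a collection URL."""
    return path.rstrip("/").rsplit("/", 1)[-1]

def top_objects(rows, metric: str, n: int = 10):
    """Rank object pages by a passive-love metric such as pageviews or time on page."""
    ranked = sorted(rows, key=lambda r: getattr(r, metric), reverse=True)
    return [(object_id_from_path(r.path), getattr(r, metric)) for r in ranked[:n]]

rows = [
    PageStat("/opencollection/objects/1234", 1200, 95.0),
    PageStat("/opencollection/objects/5678", 800, 160.0),
]
print(top_objects(rows, "pageviews"))
print(top_objects(rows, "avg_time_on_page"))
```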

What is Love? Our first labs project—go explore the data and tell us your thoughts!

I guess I shouldn’t find it all that surprising that our nudes and the erotic sculpture in the Egyptian collection are all quite popular via the web, but I was surprised at how much variance there is between the categories and how few objects are loved across metrics. When we released a sneak preview of What is Love? on our Facebook page last week, one person noted that there seemed to be a high percentage of women depicted. We’d love to hear your thoughts in the comments—notice any correlations in the data here? Want to see more of this kind of thing in labs?
