Measuring Success

We all struggle with how to measure success. We're thinking a lot about this right now as we begin to put the pieces together from what we've learned over the last ten weeks since ASK went on the floor. Three components help us determine the health of ASK: engagement goals, use rates, and (eventually) institutional knowledge gained from the incoming data.

When we look at engagement goals, Sara and I are really going for a gold standard. If someone gets a question asked and answered, is satisfied, and the conversation ends, that's great, but we've already seen much deeper engagement with users and that's what we're shooting for. Our metrics can show us if those deeper exchanges are happening. Our engagement goals include:

  • Does the conversation encourage people to look more closely at works of art?
  • Is the engagement personal and conversational?
  • Does the conversation offer visitors a deeper understanding of the works on view?
  • How thoroughly is the app used during someone’s visit?

We're doing pretty well when it comes to engagement. We regularly audit chats to ensure that the conversation is leading people to look at art and that it has a conversational tone and feels personal. The ASK team is also constantly learning more about the collection and experimenting with what kinds of information and conversation via the app open the door to deeper engagement with and understanding of the works. In September, we'll begin the process of curatorial review of the content, too, which will add another layer of checks and balances to ensure we hit this mark of quality control.

Right now the metrics show us conversations are fairly deep: 13 messages on average through this soft launch period (starting June 10 to the date of this post). The team is getting a feel for how much the app is used throughout a person's visit; they've been having conversations spanning multiple exhibitions over the course of hours (likely an entire visit). Soon we'll be adding a metric that pairs the use rate with the average number of exhibitions per conversation, so we'll be able to quantify this more fully. Of course, there are plenty of conversations that don't go nearly as deep and don't meet the goals above (we'll be reporting more about this as we go), but we are pretty confident in saying the engagement level is on the higher end of the "success" matrix. The key to this success has been the ASK team, who've worked long and hard to study our collection and refine interaction with the public through the app.
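Concretely, the depth metric is just an average message count per conversation. Here's a minimal sketch of how such a number can be computed, assuming conversations are stored as simple lists of messages (the layout and sample exchanges below are illustrative, not our actual dashboard schema):

```python
# Hypothetical sketch: computing average conversation depth.
# The data layout (a list of conversations, each a list of messages)
# is an assumption for illustration, not the real ASK dashboard.

conversations = [
    ["Who made this?", "That's an Albert Bierstadt painting...", "Thanks!"],
    ["What is this made of?", "Bronze, cast using the lost-wax method.",
     "How old is it?", "It dates to roughly 500 BCE.", "Wow."],
]

def average_depth(conversations):
    """Average number of messages exchanged per conversation."""
    if not conversations:
        return 0.0
    return sum(len(c) for c in conversations) / len(conversations)

print(f"Average messages per conversation: {average_depth(conversations):.1f}")
```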

Use rate is on the lower end of the matrix and this is where our focus is right now. We define our use rate as how many of our visitors are actually using the app to ask questions. From our mobile use survey results, we know that 89% of visitors have a smartphone, and we know from web analytics that 83% of our mobile traffic comes from iOS devices. So, we've roughly determined that, overall, 74% of the visitors coming through the doors have iOS devices (89% × 83% ≈ 74%) and are therefore potential users. To get our use rate, we take 74% of attendance (the eligible iOS-device-wielding visitors) and divide the number of conversations we see in the app by that figure, giving us a percentage of overall use.
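To make the arithmetic concrete, here's a minimal sketch of that calculation. The 89% and 83% figures are the ones cited above; the function name and sample attendance are ours, for illustration only:

```python
# Sketch of the use-rate calculation described above.
# Survey and analytics figures come from the post; the names and
# sample numbers are illustrative assumptions.

SMARTPHONE_RATE = 0.89   # visitors with a smartphone (mobile use survey)
IOS_SHARE = 0.83         # share of mobile web traffic from iOS devices

ELIGIBLE_RATE = SMARTPHONE_RATE * IOS_SHARE  # ~0.74 of all visitors

def use_rate(attendance, conversations):
    """Conversations as a share of eligible (iOS-carrying) visitors."""
    eligible_visitors = attendance * ELIGIBLE_RATE
    return conversations / eligible_visitors

# e.g. 10,000 visitors and 100 in-app conversations:
print(f"{use_rate(10_000, 100):.2%}")  # ~1.35%, within the observed range
```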

Use rate during soft launch has been bouncing around a bit, from 0.90% to 1.96%, mostly averaging in the lower 1% area. All kinds of things affect this number: the placement of the team, how consistently the front desk staff pitches the app as a first point of contact, the total number of visitors in the building, and the effectiveness of messaging. As we continue to test and refine, the numbers shift accordingly, and we won't really know our use rate until we "launch" in fall with messaging throughout the building, a home for our ASK team, and a fully tested front desk pitch and greeting process.

Our actual download rate doesn't mean much, especially given the app only works in the building. Instead, the "use rate" is a key metric, defined as actual conversations compared to iPhone-wielding visitors. The one thing the download rate stats do show us is that the pattern of downloads runs in direct parallel with our open hours: Mondays and Tuesdays are the valleys in this chart, and that's also when we are closed to the public; Saturdays and Sundays are the peaks, when attendance is higher.

Still, even with these things in flux, our use rate is concerning because one trend we are seeing is a very low conversion on special exhibition traffic. As it stands, ASK is being used mostly by people who are in our permanent collection galleries. Don't get me wrong, this is EXCELLENT; we've worked for years on various projects (comment kiosks, mobile tagging, QR codes, etc.) meant to activate our permanent collections, and none have seen this kind of use rate or depth of interaction. However, the clear trend is that ASK is not being taken advantage of in our special exhibitions, and this is where our traffic resides. We are starting by getting effective messaging up more prominently in these areas. Once we get the visibility up, we'll start testing assumptions about audience behavior. It may be that special exhibition visitors are here to see exactly what they came for, with little appetite for distraction; if ASK isn't on the agenda, it may be an uphill battle to convert this group of users. Working on this bit is tricky, and it will likely be a few exhibition cycles before we can see trends, test, and (hopefully) better convert this traffic to ASK.

There's a balance to be found between ensuring visibility is up so people know ASK is available (something we don't yet have) and respecting the audience's decision about whether to use it. Another thing we are keeping in mind is that the ASK team is in the galleries and answering questions in person; this may or may not convert into app use, but having this staff accessible is important, and it's an experience we can offer because of this project. Simply put, converting traffic directly may not be an end goal if the project is working in other ways.

The last bit of determining success, institutional knowledge gained from the incoming data, is something that we can't quantify just yet. We do know that during the soft launch period the larger conversations have been broken down into 1,241 snippets of reusable content (in the form of questions and answers), all tagged with object identification. Snippets are integrated back into the dashboard so the ASK team has previous question/answer pairings at their fingertips when looking at objects. Snippets also tell us which objects are getting asked about and what people are asking, and they will likely be used for content integration in later years of the project. The big step for us will come in September when we send snippet reports to curatorial so this content can be reviewed. We hope these reports and meetings help us continue to train the ASK team, treat quality control as a dynamic process, and learn from the incoming engagement we are seeing.
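For the curious, here's a hedged sketch of what a snippet might look like as a record: a question/answer pair tagged with object identification. The field names and sample values are illustrative guesses, not the actual ASK data model:

```python
# Hypothetical sketch of a reusable content "snippet" as described above:
# a question/answer pair tagged with object identification. Field names
# and sample values are illustrative assumptions, not the real schema.

from dataclasses import dataclass, field

@dataclass
class Snippet:
    question: str
    answer: str
    object_ids: list = field(default_factory=list)  # e.g. accession numbers

    def mentions(self, object_id):
        """True if this snippet is tagged with the given object."""
        return object_id in self.object_ids

snippet = Snippet(
    question="What gives this faience its blue color?",
    answer="Copper compounds in the glaze produce the characteristic blue.",
    object_ids=["XX.123"],  # hypothetical accession number
)
print(snippet.mentions("XX.123"))  # True
```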

Is ASK successful? We're leaving you with the picture we have right now. We're pretty happy with the overall depth of engagement, but we believe we need to increase use. It will be a while before we can quantify the institutional knowledge bit, so measuring the overall success of ASK is going to be an ongoing dialog. One thing we do know is that the success of the project has nothing to do with the download rate.
