We've been silent, but we've been busy (Thu, 14 Mar 2019)

I will admit, I'm a little embarrassed that it's been more than a year since our last post. Rest assured, while we may have been radio silent, we've been pretty busy. In my last post, I proposed a shift away from a laser-like focus on increasing the use rate of ASK to make room for learning from the data.

Don’t get me wrong, we still care about use rate and want as many visitors as possible to use ASK, but we are no longer consumed by that metric. We have kept the tactics that work best. For example, our ASK Ambassador program is going strong and still makes the greatest difference: there is a direct correlation between Ambassador staffing and use rate. We’ve seen this repeatedly since launching the Ambassador program in 2017. We’ve also continued to play with engagement via ASK, particularly in relation to major special exhibitions, which—let’s face it—is why a majority of visitors come to the Museum. For the David Bowie is exhibition, we created a special trivia activity that was so popular we could barely keep up. Jessica will share more about that in a future post. She’ll also share about what we are currently doing in relation to Frida Kahlo: Appearances Can Be Deceiving, on view until May 12.

The ASK Ambassadors wear branded t-shirts and hats (hats optional).

In addition to these initiatives, I'm delighted to say we've added a Pratt fellow to our team this year who has been focusing on the data. Sydney Stewart is a second-year graduate student in the Museums and Digital Culture program in the School of Information at Pratt Institute. (Full disclosure: I teach for that program, and Sydney is one of my stellar students.) It's amazing what we have been able to learn by having one person focus on the data, and I've invited Sydney to share her research and results here in the coming weeks.

When it comes to truly getting into the ASK data, I find we're constantly bumping up against what happens when you build a minimum viable product (MVP) as part of an agile process: short-sightedness. We purposefully weren't thinking long-term when developing the dashboard (the interface the team uses to answer questions and process chats), only about what we needed in the moment. Because we were building an MVP, we didn't plan for or build ways to access the larger data set. We only created tools for our initial needs, which were basic metrics and ways to share conversations with curators for fact-checking purposes. Now that we are trying to get a better handle on the scope and possibilities of the data, we are having to look into building tools to access it. For now, the Tech team runs reports for us when we know what to ask for. I suppose you could say we're having to get agile once again by using Sydney's research path as a way to help us understand what we actually want and need to know from the data set. Unfortunately, that makes it a little difficult for her, as there is a delay between her determination of needed data and our ability to give her that data. Fortunately, she's been more than up to the task, and she'll share some of her creative workarounds and what she's been able to do with existing metrics.
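To give a sense of what those ad-hoc reports look like, here is a minimal sketch of the kind of query we might ask the Tech team to run. The table and column names are hypothetical; the actual dashboard schema isn't documented in this post.

```python
# A minimal sketch of an ad-hoc report over the chat data set.
# "messages", "object_id", "sender", and "sent_at" are assumed names,
# not the dashboard's real schema.
import sqlite3

def objects_most_asked_about(db_path, since, limit=20):
    """Return the objects with the most visitor messages since a given date."""
    query = """
        SELECT object_id, COUNT(*) AS message_count
        FROM messages
        WHERE sender = 'visitor' AND sent_at >= ?
        GROUP BY object_id
        ORDER BY message_count DESC
        LIMIT ?
    """
    with sqlite3.connect(db_path) as conn:
        return conn.execute(query, (since, limit)).fetchall()

# e.g. objects_most_asked_about("ask_dashboard.db", "2019-01-01")
```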

Our period of radio silence is over, so stay tuned!

Measuring Success (Wed, 19 Aug 2015)

We all struggle with how to measure success. We're thinking a lot about this right now as we begin to put the pieces together from what we've learned over the last ten weeks since ASK went on the floor. Three components help us determine the health of ASK: engagement goals, use rates, and (eventually) institutional knowledge gained from the incoming data.

When we look at engagement goals, Sara and I are really going for a gold standard. If someone gets a question asked and answered, is satisfied, and the conversation ends, that's great, but we've already seen much deeper engagement with users and that's what we're shooting for. Our set of metrics can show us whether those deeper exchanges are happening. Our engagement goals include:

  • Does the conversation encourage people to look more closely at works of art?
  • Is the engagement personal and conversational?
  • Does the conversation offer visitors a deeper understanding of the works on view?
  • How thoroughly is the app used during someone’s visit?

We're doing pretty well when it comes to engagement. We regularly audit chats to ensure that the conversation is leading people to look at art and that it has a conversational tone and feels personal. The ASK team is also constantly learning more about the collection and thinking about, experimenting with, and learning what kinds of information and conversation via the app open the door to deeper engagement with and understanding of the works. In September, we'll begin the process of curatorial review of the content, too, which will add another series of checks and balances to ensure we hit this mark of quality control.

Right now the metrics show us conversations are fairly deep: 13 messages on average through this soft-launch period (from June 10 to the date of this post). The team is getting a feel for how much the app is used throughout a person's visit; they've been having conversations across multiple exhibitions over the course of hours (likely an entire visit). Soon we'll be adding a metric that pairs use rate with the average number of exhibitions per conversation, so we'll be able to quantify this more fully. Of course, there are plenty of conversations that don't go nearly as deep and don't meet the goals above (we'll be reporting more about this as we go), but we are pretty confident in saying the engagement level is on the higher end of the "success" matrix. The key to this success has been the ASK team, who've worked long and hard to study our collection and refine interaction with the public through the app.
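For the curious, the depth figures above reduce to simple aggregates over conversation logs. Here's a rough sketch of how we think about them, assuming a hypothetical data shape where each conversation is a list of messages tagged with an exhibition; the field names are illustrative, not the dashboard's actual ones.

```python
# Rough sketch of the depth metrics discussed above: average messages per
# conversation and average exhibitions touched per conversation.
def depth_metrics(conversations):
    message_counts = [len(c["messages"]) for c in conversations]
    exhibition_counts = [
        len({m["exhibition"] for m in c["messages"] if m.get("exhibition")})
        for c in conversations
    ]
    return {
        "avg_messages": sum(message_counts) / len(conversations),
        "avg_exhibitions": sum(exhibition_counts) / len(conversations),
    }

# Two made-up conversations, just to show the shape of the data.
sample = [
    {"messages": [{"exhibition": "Egyptian Galleries"}] * 13},
    {"messages": [{"exhibition": "American Art"}] * 9
                + [{"exhibition": "Egyptian Galleries"}] * 4},
]
print(depth_metrics(sample))  # {'avg_messages': 13.0, 'avg_exhibitions': 1.5}
```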

Use rate is on the lower end of the matrix, and this is where our focus is right now. We define our use rate by how many of our visitors are actually using the app to ask questions. From our mobile use survey results, we know that 89% of visitors have a smartphone, and we know from web analytics that 83% of our mobile traffic comes from iOS devices. So we've roughly determined that, overall, 74% of the visitors coming through the doors have iOS devices and are therefore potential users. To get our use rate, we take 74% of attendance (the pool of eligible, iOS-device-wielding visitors) and divide the number of conversations we see in the app by that figure, giving us a percentage of overall use.
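In code, that calculation is nothing more than a couple of multiplications and a division. A quick sketch using the survey and analytics figures above; the example attendance and conversation counts are made up for illustration.

```python
# Sketch of the use-rate calculation described above. From the mobile use
# survey, 89% of visitors carry a smartphone; from web analytics, 83% of our
# mobile traffic is iOS, so roughly 74% of attendance is eligible to use ASK.
def use_rate(conversations, attendance, smartphone_share=0.89, ios_share=0.83):
    eligible_visitors = attendance * smartphone_share * ios_share  # ~74% of attendance
    return conversations / eligible_visitors

# Example with invented numbers: 120 conversations against 9,000 visitors
print(f"{use_rate(120, 9_000):.2%}")  # ~1.81%
```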

Use rate during soft launch has been bouncing around a bit, from 0.90% to 1.96%, mostly averaging in the lower 1% range. All kinds of things affect this number: the placement of the team, how consistently the front desk staff pitches the app as a first point of contact, the total number of visitors in the building, and the effectiveness of messaging. As we continue to test and refine, the numbers shift accordingly, and we won't really know our use rate until we "launch" in fall with messaging throughout the building, a home for our ASK team, and a fully tested front desk pitch and greeting process.

Our actual download rate doesn't mean much, especially given the app only works in the building. Instead, the "use rate" is a key metric, defined as actual conversations compared to iPhone-wielding visitors. The one thing the download rate stats do show us is that the pattern of downloads runs in direct parallel with our open hours: Mondays and Tuesdays are the valleys in this chart, and that's also when we are closed to the public; Saturdays and Sundays are the peaks, when attendance is higher.

Still, even with these things in flux, our use rate is concerning, because one trend we are seeing is very low conversion of special exhibition traffic. As it stands, ASK is being used mostly by people who are in our permanent collection galleries. Don't get me wrong, this is EXCELLENT; we've worked for years on various projects (comment kiosks, mobile tagging, QR codes, etc.) meant to activate our permanent collections, and none have seen this kind of use rate or depth of interaction. However, the clear trend is that ASK is not being taken advantage of in our special exhibitions, and this is where our traffic resides. We are starting by getting effective messaging up more prominently in these areas. Once we get the visibility up, we'll start testing assumptions about audience behavior. It may be that special exhibition visitors are here to see exactly what they came for, with little appetite for distraction; if ASK isn't on the agenda, it may be an uphill battle to convert this group of users. Working on this bit is tricky, and it will likely be a few exhibition cycles before we can see trends, test, and (hopefully) better convert this traffic to ASK.

There's a balance to be found between ensuring visibility is up so people know ASK is available (something we don't yet have) and respecting the audience's decision about whether to use it. Another thing we are keeping in mind is that the ASK team is in the galleries answering questions in person; this may or may not convert into app use, but having this staff accessible is important, and it's an experience we can offer because of this project. Simply put, converting traffic directly may not be an end goal if the project is working in other ways.

The last bit of determining success, institutional knowledge gained from the incoming data, is something we can't quantify just yet. We do know that during the soft-launch period the larger conversations have been broken down into 1,241 snippets of reusable content (in the form of questions and answers), all tagged with object identification. Snippets are integrated back into the dashboard so the ASK team has previous question/answer pairings at their fingertips when looking at objects. Snippets also tell us which objects are getting asked about and what people are asking, and they will likely be used for content integration in later years of the project. The big step for us will come in September, when we send snippet reports to curatorial so this content can be reviewed. We hope these reports and meetings help us continue to train the ASK team, work on quality control as a dynamic process, and learn from the incoming engagement we are seeing.
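To make the idea of a snippet concrete, here's an illustrative sketch of the structure: a question/answer pair tied to an object identifier, which can then be grouped into per-object reports for curatorial review. The field names are an assumption on my part, not the dashboard's actual schema.

```python
# Illustrative sketch of a "snippet" and a per-object report for curatorial
# review. Field names are assumptions, not the dashboard's actual schema.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Snippet:
    object_id: str   # collection object the exchange was about
    question: str    # the visitor's question
    answer: str      # the ASK team's answer
    tags: list = field(default_factory=list)  # free-form topic tags

def snippets_by_object(snippets):
    """Group snippet Q&A pairs by object so curators can review them per work."""
    report = defaultdict(list)
    for s in snippets:
        report[s.object_id].append((s.question, s.answer))
    return dict(report)
```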

Is ASK successful?  We’re leaving you with the picture that we have right now. We’re pretty happy with the overall depth of engagement, but we believe we need to increase use. It will be a while before we can quantify the institutional knowledge bit, so measuring the overall success of ASK is going to be an ongoing dialog. One thing we do know is the success of the project has nothing to do with the download rate.

Local Matters (Thu, 25 Sep 2014)

If you've been reading the blog lately, you know we've been taking stock of our digital efforts and making considerable changes. I've been discussing what's not working, but it's also worth reporting on the trends we've been seeing and some of the new directions we are headed as a result. Did you know that our most engaged users on the web are locals? Over many years of projects, metrics have shown us that the closer someone is physically to the Museum, the more likely they are to be invested with us digitally.

In 2008, we saw this with Click! A Crowd-Curated Exhibition. 3,344 people cast 410,089 evaluations using a web-based activity that would determine a resulting exhibition. Participants from more than 40 countries took part in the activity, but 64.5% were local to the extended tri-state area around New York. A deeper look shows us the bulk of the participation came from a local audience: 74.1% of the evaluations were cast by those in the tri-state area, with 45.7% cast by those within Brooklyn. At the time, we figured this was because of the thematic nature of the exhibition, which depicted the "changing faces of Brooklyn."

Google Analytics (along with zip code metrics) showed the majority of participation in Click! and Split Second was coming from local sources.

In 2011, we launched another web-driven project to produce an exhibition. Split Second: Indian Paintings began with an online activity that analyzed people's split-second reactions to our collection of Indian paintings. The resulting exhibition was anything but local in theme, so we figured a much broader audience would find it of interest. In total, 4,617 participants created 176,394 ratings and spent an average of 7 minutes and 32 seconds per session. Participants took part from 59 countries, but those in the New York City area were the most invested: their sessions averaged 15 minutes, more than double the overall average.

It's not only these two projects that demonstrate this trend; we see similar things happening in our general website statistics and on our social media. Though we've disbanded the Posse and tagging games, it's worth noting that, though small in number, the most engaged users there were also locals, many of whom had strong, long-term relationships with the Museum.

We've started to see a clearer picture of how much local participation matters, and if we are going for "engagement" as a strategy, we're finding these users should be at the forefront of our minds. After all, GO was conceived to address this trend and, as a result, saw participation that I'd describe as incredibly deep: 1,708 artists opened their studios to 18,000 visitors, who made 147,000 studio visits over the course of a weekend (full stats). In order to nominate artists for the resulting exhibition, we asked voters to see at least five studios, but the average turned out to be eight. More than the metrics, though, it was the comments that so clearly demonstrated how invested people were.

As we head into our project for Bloomberg Connects, engagement is the goal, and we see our local users as central to both its creation and success.

 
