testing – BKM TECH / Technology blog of the Brooklyn Museum

Building a little data capture into our admissions process (Thu, 05 Sep 2019)

As I mentioned in my previous post about mapping our digital landscape, we’re not letting the lack of a CRM completely get us down. We have been trying to find creative ways to gather data with the systems we currently use. For years we have asked for visitor zip codes as part of the admissions transaction, since we need to report those numbers to our city funders. We recently started to wonder if we could get just a bit more info at the front desk. In July we launched a simple test using our point-of-sale system (Siriusware) to gather the answer to a single-question survey: What is your reason for visiting? The answer to this basic question would be extremely helpful as we plan for future exhibitions, forecast revenue, and think about how to market ourselves.

We began with a very short list of options in a drop-down menu that included the special exhibitions, a few specific collection areas, and the collection more generally. We quickly found the need to add a few more options. For example, the admissions team asked for a “just in the neighborhood” option, as it’s a common response to the question (though the data shows it’s not as common as they likely felt it was).

The survey appears in a pop-up window and has a drop-down menu of options. Unfortunately, the option to skip or cancel is baked-in; we can’t make this a required question to complete the transaction.

Results for the first two months are interesting. In July, the permanent collection ranked highest in response rate, while for August it was our Pierre Cardin special exhibition. The initial lack of options is one of the reasons for the high “other” response rate in July, which dipped the second month as more options were added. Currently, we have 16 options plus skip/decline. This feels like a lot, but maybe it’s ok. In particular, I wonder about including Korean art and African art in the list at the moment since both are temporarily off-view, but including them would help us track an uptick once those collections are on view once more. We also have to remember to update the list regularly as special exhibitions open and close. For example, both the Liz Johnson Artur and One: Egúngún exhibitions closed mid-August, which explains the dip in responses.

Reason for visiting chart.

A quick comparison of the total number of survey responses (which should be every transaction) to the total number of visitors who were required to visit the admissions desk shows the transaction count is about 60-65% of the visitor count. Multiple tickets can be purchased through a single transaction—and we know most of our visitors come in pairs or groups—so that feels close to the right percentage. I think we are still getting more cancellations than we should, and we’ll keep working on it. The admissions team is meant to pose the question in a casual and conversational manner so it doesn’t feel like a survey (or an interrogation!) and select a response in order to proceed with the sale, although it is possible to cancel and move on. To avoid cancellation, we included a skip/decline option. Unfortunately, not everyone is consistently asking the survey question, which we know because we can run reports on who is logging which responses. For example, we found that one person mostly just cancelled the survey in the first week, and we were able to speak with them. While we don’t want survey completion rate to become punitive, we do want to encourage completion because the information is important for us as an institution. Finding that balance can be tricky.
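
That 60-65% figure squares with a quick back-of-the-envelope check. This is purely illustrative—the 1.6 average party size below is an assumption, not a number we have measured—but if most transactions cover a pair or a small group, the expected ratio of transactions to visitors lands right in that range:

```python
# Illustrative only: the 1.6 average party size is an assumption, not measured data.
avg_party_size = 1.6
expected_ratio = 1 / avg_party_size  # transactions per visitor
print(f"Expected transaction-to-visitor ratio: {expected_ratio:.0%}")  # ~62%
```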

After two months, we are still working out the kinks, mostly in terms of making this process a habit for the admissions team. A next step is to work with our Tech team to create a report that would knit together the survey answer, ticket info, and zip code from each transaction so we can compare the data set as a whole. That would be a pretty powerful triumvirate.
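
Something along these lines is the shape of the report we have in mind. This is only a sketch: the file and column names (transaction_id, survey_response, zip_code, ticket_type) are hypothetical, since the real export will depend on what Siriusware’s reporting module actually gives us.

```python
import pandas as pd

# Hypothetical exports from the point-of-sale system; real column names will
# depend on what the Siriusware reports provide.
surveys = pd.read_csv("survey_responses.csv")  # transaction_id, survey_response
tickets = pd.read_csv("ticket_sales.csv")      # transaction_id, ticket_type, quantity
zips = pd.read_csv("zip_codes.csv")            # transaction_id, zip_code

# Knit the three together on the shared transaction ID.
report = (
    surveys
    .merge(tickets, on="transaction_id", how="left")
    .merge(zips, on="transaction_id", how="left")
)

# Example rollup: reasons for visiting, broken down by zip code.
summary = report.groupby(["zip_code", "survey_response"]).size().unstack(fill_value=0)
print(summary.head())
```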

Pilot 3: Texting (Fri, 14 Jul 2017)

Last week we wrapped up our final planned pilot project to help determine the direction for ASK 2.0. Another somewhat obvious solution to the challenge of people not wanting to download an app: why not text us instead? We set up a Twilio account and spent two weeks essentially pretending we didn’t have an app. The ASK Ambassadors pitched the texting service and, with the exception of international visitors without data plans, didn’t talk about the app at all. We had dedicated palm cards featuring the phone number and a few “helpful hints” for ways to use the service.

Our texting palm card included the phone number as well as “helpful hints” on what kinds of things to text. Unlike the app (which is geo-fenced), in theory you can text us anytime, but we won’t answer outside of Museum hours. The system will autofire the same “out of office” notification that app users receive.

Our developers did some backend magic so that the Twilio messages would push to the dashboard, which allows us to keep that single source for incoming messages. Unlike the app, SMS messages are not location-aware, which meant the team was flying somewhat blind in the dashboard. Normally, when a visitor sends us a message, the nearest beacon responds and the dashboard populates with the artworks on view in that gallery. Each artwork has the associated metadata from our collection online as well as “snippets” (question and answer pairs) from previous conversations that have been tagged to the work. SMS messages provided few of these tools to the team, who had to manually search the collection online (or good old Google) if they didn’t know the work already. However, this isn’t the first time we’ve dealt with this challenge, so going into this pilot we felt pretty confident we could handle it. As expected, response time suffered a bit, but overall the team did really well and the lack of location data didn’t hinder them much at all. One happy discovery was that the image recognition the developers put into place last year occasionally worked with MMS messages, so if a user sent us an image (many do), there was a chance the dashboard would find it and pull the metadata as well.
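
For anyone curious about the plumbing, the relay can be quite small. This is a minimal sketch, not our actual backend: the dashboard endpoint, payload shape, and auto-reply copy are all hypothetical, though the webhook parameters (From, Body, NumMedia, MediaUrl0, and so on) are the standard ones Twilio posts for incoming SMS/MMS.

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse
import requests

app = Flask(__name__)

# Hypothetical internal endpoint; the real integration lives in our developers' backend.
DASHBOARD_URL = "https://example.org/ask/dashboard/messages"

@app.route("/sms", methods=["POST"])
def incoming_sms():
    # Twilio posts the sender, the text body, and URLs for any MMS attachments.
    payload = {
        "from_number": request.form["From"],
        "body": request.form.get("Body", ""),
        "media_urls": [
            request.form[f"MediaUrl{i}"]
            for i in range(int(request.form.get("NumMedia", "0")))
        ],
    }
    # Push the message into the same dashboard queue the app messages use.
    requests.post(DASHBOARD_URL, json=payload, timeout=5)

    # Auto-acknowledge; outside Museum hours this would be the "out of office" notice.
    reply = MessagingResponse()
    reply.message("Thanks for your message! An ASK team member will reply shortly.")
    return str(reply)
```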

I have to say, out of all the pilots, I thought this would be the most successful, and I was right. Visitors really responded well to the idea of texting, and we ended up having to reorder palm cards twice. Despite this enthusiasm, the use rate was not what I would like to have seen; we averaged just above 2%. Interestingly, not all of that traffic was from texting. We still had some iOS and Android users, only 19 of whom were repeat users, so some folks were finding and using the app despite our promotion of texting (and likely not all international visitors). Out of curiosity, I wanted to compare this most successful pilot to our two most successful non-pilot weeks. It turns out the average use rate of those two weeks is higher than any of the pilots. So alas, while this pilot had the best use rate of the three, it did not hint at a “solution” to our use rate plateau and did not beat the average of our best two (non-pilot) weeks.

By way of quick recap: pilot 1 (provided devices) proved charging for devices doesn’t work, while free devices didn’t give us more app traffic; pilot 2 (ASK on Demand) showed that while people liked the idea of an in-person chat, few took us up on it; and pilot 3 (texting) did not show better numbers than really successful non-pilot weeks.

What’s next? I’m not 100% sure, but we’re kicking around a few ideas. We just had a big team meeting with the ASK team and Ambassadors to discuss these metrics and share observations, which I’m still chewing over. Some things we are exploring include providing devices for scheduled groups and keeping the texting service as an alternative for those that just don’t want to download.

It’s going to take a little while to figure out, but I promise more to come after vacation (yay!) and time to think. 

Pilot 2: ASK on Demand (Wed, 28 Jun 2017)

As promised, this week’s post is on our second pilot in search of our direction for ASK 2.0. For the first pilot, we provided devices in an attempt to get over our use rate hump, which showed some promise, but wasn’t a runaway success. I’m afraid the same can be said for our second pilot, which I nicknamed “ASK on Demand.”

Over the course of ASK, we have seen distinct patterns in the ways people engage with the app and the ASK team: they ask us questions, seek more information, and share their opinions. Visitors have also responded very well to opportunities to meet the ASK team in person during pop-up tours, Art History Happy Hour events, or when team members are stationed in the galleries, which they occasionally are. Users also enjoyed the chance to meet the ASK team when the team’s office was in a public space, though not enough to keep them there for good. Knowing all this, we began to wonder: what if we updated the user experience of ASK to reflect the aspects that visitors enjoy and give visitors a clearer choice within their experience?

While still providing the opportunity to chat via text, the core of the ASK experience, could we also offer FAQs, most-asked-about objects, or a “surprise me” feature that provides bite-sized content? This would help address the reported pressure some visitors feel about having to ask a question. In addition to texting and FAQs, could we offer the ultimate personal experience by giving visitors the opportunity to have a conversation on demand, in person, with one of the ASK team members? Functioning something like an airplane “call button,” this option would allow visitors who prefer an in-person conversation, or are just really enjoying their text conversation, to request that an ASK team member join them in the gallery. Should this concept work, we could play with incentives, like only surfacing the “call button” after a certain number of exchanges or galleries visited, or we could offer the option sooner for repeat users. There are lots of possibilities to explore here.
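
To make the incentive idea concrete, here is a purely illustrative sketch of how that gating might work. The thresholds and the repeat-user shortcut are hypothetical, not decided policy.

```python
# Purely illustrative gating for the "call button"; thresholds are placeholders.
def show_call_button(exchanges: int, galleries_visited: int, is_repeat_user: bool) -> bool:
    if is_repeat_user:
        # Reward returning users by surfacing the option sooner.
        return exchanges >= 2
    return exchanges >= 5 or galleries_visited >= 3

# Example: a first-time visitor who has sent six messages would see the button.
print(show_call_button(exchanges=6, galleries_visited=1, is_repeat_user=False))  # True
```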

This kind of approach would require changes to the app functionality and design, which we’re prepared to tackle, but only if we can prove some of the basic concepts valid. In particular, I wanted to confirm people would take us up on the in-person request component. We created a dedicated palm card for this pilot highlighting the various ways people might engage with us, and the ASK Ambassadors emphasized the in-person option in their pitches.

Dedicated palm cards highlighted the in-person concept. The “helpful hints” simultaneously provide instructions as well as concrete suggestions on how to use the app based on needs and interests.

I’m somewhat surprised to report that over the course of the two weeks, our app traffic was right within the normal range and only six people took us up on the offer of an in-person appearance. One of the things I was curious about was how many people would bypass texting altogether and just request in-person time. Timing of the request really varied. Two of the six requested a team member immediately, one (a family group) ended their very engaged time via the app with an in-person request, one refused to download the app and the ASK Ambassador requested a team member on their behalf, and two took the ASK team member up on her offer to join them one or two messages into the conversation.

Our ASK Ambassadors managed to snap two quick photos of visitor interactions during this pilot. On the right, Isabella speaks with a visitor in the O’Keeffe exhibition. On the left, Rachel speaks with visitors in “We Wanted a Revolution.”

I will say that I think one reason for the limited uptake on the in-person interaction was our over-zealous guarding of the ASK team’s time. We were so worried that the team member would be unable to extract herself from a very interested visitor (this has happened often on tours), or would be expected to present some manner of tour, that we asked the Ambassadors to really stress that it was an opportunity to “say hello.” I understand that the Ambassadors, who did a great job executing exactly what I asked, would often preface the opportunity with some kind of explanation that the team might be busy answering questions via the app and would only be free for a minute. In retrospect, I’m sure this made the in-person request feel like a total imposition. I know if I were a visitor, I at least would have hesitated before requesting someone if it were presented to me in such a way.

Again, I come away from this pilot without any true conclusions except that we might want to revisit it, but with less protective language around the team’s time and a simpler invitation to have someone join them in the gallery for a bit. August might be our month to take the learnings from running these pilots and try them again. So far, the only thing we’ve been able to definitively say based on the pilots is that charging visitors for iPods loaded with the app won’t work. I suppose that’s something!

ASK 2.0: Providing Devices? Maybe. (Wed, 21 Jun 2017)

As I prefaced in my post last week, while ASK has been successful from an engagement standpoint, we are stalled at a use rate between 1 and 2%. We’ve learned through evaluation, both our own and that conducted by ERm in December 2016, that the biggest barrier to more visitors using ASK Brooklyn Museum is reluctance to download an app. What is more, ERm’s evaluation revealed that although most study participants (57%) felt that the app would enhance their museum visit, many (44%) still would choose not to download it! Seriously? Argh. Clearly the format of the current ASK Brooklyn Museum experience is the barrier to adoption. Simply put, people are reluctant to download an app, and our recent additional marketing efforts, while they helped us break the previous 1% average, have not provided the final solution to this issue.

We are running a series of three pilot projects that will help us determine the direction of the next two years of the program, which we’re calling ASK 2.0. This is the first of (at least) three posts on the results.

We began our pilots with the low-hanging fruit: provided devices. If downloading is a barrier, don’t make people download anything. Providing devices would also eliminate the related “excuses” we hear from visitors: not having enough storage, data, or battery life. Of course, it’s not quite as simple as just handing someone a device. We had to make sure we’d get it back, which means there had to be some kind of check out process. This is its own kind of barrier, albeit a familiar one for anyone who has ever rented an audio guide. Would a check-out process prevent people from using it? I also wondered if people would be willing to potentially juggle two devices—our iPod Touch and their phone—because despite some focus group and survey participants proclaiming that they don’t like to use their phones during a museum visit, 76% of those same survey respondents admitted that they use their phones at some point during their visit (to take photos, text, use social media, Google something, etc.). Finally, I wondered about the perception of value. What would happen if we put a dollar amount on the experience and charged for the devices? Would it turn people away? Increase download rates? Encourage people to chat more to get their money’s worth? I had to know.

Seems fitting to return to iPod touches for this pilot since that’s what we used for our 2014 pilot that ultimately led us to build ASK. Here we set them up with lanyards and protective cases. We enabled “guided access mode” to lock them down on the app only.

We ran the test for two weeks and included an A/B test around value. For the first week, the devices (iPod touches) were available to check out for free. For the second week, the iPods cost $5. I have to say, the results surprised me a bit.

For the first (free) week, we checked out 42 devices, with 30 chats coming from those devices. So not everyone who checked out a device used it (not the surprising part). The ASK Ambassadors asked visitors returning devices what they thought about their experience, and most people who didn’t use it admitted they didn’t “need” it because the museum was well-curated and explained. I suppose that’s hard to argue with! One surprising result was that the iPods averaged more than double the messages of all users that week. The average number of exchanges for the week was 11.9, but for the iPods specifically it was 24.3! Now this could partially be explained by the fact that several kids used the iPods, and kids tend to send a flurry of photos and comments, so there is a lot of back-and-forth in a short time, but it’s something to think about. Another surprising (and disappointing) thing was that providing devices didn’t net us any more chats than our usual weekly average. So while 30 people used it, that wasn’t 30 more people than usually use the app. Hmm.
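
Those averages come from a simple rollup of the week’s conversations. The sketch below is illustrative only; the export format and column names (exchanges, loaned_ipod) are hypothetical stand-ins for what our dashboard actually records.

```python
import pandas as pd

# Hypothetical export: one row per conversation for the week, with the number
# of exchanges and a flag for whether the chat came from a loaned iPod.
chats = pd.read_csv("weekly_chats.csv")  # columns: conversation_id, exchanges, loaned_ipod

print("All users:   ", round(chats["exchanges"].mean(), 1))
print("Loaned iPods:", round(chats.loc[chats["loaned_ipod"], "exchanges"].mean(), 1))
```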

Alex and Kahlah help a visitor check out an iPod during the free week. We used the Info Desk in the lobby as home base for storing visitor IDs collected as collateral and for storing and charging the iPods.

The second week of the test ($5 rental fee) didn’t work at all. We only had one visitor rent an iPod, and it was for her kid. The engagement numbers don’t mean much with one sample (though it was really high at 36 exchanges). This was also a slow week for us, below our average number of chats. That may have nothing to do with the iPods and perhaps more to do with a few of our ASK Ambassadors being out sick, but it’s hard to know. We did see an uptick in download rate, however, and the ASK Ambassadors told me that it was an easy “sell” to get people to download for free as opposed to renting an iPod. Unfortunately, those downloads did not translate to app use.

So what does this all mean? Honestly, I think it means we need to revisit providing devices for free again. We’ll work this into the plan for August, unless one of our remaining two pilots is wildly successful. So far the second pilot, which I’ve named “ASK on Demand,” is showing some promise. More to come next week!

Fresh Eyes Provide Insight on ASK (Wed, 08 Feb 2017)

Our entire ASK program has been built upon regular user testing and evaluation, which we’ve always completed ourselves…until now. Since we’ve been trying for over a year to increase the use rate with limited success, we felt like it was time to ask for help. In the fall of last year, we hired ERm, a marketing and research firm here in New York, to bring fresh eyes to the problem. The evaluation had several objectives, including:

  • to gauge overall awareness and recall of the ASK app among visitors to the Brooklyn Museum;
  • to understand expectations and perceptions of the app based on existing materials and concept;
  • to determine response to ASK among users, including best- and least-liked features; and
  • to pinpoint barriers to downloading ASK among non-users.

During the months of October and November, ERm conducted online surveys, which were sent to email addresses visitors shared with us at the admissions desk and in the museum shop. We also completed four focus groups, two comprised of pre-recruited individuals who downloaded the app and used it during their visit, and two comprised of regular visitors intercepted and recruited in the galleries. Let me tell you, those focus groups were super enlightening! Since app use is anonymous, we’ve rarely had the opportunity to speak with users about their experience (once we were past beta testing). We’ve mainly relied on app store reviews to give us insight.

ERm provided us with a really comprehensive report that we’re still sinking our teeth into.

We are still processing all the information, but ERm has provided some actionable items related to overall awareness and user experience that we can begin to work on right away. According to the study, awareness is quite low: fully 56% of survey respondents had never heard of the app at all. Interestingly, by comparison, the vast majority of respondents (87%) had not downloaded an app from any other museum either. For those who had heard of ASK, their main sources of information were staff members (52%) and museum signs (32%) like the labels pictured below. This is good news since we are in the midst of building an ASK Ambassador team that will provide that much-needed personal invitation to ask.

We have continued to tweak messaging in an attempt to build awareness and clarity around the app. The top image is a label in the American art gallery using the ASK brand approach from April 2016. The bottom image is a label in the Marilyn Minter special exhibition using a new approach featuring a question combined with directive to download. Both are up in the galleries now. Labels will be included as part of the fresh look we’ll take at all our messaging around ASK.

Despite our fairly extensive testing around messaging, our current version still leaves visitors unclear about how ASK might enhance their experience. They assume it’s a tour app with a map (an assumption we’ve been fighting all along). Additionally, they are unsure of the usefulness of the app in an age when you can Google anything. A few focus group participants felt that the interpretation already provided was enough information and weren’t compelled to investigate further. (Great news to those of us who have ever written a wall label!) A few respondents felt that being on their phone would detract from their museum-going experience; however, 76% of survey-takers said they used their phone at some point during their visit. All of these factors contribute to an overall apprehension to download an app for a few hours’ use. Overcoming these factors, through different messaging in particular, is an important next step.

ERm also reported three key areas where users like the app most:

  • When it enhances the experience: Users enjoyed ASK most when the information provided supplemented the other forms of interpretation. Specific information that offered fresh details (i.e. the kind of information Googling can’t find) was well-received. Visitors need to be assured that the app will enhance their experience.  A key component to the experience is also length of information and timing of the response. These two components are more difficult to ensure because they are so arbitrary—response length in particular. What may be too much information to some is not enough for others. Ensuring we provide a response before the person has walked away (the ideal timing) is also a unique challenge.
  • Because it is practical to use: Users praised the intuitive functionality of ASK and felt that the in-depth information provided by the team helped make the most of their visit. However, that same intuitive function also meant that for some, the app was too one-dimensional. This is likely compounded by the expectation that the app offers a map and tour information. (That being said, even when visitors have thought this in the past, they still didn’t download it.) Additionally, some users felt there was too much pressure to generate questions or maintain conversation. Visitors want to choose whether they will be a passive or active user.
  • It offers a human element: Users love the personal and responsive nature of the exchange, which was both conversational and enlightening. The information went beyond a Google search. However, there was quite a bit of confusion as to whether the responses via the app were generated by a human or a computer. This was compounded by the uneven responses a few users received (e.g. some answers felt copied-and-pasted, though the team doesn’t do this as a rule). A few people expressed the pressure they felt to socialize while interacting with a human, while others thought interacting via text removed that same pressure. We need to convey that ASK offers users the compelling opportunity to have an intelligent dialogue with an expert who cares.

We will explore these elements further in the coming months as we move toward ASK 2.0. The good news is, nothing we heard from the evaluation is a surprise. Many of the results cover things we have suspected for some time. It’s nice to have some solid evidence, as opposed to just anecdotes, behind our assumptions. Stay tuned for more on this as we roll out changes, test solutions, and put these ideas into practice.

ASK App + Group Tours: Making Hard Choices (Fri, 03 Feb 2017)

Earlier this week, Sara introduced the topic of ASK’s new collaboration with our Group Tours office and our efforts to shape the content of our first highlights tour. We’re excited to be offering an alternative to guided tours, and we already know that our “regular” ASK users appreciate receiving recommendations for what to see. When the ASK team was visible on the floor, they regularly fielded visitor questions like, “What are the most important things here?” or “We only have an hour or so—what shouldn’t we miss?”

So, which works of art would we choose to offer in a self-guided tour scenario? The Museum currently has galleries on four floors, not to mention twelve collection areas ranging from ancient Egypt to contemporary art. If you’ve ever walked through the entire building or flipped through one of our highlights publications, you can guess what a challenge this would be. How can we convey an idea of our “greatest hits” through a manageable number of objects, keeping in mind that a visiting group typically spends from an hour to an hour and a half in the galleries?

In a series of brainstorming meetings, and through multiple testing sessions and follow-up conversations with colleagues and friends, we came up with a list of criteria for each selection that helped us to sort through our wish-list:

  • Historical significance: it had to have clear importance within art history and the collection
  • A good story: it had to offer an intriguing narrative, through its creator, its imagery, its provenance, etc.
  • Accessibility: it had to be installed in a way that visitors could find it and easily view it as a group—nothing placed in a tight corner or high on a wall, for example. It also had to be located on a pathway that made sense, moving people through the building with limited backtracking and easy access to stairs and elevators.
  • Visual excitement: it had to offer opportunities for close looking
  • Connections: it needed to be something we could connect to other works in the tour, through a theme, a technique, etc.

Of course, certain challenges arose during our planning and testing. We had to keep up with gallery rotations, for example: El Anatsui’s Black Block fit all our criteria, but then we learned that it was scheduled to be deinstalled, so we chose another contemporary work. There were also situations where we initially included a work that seemed like an obvious choice, only to struggle with it during testing. Initially we wanted to include a Pre-Dynastic Egyptian sculpture of a female figure because it has the distinction of being the oldest work on display in the Museum. During test chats, however, we realized it was hard for visitors to locate this small object in the galleries; also, our discussion was somewhat limited, since scholars haven’t been able to determine many specific facts about it.

We were also hearing that some visitors would have liked a wider choice of objects, and that they were interested in some works, but not in others. This feedback led us back to two of our ongoing concerns: the balance of structure and flexibility in our script and the balance of freedom and guidance in the visitor’s experience. During one of our brainstorming sessions, the ASK team came up with a new idea: rather than a fixed list of six or eight stops, why not offer two parallel lists, “highlights” and “hidden gems”? Then visitors could mix and match, depending on their interests. It could be like a menu of entrees and side dishes, or a subway line with local and express stops. We’d been feeling stymied, but this idea perked us up: we could include more key objects, while offering visitors a more customized experience.

As we continued to test our evolving tour itinerary with groups of colleagues and outside participants, however, we soon realized that we needed to give just as much consideration to engagement as content. I’ll be discussing some of those related challenges in a future post.

ASK App + Group Tours: A Balancing Act (Wed, 01 Feb 2017)

If you’ve been following our blog, you know we spend a great deal of time focusing on getting our ASK app in more people’s hands. One way we have been doing this is by working with our colleagues in the Education department to use ASK as part of school group visits. We’ve also worked with several professors at Pratt Institute, who have brought their freshman art history classes here and used ASK as part of their time in the galleries. These initiatives have worked quite well, with both staff and participants feeling like they made the most out of their time with us. We started to wonder, however, how to expand this concept of using ASK in a group setting. And that’s when I had a fruitful conversation with Laval Bryant, our Group Tours Coordinator.

We don’t currently offer any special information or experience (printed guide or otherwise) for self-guided group tours. These are the booked groups who elect not to have a guided tour with one of the Museum’s guides. However, just because they may not want a tour with a person doesn’t mean they don’t want a unique, even curated, experience. Laval wanted to know if ASK could somehow fill this gap in our offerings. As she explains, “groups arriving to the Brooklyn Museum often have limited time and are hoping to receive as much information on our collection as possible. We hope to fulfill the need of those who want the convenience of exploring the museum at their own pace combined with a certain amount of guidance on what to see. We want them leaving here feeling like the purpose of their visit was a great success.”

We evaluated the concept as well as the printed guide we plan to offer tour participants as part of our testing sessions. After their tour, colleagues were kind enough to fill out a survey, which provided a great deal of useful information that helped us shape the experience.

Jessica, Laval, and I set to work with the ASK team to try out this very concept. Could ASK be used for a self-guided tour? What is the “right” balance of guided and free-form experience? How many tour “stops” make sense? How much freedom do we give participants to shape their experience? We’ve run several tests of the concept with staff, and one very helpful test with colleagues, where we analyzed the artwork selection and number, content, format, and the map that will be given to each tour participant.

Striking the right balance between guided and free-form, finding the “right” number of stops, and clearly communicating the format of the tour have been challenging. Jessica will go into these nuances in our next post. However, I am delighted to say we’ll start offering this tour, Highlights + Hidden Gems, next week. We are asking the first few groups to stick around afterward to give us feedback so we can continue to improve the experience. And if this format really works, we may expand the concept to include other tour themes.

Selectively Flying Blind After Android User Testing (Tue, 05 Apr 2016)

ASK Brooklyn Museum for Android is now available on Google Play. We had one early quandary, but this was a fairly straightforward development process. That is, until we got to user testing.

User testing sessions are a critical part of the process.

Android development is tricky. There are a lot of devices, all running different system versions in various states of update, with hardware manufactured by different parties and distributed independently or by various carriers. By comparison, iOS is a fairly controlled environment; we knew we could develop the iOS version of the app in house, but it was clear to us that an external team would need to tackle Android, so we contracted with HappyFunCorp.

At the beginning of our Android development process, we looked at our Google Analytics to figure out which devices and systems were in the majority, and this became our supported device list. Simply put, there are too many devices running too many systems to be able to support all of them, so you have to pick your battles. We settled on a combination of support for devices running at least Android 4.3 and with Samsung Galaxy S4 (and higher) and Nexus 5 (and higher) hardware.
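
The cut itself is simple once the analytics are exported. The sketch below is illustrative only; the file and column names are hypothetical stand-ins for a Google Analytics device report.

```python
import pandas as pd

# Hypothetical Google Analytics export: one row per device model / Android
# version combination, with its share of our mobile sessions.
devices = pd.read_csv("ga_device_report.csv")  # columns: model, android_version, session_share

# Keep combinations that meet the minimum OS and aren't part of the long tail.
supported = devices[
    (devices["android_version"] >= 4.3)
    & (devices["session_share"] >= 0.01)
]
print(supported.sort_values("session_share", ascending=False))
```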

As with our iOS release, we did a number of invited user testing sessions onsite at the Museum. Many of these sessions were attended by just a few users giving us their time. Each session helped us surface bugs, but it was difficult to get a critical mass. One thing we started to see, however, is that at each session we had users show up with hardware that was not on our supported list and, inevitably, we saw a lot of bugs on these devices. It was the very well attended session with Bloomberg employees that helped us identify a trend, come to terms, and make a critical decision that will affect all Android users (for the better).

Bloomberg employees helped us test both our iOS and Android app prior to launch.

Most of the bugs we found on unsupported devices came down to problems with beacon ranging. We could reliably require Bluetooth on supported devices, but on others we’d see two problems. First, if a device didn’t have Bluetooth support, the user couldn’t use the app at all. This requirement made sense on iOS because of the near ubiquity of Bluetooth on current hardware, but was more difficult given the plethora of Android hardware situations. Second, if users were on an unsupported device, beacon ranging was hit or miss, often causing problems like device sluggishness or outright crashes.

It was during the Bloomberg testing session, when we could see a number of users all having the same problems, that the issue became really clear.

We had three options. Option one would be to not allow the download on unsupported devices, but this would mean some users could find it in Google Play and other users wouldn’t see it at all. This presented a nightmare for messaging—“We have an Android app….sort of…” Option two would allow the download, but many users would experience bugs and it would be difficult to communicate why. Option three would be to turn off Bluetooth/beacon ranging for all unsupported devices, but this would mean the ASK team would not see a user’s location.

When an unsupported device is in use, a “no bluetooth” icon appears on the ASK team dashboard alerting them to the situation.

In the end, we went with option three and decided to turn off beacon ranging for all unsupported devices. This means ASK will work on most Android devices, but on devices where we’ve disabled beacon ranging, the ASK team will be flying blind with “no location found.” They can chat with users, but the object information won’t be so readily at their fingertips for what we hope is a very small set of edge-case users.
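
On the dashboard side, the handling is essentially a fallback. This is just a sketch of the decision, not our developers’ actual implementation; the message fields and the beacon-to-artwork mapping are hypothetical.

```python
# Minimal stand-in for the beacon-to-artwork mapping the dashboard keeps.
BEACONS = {"beacon-3f-12": ["object-1", "object-2"]}

def gallery_context(message: dict) -> dict:
    beacon_id = message.get("beacon_id")
    if beacon_id is None:
        # Unsupported device or Bluetooth off: show the "no bluetooth" icon and
        # "no location found", and let the team search the collection manually.
        return {"location": "no location found", "artworks": []}
    return {"location": beacon_id, "artworks": BEACONS.get(beacon_id, [])}

# Example: a message from an unsupported device arrives with no beacon data.
print(gallery_context({"body": "Who made this sculpture?"}))
```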

A Personal Invitation to ASK (Tue, 11 Aug 2015)

Knowing what we know about our visitors, we figured pretty early on that we would need to offer face time with staff as part of our ASK onboarding, that people might need a little help downloading and getting started. Turns out we were only sort of correct.

We thought people would have trouble with downloading and enabling the sheer number of settings our app requires, but turns out this part was easy.

People have needed that face time, but not so much for help with the download process per se; rather, they need us to actually explain the app and encourage them to download it in the first place. This was quite surprising to us, considering we require users to turn on multiple services for the app to function properly (wifi, location services, Bluetooth, notifications, and privacy settings for the camera).

As I mentioned in my previous post, we’ve had some challenges figuring out messaging around ASK. After much initial testing, we think we’ve landed on some ways in which to move forward. This process was heavily informed by the work of our Visitor Liaison team. These three individuals, each of whom has worked with us in the past, were brought on board (in a part-time, temporary capacity) specifically to help us determine how to talk about the app—the “pitch” in both long and short form—and where visitors are most receptive to hearing it.

Visitor Liaisons are identified by cycling caps, which so far has worked pretty well. We may find that as the lobby gets busier, they need to wear t-shirts or something even more visible in addition. From left to right: Emily, Kadeem, and Steve.

Steve Burges is a PhD student in Art History at Boston University and former curatorial intern in our Egyptian, Classical, and Ancient Near Eastern Arts department. Kadeem Lundy is a former floor staff member at the Intrepid Sea, Air & Space Museum and was a teen apprentice here for three years. Emily Brillon was one of the gallery hosts for our first pilot test and has recently completed her Bachelor’s in Art History, Museum, and Curatorial Studies at Empire State College.

This team has been really key in helping us hone the messaging and in encouraging visitors to participate in ASK. From their efforts, we’ve learned which characteristics of the app experience visitors respond to most: that it’s a customized, personalized experience; that it’s about real people, the idea of an expert on demand; and the immediacy, that answers arrive right away, on the spot.

People are most receptive when they are in line.

We are also beginning to see patterns in where visitors are most receptive. We’ve been using the lines during busy weekends to our advantage, both for ticketing and the elevators—captive audiences help. But what has been most interesting to discover is that the Liaisons can most effectively get people downloading and using the app if they are the second point of contact.

At the ticketing desk, visitors are asked if they are iPhone users. If so, they get a special tag (right) which helps us differentiate them.

As Shelley introduced in her previous post, so far the most important point in our messaging is our ticketing process. A few weeks ago, our admissions staff began telling people about the app at the point of sale. The goal here is to identify iPhone users early (our potential audience) and to inform them about the app. iPhone users are given a branded tag so that Liaisons know who to approach. When this process is in play, the Liaisons’ job is that much easier because visitors know we have an app. Then the Liaison can focus on the hard part—explaining how it works.
