Shelley Bernstein – BKM TECH
Technology blog of the Brooklyn Museum
https://www.brooklynmuseum.org/community/blogosphere

ASK Snippets Integrated Into BKM Website
/2016/04/14/ask-snippets-integrated-into-bkm-website/
Thu, 14 Apr 2016 17:02:42 +0000

A number of things happen after a visitor has a chat with our ASK team. At the end of each day, the ASK team takes the long-form conversations happening in the app and breaks that content down into what we call “snippets.” A snippet contains a question asked by a visitor, photos that may have been taken as part of that conversation, and the answer given by the ASK team. The resulting snippet is then tagged with the object’s accession number.

So, what do we do with all these snippets of conversation? Once a snippet is tagged with an object’s accession number, we use it in a number of ways internally. For starters, the snippet becomes available to the team in the dashboard, ensuring the team has previous conversations at their fingertips when someone else asks questions about the same object. Additionally, snippets are exported into Google Docs on a quarterly basis and sent to curatorial for review. Curators review all the snippets for their collections and exhibitions, meet with the ASK team to discuss the content, and then certain snippets—those that contain the most accurate answers and are most on point with curatorial vision—are marked “curator vetted.”
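
To make the shape of this content concrete, here is a minimal Kotlin sketch of what a single snippet carries, based on the description above; the field names are illustrative rather than our actual schema.

```kotlin
// Illustrative only: field names are assumptions drawn from the prose above,
// not the real ASK data model.
data class Snippet(
    val question: String,                    // the visitor's question
    val answer: String,                      // the ASK team's reply
    val photos: List<String> = emptyList(),  // photos taken as part of the conversation
    val accessionNumber: String,             // the object this exchange is tagged with
    val curatorVetted: Boolean = false       // flipped on after quarterly curatorial review
)
```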

These post-processing steps ensure that we can quantify how many questions have been asked and, more importantly, what questions are being asked about our objects and exhibitions. The process also keeps communication flowing between the ASK team and our curatorial staff, something we’ve found critical in this project. It gives curatorial the chance to learn from the conversations taking place in our galleries every day.

ASK snippets can now be seen on object pages like in this example of our Spacelander Bicycle.

But, that’s not all. Once a snippet is marked “curator vetted,” it becomes available for various uses. Internally, we’ve developed a portal where staff can search for snippets related to objects and exhibitions. Externally, these snippets have now been integrated into the website and you’ll find them in two areas. In our collection online, you’ll see snippets related to specific objects—check out Spacelander Bicycle, Peaceable Kingdom, or Avarice for examples. Additionally, our special exhibitions now have a dedicated page of snippets related to each exhibition; you can see examples on the Steve Powers, This Place, and Agitprop pages.

The beauty of this is that now everyone can start to see the website immersed in the activity that’s been happening through the app, and there’s a certain life that this user-generated content brings to our own material. There are, of course, myriad ways conversational content can help shape our online presence. This is just a start to something we hope to see grow more robust over time.

Selectively Flying Blind After Android User Testing
/2016/04/05/selectively-flying-blind-after-android-user-testing/
Tue, 05 Apr 2016 13:47:04 +0000

ASK Brooklyn Museum for Android is now available on Google Play. We had one early quandary, but this was a fairly straightforward development process. That is, until we got to user testing.

User testing sessions are a critical part of the process.

Android development is tricky. There are a lot of devices, all running different system versions in various states of update, with hardware manufactured by different parties and distributed independently or by various carriers. By comparison, iOS is a fairly controlled environment; we knew we could develop the iOS version of the app in house, but it was clear to us that an external team would need to tackle Android, so we contracted with HappyFunCorp.

At the beginning of our Android development process, we looked at our Google Analytics to figure out which devices and systems made up the majority of our traffic, and this became our supported device list. Simply put, there are too many devices running too many systems to be able to support all of them, so you have to pick your battles. We settled on supporting devices running at least Android 4.3 on Samsung Galaxy S4 (and newer) and Nexus 5 (and newer) hardware.

As with our iOS release, we did a number of invited user testing sessions onsite at the Museum. Many of these sessions were attended by just a few users giving us their time. Each session helped us surface bugs, but it was difficult to get a critical mass. One thing we started to see, however, is that at each session users showed up with hardware that was not on our supported list and, inevitably, we saw a lot of bugs on these devices. It was the very well attended session with Bloomberg employees that helped us identify a trend, come to terms with it, and make a critical decision that will affect all Android users (for the better).

Bloomberg employees helped us test both our iOS and Android app prior to launch.

Most of the bugs we found on unsupported devices came down to problems with beacon ranging. We could reliably require Bluetooth on supported devices, but on others we’d see two problems. First, if a device didn’t have Bluetooth support, the user couldn’t use the app at all. This requirement made sense on iOS because of the near ubiquity of Bluetooth on current hardware, but it was more difficult given the plethora of Android hardware situations. Second, on unsupported devices beacon ranging was hit or miss, often causing problems like device sluggishness or outright crashes.

It was during the Bloomberg testing session, when we could see a number of users all having the same problems, that the issue became really clear.

We had three options. Option one would be to not allow the download on unsupported devices, but this would mean some users could find it in Google Play while others wouldn’t see it at all. This presented a nightmare for messaging—“We have an Android app… sort of…” Option two would allow the download, but many users would experience bugs and it would be difficult to communicate why. Option three would be to turn off Bluetooth/beacon ranging for all unsupported devices, but this would mean the ASK team would not see a user’s location.

When an unsupported device is in use, a “no bluetooth” icon appears on the ASK team dashboard alerting them to the situation.

In the end, we went with option three and decided to turn off beacon ranging for all unsupported devices. This means ASK will work on most Android devices, but on devices where we’ve disabled beacon ranging, the ASK team will be flying blind with “no location found.” They can chat with users, but the object information won’t be as readily at their fingertips; we hope these users represent the very edge case.
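
As a rough illustration of that decision (not our actual code), the check boils down to something like the sketch below; the model prefixes and the helper name are placeholders.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import android.os.Build

// Placeholder list; the real supported-device list came from our Google Analytics data.
private val supportedModelPrefixes = listOf("SM-G", "Nexus 5", "Nexus 6")

fun shouldRangeBeacons(context: Context): Boolean {
    val hasBle = context.packageManager.hasSystemFeature(PackageManager.FEATURE_BLUETOOTH_LE)
    val newEnoughOs = Build.VERSION.SDK_INT >= 18  // Android 4.3, the first release with BLE APIs
    val supportedHardware = supportedModelPrefixes.any { Build.MODEL.startsWith(it) }
    // Devices that fail any of these checks can still run the app, but ranging stays off
    // and the ASK team simply sees "no location found" for that user.
    return hasBle && newEnoughOs && supportedHardware
}
```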

How Important is Anonymity When Asking a Question?
/2016/03/02/how-important-is-anonymity-when-asking-a-question/
Wed, 02 Mar 2016 19:05:02 +0000

As reported earlier, the Android version of our ASK app is due to launch in April. For the most part, the app will look and feel the same. There will be adjustments to the way menus work to make them feel more appropriate for this platform, but nothing major. The biggest difference we’ve found is a potential challenge in the way we identify and retain information on unique users. This was such an interesting issue for ASK that it warrants its own post, because what’s at stake is the core engagement of the product.

As I start to outline what’s going on here, keep in mind there seems to be a general fear in the world about the perception of asking stupid questions. In the early days of user testing we heard this time and time again, and it was clear from the outset that if we were going to raise the bar on the interaction, we were going to have to give people a safe space to engage. From the get-go, we made the decision that we wouldn’t onboard users by asking for personal information; we wouldn’t collect a login, or even a name, to get started. Essentially, we know if you return, but we don’t know anything about you because we don’t ask for any information up front.

In iOS, we use an Apple ID to recognize a user across multiple devices (if they own an iPad and an iPhone, for example) and, as long as they use the same Apple ID, it carries them through when they upgrade their phone. All of this is pretty seamless on iOS because it happens out of view of the user and, bonus, we are not storing personally identifiable information, so we’re where we want to be on privacy.

Going with the Google ID to recognize users across devices may be problematic for ASK engagement.

Android operates a little differently. We can use a Google ID, but this action happens in view of the user, and this is what creates a conundrum. On first use, a user is presented with a list of Google IDs from which to choose. The good news is we still wouldn’t be storing personal information, but the really bad news is twofold. First, it’s impossible to tell users at that moment that we are not storing personal information, and the natural assumption may be that by selecting an ID we’d know them more deeply than we do. Second, a known user ID associated with the app may significantly change user interaction because it runs counter to what we’ve heard from users. Namely, people like the anonymity of the app for fear of asking what might be perceived as a stupid question; the app feels like a safe space to explore.

The issue, for ASK, is a big one. A known user ID may change that behavior, so in the interest of time we’ve decided to go with the device ID (seamless to users) and then think about switching to a Google ID post-launch, when we have enough breathing room to do focus group testing around the change.

In the end, we’ve decided to use the device ID, but going this route only helps us identify the same user on that particular device; if a user upgrades or uses a different device, they look like a new user to us. Using the device ID means we can’t effectively welcome someone back, see their conversation history, or make recommendations that build on that relationship.
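
For illustration, Android offers a device-scoped identifier that can be read without any account picker or personal information; a minimal sketch, assuming ANDROID_ID is an acceptable stand-in for whatever identifier the app actually uses:

```kotlin
import android.content.Context
import android.provider.Settings

// Stable for this device until a factory reset, so a returning visitor is recognized,
// but a new phone or a second device looks like a brand-new user to the ASK team.
fun anonymousUserId(context: Context): String =
    Settings.Secure.getString(context.contentResolver, Settings.Secure.ANDROID_ID)
```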

We’re okay with this as a stopgap measure because it’s the most surefire way for us to retain the engagement that we know has been working. Post-launch, however, this will be one of the first things we have to think about refactoring because those long-term goals of building relationships are key. As we rethink this, we’ll need to do a lot of focus groups paired with A/B testing to see if engagement changes with a Google ID and, if so, how much.

Code Release: Going from iOS to Android, Solving iBeacon Issues Along the Way
/2016/02/23/code-release-going-from-ios-to-droid-solving-ibeacon-issues-along-the-way/
Tue, 23 Feb 2016 17:44:39 +0000

Our Android release is coming in April. I’m often asked about our strategy to expand into Android when 74% of our users are on iOS devices. The reasoning is pretty simple: we have a mandate at the institution to make every product as accessible as possible, and user expectation dictates ASK availability on both platforms. The market share of Android devices is only growing and it’s way better to be ahead of that curve than behind it.

When thinking about Android expansion we had to re-evaluate how we were staffing mobile. In our first year it was invaluable to have someone on the internal team dedicated to iOS development because the process at that time was more iterative. We were developing features and testing with users as we went along—having someone on staff to make changes as we discovered them was critical. Moving beyond that stage, we had to reconsider the most efficient way of working and decided the best route forward was to shift from staffing internally to hiring a firm. We contracted HappyFunCorp (HFC) to develop the ASK app for Android using our iOS app as a model. HFC is also handling our ongoing iOS maintenance, allowing us to shift away from internal staffing entirely.

The Android version of the app will function the same as the iOS version, and in a future post I’ll talk about some of the changes that make ASK feel more appropriate for this platform, as well as one of the bigger challenges we hit. Mostly, though, the transition to Android has been straightforward and, luckily for us, that meant we could concentrate on more vexing issues like how the app detects beacons and sends locations back to the ASK team. What follows is a lengthy post that details how our code works and the adjustments we’ve made. We are also taking this opportunity to release all of our code related to beacons in both iOS and Android, regardless of the state it’s in—read on.

In Android, permissions are granted in a one-step process at initial run. iOS, by contrast, stages permission actions as a user needs them. This delay in granting access to Bluetooth may be causing “no location found” on a user’s first messages because we can’t range for beacons and build our array quickly enough.

So let’s take a look at the problem at hand. We’ve been seeing “no location found” on 15% of messages sent to the team, with a high proportion of those on a user’s first message in a chat. We have a hunch this is likely because the beacon ranging starts too late. In iOS, ranging only begins when a user turns Bluetooth on, and this prompt occurs very close to when a user would send that first message; turning on Bluetooth is one of many things a user needs to enable, and all of these prompts have been carefully staggered so that users are not overwhelmed at the start. In Android, a user is asked for all permissions as a one-step process up front, and this means ranging for beacons starts right away. We think this change will help enormously, but we are still testing and this is to be determined.
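
For context, here is a minimal sketch of kicking off ranging at app start with the Android Beacon Library; the region name is made up and the layout string is the library’s standard iBeacon layout, not anything specific to ASK.

```kotlin
import android.app.Application
import org.altbeacon.beacon.*

class AskApplication : Application(), BeaconConsumer {
    private lateinit var beaconManager: BeaconManager

    override fun onCreate() {
        super.onCreate()
        beaconManager = BeaconManager.getInstanceForApplication(this)
        // iBeacon layout; the library does not parse iBeacon frames out of the box.
        beaconManager.beaconParsers.add(
            BeaconParser().setBeaconLayout("m:2-3=0215,i:4-19,i:20-21,i:22-23,p:24-24")
        )
        beaconManager.bind(this)
    }

    override fun onBeaconServiceConnect() {
        beaconManager.addRangeNotifier { beacons, _ ->
            // Fires roughly once a second with every beacon seen during the last cycle.
            beacons.forEach { /* feed it into the beacon cache */ }
        }
        // Because Android grants permissions up front, ranging can begin immediately.
        beaconManager.startRangingBeaconsInRegion(Region("ask-all-beacons", null, null, null))
    }
}
```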

The other cause of “no location found” is attributed to the human error side of the equation. We have an admin tool that keeps track of our beacons and assigns them to museum locations. A beacon may be missing from that tool, having been entered incorrectly (or not at all). To solve these issues the BKM web development team enabled server-side logging; each time a beacon that is not in the beacon database is sent to the dashboard, we log it in an admin tool so we can periodically use the data to chase down these problems.
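
The logging itself is simple; here is a language-agnostic sketch (written in Kotlin for consistency) of the check, with class and field names that are ours rather than the admin tool’s:

```kotlin
data class BeaconId(val major: Int, val minor: Int)

class BeaconRegistry(private val knownBeacons: Map<BeaconId, String /* gallery */>) {
    private val unknownLog = mutableListOf<Pair<BeaconId, Long>>()

    // Returns the gallery for a reported beacon, or null ("no location found").
    fun locationFor(id: BeaconId?, now: Long = System.currentTimeMillis()): String? {
        if (id == null) return null
        val gallery = knownBeacons[id]
        if (gallery == null) unknownLog += id to now  // surfaced later in the admin report
        return gallery
    }
}
```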

Admin tool showing when we receive an invalid beacon ID, likely the result of a data entry error in our beacon tool.

The HFC team has also coded a debugger tool within the app which shows, in real time, all of the beacons in the application’s cache and the closest beacon the application would send with a message. This helps us get visibility beyond the Estimote app because it shows what’s happening in our own application. Chris Wilson at HFC explains:

We now have a Chat/Beacon Log page that shows the list of messages sent since the list was reset. It has the beacons (with message optionally visible) showing the message timestamp, and the beacon’s major and minor ids. It uses live beacon data from the museum’s web api to determine if the beacons associated with these messages are valid, invalid, or if no beacon info was sent. The messages in the list are then color coded based on these designations. For easy visibility, messages with valid beacons are colored green, invalid designations are colored yellow, and messages sent with no beacon data are colored red. There are also total counts for each designation visible on the log screen.
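
In other words, each logged message is bucketed into one of three states and colored accordingly; a tiny sketch of that classification (the names and color values here are ours, not HFC’s):

```kotlin
enum class BeaconValidity { VALID, INVALID, NONE }

fun classify(beaconId: Pair<Int, Int>?, knownBeacons: Set<Pair<Int, Int>>): BeaconValidity {
    if (beaconId == null) return BeaconValidity.NONE           // no beacon data sent with the message
    return if (beaconId in knownBeacons) BeaconValidity.VALID  // matches the museum's web API data
    else BeaconValidity.INVALID                                 // unknown major/minor pair
}

fun colorFor(validity: BeaconValidity): Long = when (validity) {
    BeaconValidity.VALID   -> 0xFF4CAF50  // green
    BeaconValidity.INVALID -> 0xFFFFEB3B  // yellow
    BeaconValidity.NONE    -> 0xFFF44336  // red
}
```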

 

Mobile side debugger tool developed by HFC to show beacons being ranged and which beacon would be sent with a message if a user were to hit send.

Our coding changes have not been limited to the addition of debugging tools, and as we discuss improvements it’s worth reviewing how the beacon code in our ASK app actually works. In a nutshell, as a user walks around the building the app ranges beacons as the device encounters them and builds an array with the associated beacon distances. When a user composes a message and hits send, we send along the closest beacon in that array. The following bulleted lists come directly from HFC; a short Kotlin sketch of the Android selection logic follows them:

Here’s the way the (newer) Android code works—

  • On app start, the app begins ranging beacons using the Android Beacon Library.
  • About every second the beacon ranging returns a list of beacons that have been seen.
  • It cycles through each beacon and adds it to the cache, removing old copies of beacons that have since been ranged with new distances.
  • It removes beacons from the cache that have outlived the TTL (currently 2,500 ms – this is something we can try to tweak to improve accuracy).
  • It then cycles through the list to determine which beacon is closest, replacing the closest beacon variable with this beacon. TTL on this is 3 minutes.
  • The closest beacon variable is picked up and sent along to the chat server when the user hits the send button.

Here’s what we know about the way the (older) iOS code works—

  • On app start, the app starts beacon ranging. However, the Bluetooth check is only conducted when the user tries to send a message. Ranging requires Bluetooth to be on, so this may be the source of “no location found” issues.
  • When beacon ranging is run, an array of beacons, sorted by proximity, is returned every second. If the proximity is unknown, the beacon is removed from the array. Only the first beacon (the one with the closest proximity in the array) is used until the next ranging cycle. If the cache is empty, that beacon is added to the cache.
  • If the cache is not empty, then the first beacon on this list (the one with the closest proximity) is compared with the last object in the cache. (1) If the major/minor ID of the beacon is the same AND the distance is less than the cached object’s, the beacon is added to the list. If the major/minor ID of the beacon is the same and the distance is more, then it is not added to the cache. (2) If the major/minor ID of the beacon is different from the last object, it is added to the bottom of the cache.
  • The last beacon in the cache array is grabbed along with the message when the user taps the “send” button in the chat message. If the beacon has been in the cache for more than 5 minutes, no beacon information will be sent.

So, what are the differences?

  • Beacons aren’t removed from the cache in the iOS app, so duplicate beacons with different distances are added.
  • Rather than comparing all new beacons found to all cache beacons and updating existing beacons and adding new ones as in the Android app, the iOS app compares only the last beacon found to see if it is closer than the last cache array beacon.
  • There is a TTL of 5 minutes in the cache in the iOS app, whereas the TTL on beacons in the cache in the Android app is 2.5 seconds, and the TTL of the closest beacon if no new beacons have been ranged is 3 minutes.
  • In the Android app, in addition to the short-lived cache of beacons, there is also a closest-beacon variable set in case no beacons have been ranged for a period of time. This beacon is then sent with messages if it has been more than 2.5 seconds but less than 3 minutes since it was ranged. In the iOS app there is no concept of a closest-beacon variable.
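
Putting the Android side together, here is a minimal Kotlin sketch of the cache and closest-beacon behavior described above; the class and property names are ours, not HFC’s actual implementation.

```kotlin
import org.altbeacon.beacon.Beacon

class BeaconCache(
    private val cacheTtlMs: Long = 2_500,     // beacons older than this drop out of the cache
    private val closestTtlMs: Long = 180_000  // the closest-beacon fallback lives for 3 minutes
) {
    private data class Entry(val beacon: Beacon, val seenAt: Long)
    private val cache = mutableMapOf<String, Entry>()  // keyed on major:minor
    private var closest: Entry? = null

    // Called about once a second with the latest ranging results.
    fun onBeaconsRanged(beacons: Collection<Beacon>, now: Long = System.currentTimeMillis()) {
        // Add or refresh each ranged beacon, replacing older copies with new distances.
        beacons.forEach { b -> cache["${b.id2}:${b.id3}"] = Entry(b, now) }
        // Drop anything that has outlived the cache TTL.
        cache.values.removeAll { now - it.seenAt > cacheTtlMs }
        // Keep the single closest beacon around as a longer-lived fallback.
        cache.values.minByOrNull { it.beacon.distance }?.let { closest = it }
    }

    // Picked up when the user hits send; null means "no location found".
    fun beaconForMessage(now: Long = System.currentTimeMillis()): Beacon? =
        closest?.takeIf { now - it.seenAt <= closestTtlMs }?.beacon
}
```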

We are now going to begin the process of testing Android with users to see if these changes have helped and, if so, we’ll start to port these lessons learned back into the iOS code after April. In the meantime, given how many people are working (and struggling) with beacon deploys, we’ve decided to release both sets of code in the state they are currently in, along with the mobile-side debugging tools. Having a fresh set of eyes from HFC looking at the code has helped a bunch and we hope having many more eyes on this code will only help everyone.

Lastly, I’d be remiss if I didn’t take this opportunity to talk a bit about our funders as related to this post in particular. ASK Brooklyn Museum is supported by Bloomberg Philanthropies, and one reason we are releasing this code today is the amount of encouragement and enthusiasm that has come from the Bloomberg team toward information sharing in all states of a project’s progress. This blog, our lessons, and our code are published in large part due to their support; we are honored to be as open as we are because of the standard they have set among their grantees.

Getting Visibility on the iBeacon Problem
/2016/02/23/getting-visibility-on-the-ibeacon-problem/
Tue, 23 Feb 2016 16:21:33 +0000

It’s been just over a year since I wrote about the realities of installing iBeacon to scale. Our ASK app, funded by Bloomberg Philanthropies, has been active for the past year and the beacon-powered solution that sends a visitor’s location to the ASK team continues to be difficult; this post will help quantify some of the issues we have been seeing.

We started to look deeper into our beacon difficulties roughly six months ago because the project had been on the floor long enough to give us patterns in the data. The timing was critical, too, because we were about to begin Android development; working to solve the problems now would help make that deploy more straightforward. To get going quickly, we implemented a manual tracking system using Survey Monkey. The ASK team would log each time they ran into an issue with the location information in an ASK chat. This was not an easy thing to juggle because mid-chat they’d need to fill out a long and detailed form about what was happening and where, but it showed us two very clear trends. Problem messages were either coming in with “no location found” or nearby signal bleed was causing an incorrect location to be sent to the team.

If you thought the manual tracking system was labor intensive, what comes next may boggle your mind. As we started to troubleshoot beacon issues, we wanted a clean slate. This meant updating the firmware on all the beacons, checking the battery life, and turning off the advanced power settings that Estimote provides. This was a painstakingly manual process where I’d have to go and update each unit one by one. In some cases, I’d use Estimote’s cloud tool to pre-select certain actions, but I’d still have to walk to each unit to execute the changes, and use of the tool hardly made things faster. More frustrating was the process of updating a unit’s firmware, because the phone has to be in close physical proximity to the beacon you are trying to update; this meant carting a ladder around the galleries all day because all of our beacons are installed high on the wall.

Using a selfie stick to update Estimote beacon firmware in the galleries.

Our solve for this—and I can’t claim credit for coming up with this one because it was our Head of IT, Christina, who drummed this up—was to use a selfie stick to circumvent the need for a ladder. In the end, it took 11 staff hours to make the needed updates/changes to the roughly 150 beacons installed in the Museum. There is some good news. After a year on the floor, the batteries are still holding strong with most units reporting 36+ months of battery life remaining.

With two clear issues on the table, we started down a technical path toward solving them. The web development team pulled data to see if we could quantify how big these issues were. Turns out, “no location found” was overwhelmingly larger than the “incorrect location” problem; roughly 15% of messages coming into ASK didn’t have a location. This was puzzling because if you walked around with the Estimote app ranging for beacons, it would show that a dead spot was unlikely; if anything we have too many beacons creating too much noise (as evidenced by the incorrect location problem). Diving in a bit more, the data showed us that within the 15% of messages with no location data, 26% had no location on a user’s first message in a chat. This pointed to a problem with the beacon detection code in our ASK app.

While this is about the last thing I wanted to hear, it was a darn good time to figure it out. With Android development on the horizon, we decided to evaluate the iOS code to try and figure out why this was happening, but fix the issue in the Android code. Then we could go back after learning those lessons to refactor how the iOS code is working. I’m going to expand on this at length—code release included—in my next post.

For now, let’s move on to the issue not affected by code—the incorrect location problem. Beacon placement in a situation like this one is challenging—we’ve got an older and complicated building with walls of all different material types, special exhibitions that change the physical spaces constantly, object cases that affect signal, and even attendance levels that cause unexpected fluctuations in signal. To get a handle on signal bleed, we needed two things: a way to quantify where we had problems and a way the ASK team could work around them because, no matter what, things may get better with bleed, but that problem is always going to be there with this technology.

The ASK team can toggle next/previous to show objects in adjacent locations if an incorrect beacon location is delivered.

The solve for the incorrect location problem gives the ASK team a tool to work around the issue while leveraging crowdsourcing to diagnose where we have the problem. In the ASK dashboard, the ASK team has a way to toggle the location. If it’s clear that an incorrect location has been sent, they can use the next/previous arrows to see the objects in adjacent locations; I have the ability to set the adjacencies in an admin tool. As an example, I can define that beacon group 59 (European) is adjacent to beacon group 34 (Assyrian) and 29 (Egyptian); if a message gets sent from European, but the visitor is clearly asking about an Egyptian object, the team can use the toggle to get to the adjacent Egyptian objects.
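
Conceptually, the adjacency data is just a small map from a beacon group to its neighbors; a sketch using the example groups above (the structure and names are illustrative, and the real map lives in the admin tool):

```kotlin
// Beacon-group numbers from the example above.
val adjacentGroups = mapOf(
    59 to listOf(34, 29),  // European -> Assyrian, Egyptian
    34 to listOf(59, 29),  // Assyrian -> European, Egyptian
    29 to listOf(59, 34)   // Egyptian -> European, Assyrian
)

// The dashboard's next/previous toggle walks through these neighbors when the
// reported group is clearly wrong, so the team can pull up the right objects.
fun toggleCandidates(reportedGroup: Int): List<Int> = adjacentGroups[reportedGroup] ?: emptyList()
```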

Admin tool showing which beacon group is returned and how many times the ASK team has clicked to a different location using the previous/next toggle in the dashboard.

The toggle helps the ASK team work around the bleed in the most immediate sense, but the toggle is also helping us discover our pain points. The web development team is now tracking clicks on the toggle and a report in our admin tool displays toggle clicks and where they land. This then helps me go into the galleries and adjust the transmit rates and/or placement of beacons to lessen the instances of the problem.

Speaking of, when I go into the galleries and adjust the transmit rates here are the general rules of thumb that I’m using; keep in mind, for ASK all we care about is knowing the room a visitor is standing in and we don’t need more granularity. Larger, cavernous spaces with fixed boundaries and little chance of bleed—our lobby and west wing special exhibition galleries—get fewer beacons with higher transmit rates for better coverage. Smaller galleries closer together—most of our permanent collections—tend to get more beacons with lower transmit rates; this helps us get the coverage we need while keeping the bleed at a minimum.

Asking with a New Set of Eyes
/2015/12/22/asking-with-a-new-set-of-eyes/
Tue, 22 Dec 2015 16:08:50 +0000

I’m sure it will come as no surprise to anyone that getting out of your own head every once in a while can have great benefits. We’ve been working on ASK for more than a couple of years and we had the unique opportunity to do just that when I got an email from the staff at Cornell Tech that read:

Cornell Tech’s current campus is up and running in Chelsea while it waits for its future home on Roosevelt Island in 2017.

“We are reaching out to invite you to submit a Company Challenge for the fall semester of 2015. Cornell Tech Company Challenges inspire integrated teams of computer science, business, and information science Masters students to deliver new business ideas and prototypes in response to challenges posed by leading startups, companies, and nonprofits. Company Challenges are expressed in the form of a ‘How might we…’ question that goes beyond a problem to solve or work to be done, giving our students the freedom to innovate, explore different paths, and impress you with their creativity.”

The BKM Tech team submitted a couple of ideas and one of the student teams decided to take us up on our offer to find out: how might we use data collected from our ASK Brooklyn Museum mobile app to greatly improve the visitor experience at the museum? The Cornell student team—Sean Herman, Daniel Feusse, Gideon Glass, Jean Lin, and Yilin Xu—worked all semester through onsite meetings, user testing with our visitors, and three studio sprints to come up with a prototype that would address our challenge.

The Cornell team moved through the challenge with a lot of ideas and decided to run with an alternate entry point to the ASK app, specifically for those visitors who might not immediately have a question or who may feel intimidated trying to come up with one. In their project, a visitor could take a photo using a web app, and image recognition would then be leveraged to match the photo with existing questions in our ASK dataset. These questions could be used to pique the curiosity of users, who could then ask one of them as a starting point using the existing ASK app. It’s a nice way to get a conversation started and a bit of an easier onboarding process than having to come up with your own question right out of the gate. Take a look at the demo below.
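
A hedged sketch of that flow (every function name here is hypothetical; this is not the Moodstocks API or the students’ actual code):

```kotlin
// Given a visitor's photo, suggest previous ASK questions about the recognized object
// as conversation starters. The helpers below are stand-ins for the real services.
data class SnippetEntry(val question: String, val answer: String)

fun recognizeObject(photo: ByteArray): String? = TODO("image recognition service")
fun snippetsFor(accessionNumber: String): List<SnippetEntry> = TODO("snippet store lookup")

fun suggestQuestions(photo: ByteArray): List<String> {
    val accessionNumber = recognizeObject(photo) ?: return emptyList()
    return snippetsFor(accessionNumber)
        .map { it.question }  // surfaced in the app as possible starting points
        .take(3)
}
```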

Sean Herman presents the team’s ASK BKM solution during Cornell Tech’s studio sprint #3.

Now, readers of this blog might be thinking, “wait a minute, this goes against a bunch of what we’ve been reading here,” and you’re right: there are a ton of reasons why our own internal team didn’t go this route. We had developed our own solution knowing the feedback in user testing was telling us that people were looking at works more closely to figure out what questions to ask (behavior we wanted). We were also seeing that if we provided too much material, people were really interested in those questions, but they got sucked into screens (behavior we didn’t want). Our own experience in past projects had shown us that the more difficult the task (in this case figuring out what to ask), the more deeply engaged users became, so our current solution was implemented to address all of these things.

But just because we’ve developed things this way and are happy with those results doesn’t mean we’ve 100% hit the mark. The hardest part of working with the student team was actually letting go of what we knew and, instead, letting this group of students go where they wanted to go. Having an outside group try the app and naturally go to this place for a solution was incredibly valuable and, better still, how they decided to execute their own implementation also taught us a great deal, so let’s talk about what we learned.

We’ve reported on the blog about our adoption rate being less than what we want. We hope marketing the app will help fix this (coming in April), but the Cornell team solution had us wondering if we should start playing with alternate entry points to see if they would naturally encourage more use. If so, we could easily start comparing our current engagement benchmarks to new implementations. The idea of doing so in a web-based app is a quick way to experiment without the need for expensive native development. As things work and engagement is sustained, we could bake these alternates into the app as part of the full experience after they have been fully tested. That’s a big win for us because our default right now is that we’ve built a great product that works, so we don’t want to mess with it. This goes against the iterative process we used to make the app in the first place, and the Cornell team helped energize us about what’s possible in continuing to hone the experience as we move through the project.

The Cornell team’s technical implementation, which included the Moodstocks API, our snippet data, and our open collection API.

There are other things we started to think about, too, as a result of the technical implementation the Cornell team used. Could image recognition support our iBeacon implementation? You know we’ve struggled with beacons; the results coming back to the team about where a person is standing have not been reliable. We could start experimenting with a combination of the two to see if we could get better results. Based on the image a person takes combined with probable beacon locations, could the dashboard more accurately predict which object a person is asking about and serve the metadata to the team in a less cumbersome way? Yes, very likely.

We also started to think about the ASK start screen. It’s a pretty simple call to action (“Ask about art and we’ll answer right now”), but that call requires you to immediately have a question. What if we changed that to something more like what you see in the Cornell demo and went with “Take a photo of a work that interests you and we’ll answer right now”? Would that streamline the entry process to something people do naturally (photos) and get them started conversing more easily? This would be easy to test using a paper prototype.

Cornell Team user testing image recognition with BKM visitors.

Lastly, we started to look at the QA model that the Cornell team hints at in their demo and wondered if that could be something we turn on or off during times of heavy load. So, for instance, if the team is flooded with requests and wait times are on the rise, having the app automatically start to provide previous questions and answers might help us scale. The need to scale is a good problem to have and not something we’ve experienced just yet, but that may change as we head into April. Having some thoughts around this now will only help us develop prototypes more quickly if the need arises.

In the end, I think you’ll start to see us taking some of these great ideas and playing with them more. This was a much needed spark for us at this stage of the ASK project and we can’t thank the Cornell team enough for their insight and their work throughout the semester. The ideas they were working with will likely end up being a part of the future of ASK.

Sleuthing Clues about the Future from Visitor Interaction
/2015/12/02/sleuthing-clues-about-the-future-from-visitor-interaction/
Wed, 02 Dec 2015 21:23:47 +0000

Things have been pretty quiet over here for a while—have you noticed? We had been blogging our progress on ASK weekly and in my last post we were talking about the experimentation and prototyping we’d been doing with the ASK team on the floor. So, where did we land?

Let’s start with our status of engagement. Visitors have been using the app and are having great experiences, as evidenced by five-star reviews in the app store and feedback we are getting directly through the in-app conversations. Use has remained mostly consistent with our earlier findings—a soft launch without much marketing is yielding a 1-2% use rate and we’ve had a little over 2,600 conversations thus far. Visitors are taking the app through multiple galleries and, on average, asking questions in at least two spaces. Power users—defined as those who ask questions in three or more galleries—ask questions in six galleries on average, and these users represent 18.79% of our total. Conversations are fairly deep, with chats consisting, on average, of 13 messages back and forth. One of the most compelling things we’ve seen is that people using the app can remember—sometimes days later—the names of the ASK team members who helped them. Beyond any numerical statistic, this particular trend shows us that the exchanges are both personal and meaningful in a way that is similar to face-to-face communication. That’s a big win for this app and something I’m incredibly proud of; it also shows just how great the ASK team members are at their jobs of engaging the public.

Coming into the museum there’s not much telling visitors we have an app, but that marketing plan is something we’ve been working on.

Having said all that, we’ll also tell you that if you’ve come to the museum lately you have not seen the ASK team interacting on the floor. From earlier posts, you know we had done a lot of prototyping work with the team in various locations and found that the best spot was likely going to be somewhere mid-visit. Finding a place for the team in the heart of the museum’s spaces has been a challenge, and with new institutional direction the galleries are changing considerably; essentially, we know what kind of home they need, but right now is not the most ideal time to be building it. This means the team is working behind the scenes and, to my own surprise, this does not seem to be affecting the actual user experience. If anything, the team is finding they are more equipped to handle incoming questions via the app because there are no additional distractions and they can communicate amongst themselves more effectively.

One issue in having the team off the floor is the lack of visibility in the museum. If you walked into the Brooklyn Museum today, you’d be hard pressed to know we have an app because we just don’t have much marketing and we are heavily relying on the front desk staff to tell people about it. That will change, however, in late spring when we do a more formal launch which will include a version for Android. This marketing plan is one of the most important things we are working on right now. Given what we are seeing with the engagement and the current institutional goals, the thought is we should let the marketing do the job of building awareness while the team thinks about other ways to engage on site. This might translate to meeting and greeting the public through public programming instead of a permanent presence on the floor.

What else have we been doing? This is a three year project—year one was about getting mobile into the hands of visitors, year two is figuring out what the interactions teach us, and year three is still very much a work in progress. The single most important thing we’ve been doing is data review because what we learn from the ASK interactions and how that could transform the visitor experience is at this initiative’s core.

Marina Klinger, the ASK Curatorial Liaison, has been organizing and leading data review meetings between the ASK team and our curators.

To this end, we’ve been meeting with curators in every single collection. Curators are given “snippet reports” which contain each exchange we’ve had with visitors on every object in their collection and these reports represent one of the deepest and richest data sets I’ve ever seen. We export exchanges into Google Docs (using their API) and share with curatorial teams who can comment on the conversations. We then have followup meetings to discuss the data. This process gets the interaction to curators, but it also serves our ongoing need to train the ASK team; curators can tell us where answers might need improvement and make sure we are aligned with curatorial vision. Already we are hearing the data is giving curatorial staff some key learnings which may help them think about visitor experience as they reinstall and/or make changes to galleries. We are also taking this opportunity to find out how often curators would like reports, how we can make reporting more efficient, and how reporting may need to differ from permanent collection galleries to special exhibitions.

So, while it may seem like we’ve been quiet over here, we’ve been steadily working away and making decent progress on this year two of learning from the interaction. It’s pretty critical to move through this year with great measure because what we learn at this stage helps us figure out what we want year three of ASK to be, so you may see us blogging less frequently, but you’ll find when we do we’ve got a lot of information to share. Speaking of, Sara will be up next to talk about similar meetings with education staff, engagement strategy, and the introduction of new staff members.

Seeking a Home on the Range
/2015/08/27/seeking-a-home-on-the-range/
Thu, 27 Aug 2015 15:33:07 +0000

As summer draws to a close, so does our testing for the location of our ASK team. You may remember the results from our earlier testing in our pavilion and just off the lobby. For the remainder of the summer we’ve continued testing in locations throughout the building to learn how various spaces work.

A very typical day in the lobby. One of our Visitor Liaisons tries to help stem the tide of questions, but once one person is there asking…more follow.

The lobby proved to be an incredibly tough spot. In this location, the team was highly visible, but this visibility was confusing because visitors saw them as general information points. And the kind of information visitors were looking for included everything from “Isn’t there a zoo around here?” (referring to the Prospect Park Zoo) to “I need to sign up for the Bernie Sanders campaign.” There was so much of this questioning going on, in fact, that it became difficult for the team to actually work and, in some cases, there were delays answering questions coming in via the app because the in-person interactions were proving too distracting. It should also be said that this working environment included plenty of noise.

Simply put, this location proved to be too early in a visitor’s trajectory for visitors to be aware that there is an app and who the ASK team is in relation to it. They need to hear about the app during the ticketing transaction and see the team as a second (or even third) point of contact for everything to really gel.

The sheer volume of pre-visit questions coming to the team necessitated a hack of our “staff workspace” signage. Normally, these signs are used only when desks are not occupied, but here the use has been adapted to help visitors identify what’s going on.

These findings do not necessarily mean the ASK team won’t eventually end up in the lobby, but they do help us figure out what that presence would need to look like in order to be more successful. A full marketing plan at the entry could help the awareness factor, so the team becomes a second point of contact even at this early stage. A “glass box” with planned interaction time a la Southbank Centre could also work in this location, allowing the team to get their work done. The planned interaction time would become key, though, in keeping with the project’s engagement goals (something Southbank did well through meetups and other scheduled interventions).

One big thing the lobby testing has taught us? Even with traffic patterns that now have much better clarity, a human presence is still something people really crave. We need to do some thinking here about the greeting process, especially in light of how to work with our new information desk, which is part of the Situ Studio-designed furniture set; our visitor services area is on this one.

We also tested team location in the galleries and some of the findings here have proven interesting. How close should the team be to works of art? How best to handle directional questions? When in the visit is the public most responsive to the team’s presence?—all of these questions are things we’ve been evaluating in this series of moves.

Testing in Connecting Cultures, where the team was more embedded in and among the works of art, helped show that proximity drives conversations about art.

The team was placed in our Connecting Cultures exhibition located on our first floor; this location is post-ticketing, but fairly early in a visit because this is considered an introduction to the Museum’s collection where some visitors begin their visit. Testing here was a little complicated due to construction in the area, which created a considerable amount of noise (the team requested ear plugs at one point). Construction also didn’t help us much because it closed off exits, so many visitors would get in the space and some of the questions they had for the team were directional along the lines of, “Now how do I get out of here?” Interestingly, we don’t get many of these directional queries when people are using the app itself and that’s great, but we ideally want the team in a location that can foster in-person conversations about art. This space proved interesting because once the team was embedded in the exhibition, the conversations about art were on the rise. In the data collected the construction seemed to cause an imbalance of directional questions, but this tide would likely be stemmed once the space was fully restored to its normal state.

Testing in our fourth floor elevator lobby, where the team’s presence is a more cohesive unit and there’s proximity to works of art, but the space is also transitional, which has its own set of pluses and minuses.

Our next testing (going on now) has involved our elevator lobbies on the fourth and fifth floors. These are small spaces, so the team has a concentrated visible presence. These spaces are used for small exhibitions and/or have works installed, but they are also transitional in that most people passing through them are on their way somewhere. Both spaces are in a direct traffic line to special exhibitions. The fifth floor is unique in that most people start their visit on the fifth floor and start to work their way down the building, so the team in the fifth floor elevator lobby is earlier in a visit. The fourth floor elevator lobby is still in the traffic line, but more of a mid-way point in someone’s visit.

Fourth floor testing showed us that being in the middle of a visit pattern may be very beneficial. In this location, people seem more ready to talk about art and the team’s presence is more recognized because in-building marketing prior to this point helps with the connection. In one recent interaction, I watched as someone stepped off the elevator quickly making her way through the space. She spotted the team and you could see the lightbulb go off—”Oh, you’re the one answering questions in the app? The answers are so great. Thank you so much.” This is exactly the kind of thing we hope to see with the team being so accessible.

We’re still testing these areas more fully, but there are some things we know already that will help us in our quest to find an appropriate home for this team:

  • Proximity to art helps drive art-related conversations.
  • Discovery of the team mid-visit helps recognition.
  • Transition spaces might be a good fit if the team is not overwhelmed with directional questions.
  • Directional questions are an inevitable part of being on the floor, so being in a space where it’s easy to give instructions—Bathroom? Take the elevator down one flight. Basquiat exhibition? Right down this hall.—helps put us in a position where we can at least answer quickly with minimal distraction.

During all of this testing, one thing has remained a constant. While the visibility of the ASK Team is important for the engagement goals of the program, their very presence does not seem to change our app’s usage numbers, so seeing the team at work does not necessarily help advertise the program.

As summer closes we’ve got a lot more to work with and we’ll begin some internal discussions about where this team might eventually land. This will, of course, involve many more factors because we have to take the learnings and align them with the most important thing of all—institutional goals.

Measuring Success
/2015/08/19/measuring-success/
Wed, 19 Aug 2015 14:55:52 +0000

We all struggle with how to measure success. We’re thinking a lot about this right now as we begin to put the pieces together from what we’ve learned over the last ten weeks since ASK went on the floor. Three components help us determine the health of ASK: engagement goals, use rates, and (eventually) institutional knowledge gained from the incoming data.

When we look at engagement goals, Sara and I are really going for a gold standard. If someone gets a question asked and answered, is satisfied, and the conversation ends, that’s great, but we’ve already seen much deeper engagement with users and that’s what we’re shooting for. Our metrics can show us whether those deeper exchanges are happening. Our engagement goals include:

  • Does the conversation encourage people to look more closely at works of art?
  • Is the engagement personal and conversational?
  • Does the conversation offer visitors a deeper understanding of the works on view?
  • How thoroughly is the app used during someone’s visit?

We’re doing pretty well when it comes to engagement. We regularly audit chats to ensure that the conversation is leading people to look at art and that it has a conversational tone and feels personal. The ASK team is also constantly learning more about the collection and thinking about, experimenting with, and learning what kinds of information and conversation via the app open the door to deeper engagement with and understanding of the works. In September, we’ll begin the process of curatorial review of the content, too, which will add another series of checks and balances ensuring we hit this mark of quality control.

Right now the metrics show us conversations are fairly deep; 13 messages on average through this soft launch period (starting June 10 to the date of this post). The team is getting a feel for how much the app is used throughout a person’s visit; they’ve been having conversations throughout multiple exhibitions over the course of hours (likely an entire visit). Soon we’ll be adding a metric which will give us a use rate that also shows the average number of exhibitions, so we’ll be able to quantify this more fully. Of course, there are plenty of conversations that don’t go nearly as deep and don’t meet the goals above (we’ll be reporting more about this as we go), but we are pretty confident in saying the engagement level is on the higher end of the “success” matrix. The key to this success has been the ASK team, who’ve worked long and hard to study our collection and refine interaction with the public through the app.

Use rate is on the lower end of the matrix and this is where our focus is right now. We define our use rate by how many of our visitors are actually using the app to ask questions. From our mobile use survey results, we know that 89% of visitors have a smartphone, and we know from web analytics that 83% of our mobile traffic comes from iOS devices. So, we’ve roughly determined that, overall, 74% of the visitors coming through the doors have iOS devices and are therefore potential users. To get our use rate, we take 74% of attendance (eligible iOS-device-wielding users) and balance that against the number of conversations we see in the app, giving us a percentage of overall use.
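
As a worked example of that arithmetic (the attendance and conversation counts below are made up purely for illustration):

```kotlin
// use rate = conversations / (attendance * share of visitors carrying an iOS device)
fun useRate(attendance: Int, conversations: Int, eligibleShare: Double = 0.74): Double =
    conversations.toDouble() / (attendance * eligibleShare) * 100

fun main() {
    // e.g. 10,000 visitors and 100 in-app conversations over the same period:
    println("%.2f%%".format(useRate(attendance = 10_000, conversations = 100)))  // about 1.35%
}
```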

Use rate during the soft launch has been bouncing around a bit, from 0.90% to 1.96%, mostly averaging in the lower 1% area. All kinds of things affect this number: the placement of the team, how consistent the front desk staff is at pitching the app as a first point of contact, the total number of visitors in the building, and the effectiveness of messaging. As we continue to test and refine, the numbers shift accordingly, and we won’t really know our use rate until we “launch” in fall with messaging throughout the building, a home for our ASK team, and a fully tested process for the front desk pitch and greeting.

Our actual download rate doesn’t mean much, especially given the app only works in the building. Instead, the “use rate” is a key metric, defined as actual conversations compared to iPhone-wielding visitors. The one thing the download rate stats do show us is that the pattern of downloads runs in direct parallel with our open hours. Mondays and Tuesdays are the valleys in this chart and that’s also when we are closed to the public; Saturdays and Sundays are the peaks, when attendance is higher.

Still, even with these things in flux, our use rate is concerning because one trend we are seeing is a very low conversion on special exhibition traffic. As it stands, ASK is being used mostly by people who are in our permanent collection galleries. Don’t get me wrong, this is EXCELLENT; we’ve worked for years on various projects (comment kiosks, mobile tagging, QR codes, etc.) that would activate our permanent collections, and none have seen this kind of use rate and/or depth of interaction. However, the clear trend is that ASK is not being taken advantage of in our special exhibitions, and this is where our traffic resides. We are starting by getting effective messaging up more prominently in these areas. Once we get the visibility up, we’ll start testing assumptions about audience behavior. It may be that this special exhibition traffic is here to see exactly what they came for with little want of distraction; if ASK isn’t on the agenda it may be an uphill battle to convert this group of users. Working on this bit is tricky and it will likely be a few exhibition cycles before we can see trends, test, and (hopefully) better convert this traffic to ASK.

There’s a balance to be found between ensuring visibility is up so people know it’s available (something we don’t yet have) and respecting the audience’s decision about whether to use it. Another thing we are keeping in mind is that the ASK team is in the galleries and answering questions in person; this may or may not convert into app use, but having this staff accessible is important and it’s an experience we can offer because of this project. Simply put, converting traffic directly may not be an end goal if the project is working in other ways.

The last bit of determining success, institutional knowledge gained from the incoming data, is something that we can’t quantify just yet. We do know that during the soft launch period the larger conversations have been broken down into 1,241 snippets of reusable content (in the form of questions and answers), all tagged with object identification. Snippets are integrated back into the dashboard so the ASK team has previous question/answer pairings at their fingertips when looking at objects. Snippets also tell us which objects are getting asked about and what people are asking, and they will likely be used for content integration in later years of the project. The big step for us will come in September when we send snippet reports to curatorial so this content can be reviewed. We hope these reports and meetings help us continue to train the ASK team, work on quality control as a dynamic process, and learn from the incoming engagement we are seeing.

Is ASK successful?  We’re leaving you with the picture that we have right now. We’re pretty happy with the overall depth of engagement, but we believe we need to increase use. It will be a while before we can quantify the institutional knowledge bit, so measuring the overall success of ASK is going to be an ongoing dialog. One thing we do know is the success of the project has nothing to do with the download rate.
