Leveraging Machine Learning for Better Efficiency (Tue, 19 Apr 2016)

Extremely smart people dedicated to the field of machine learning have made tools that are not only better, but far more accessible than they have been in the past. We don't have anyone at the Brooklyn Museum who's an expert in machine learning, but because of these improvements we don't have to be. If you've been following our series of blog posts, you'll remember we talked previously about the accuracy issues we've had with iBeacons and the problems this poses for us: primarily, decreased accuracy in iBeacon results means delayed response times to visitor questions. Since providing a seamless, personal, educational, and ultimately rewarding experience is the foundation ASK Brooklyn Museum is built upon, anything we can do to improve the efficiency of the system is a project worth taking on.

One of the tools we've leveraged toward this goal is Elasticsearch, a full-text search server. While not strictly a machine learning tool, it uses many NLP algorithms (which are machine learning based) in its backend to do more intelligent text searching and match similar phrases. Instead of doing a 'dumb' search that has to match phrases exactly, we can do fuzzier searching that locates similar phrases. For example, if a block of text contains the phrase 'Where are the noses?' and we search for 'What happened to the noses?', that block of text would be returned near the top of the results.
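To make that concrete, here is a minimal sketch of the kind of query involved, using Elasticsearch's HTTP API from Python. The index name, field name, and local URL are assumptions for illustration; our dashboard's actual indexing code differs.

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: a local Elasticsearch server

# Index a snippet containing the original phrase (index and field names are made up).
requests.put(f"{ES_URL}/snippets/_doc/1", json={"text": "Where are the noses?"})
requests.post(f"{ES_URL}/snippets/_refresh")  # make the new document searchable

# A "match" query analyzes the search text into terms, so a differently
# worded question still scores against the stored phrase.
resp = requests.post(f"{ES_URL}/snippets/_search", json={
    "query": {"match": {"text": "What happened to the noses?"}}
})

for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
# The stored question comes back near the top because the shared term
# "noses" matches, even though the phrasing differs.
```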

This particular use case is exactly what we were looking for when we needed to solve a problem with our snippets. We've talked about snippets in a previous post, but to recap: snippets are pieces of conversations between visitors and our audience engagement team about works of art in the museum. Because they not only highlight great conversations but also act as a sort of knowledge base, snippets have become an integral part of ASK. That means we'd like to create as many snippets as we can to grow this knowledge base and spread the wisdom gleaned from them. Over the course of this process, however, it's easy to accidentally create snippets for the exact same question, which clutters the system with duplicates. This is problematic not just for search results, but also because all snippets go through a curatorial approval process, and reviewing the same snippets again and again creates unnecessary extra work for everyone involved.

The dashboard automatically queries our Elasticsearch server to look for similar or duplicate snippets during the snippet creation process.

In addition to solving the problem of duplicates, cleaning up the system means we can much more accurately track the most commonly asked questions. All of this happens as part of a seamless process during snippet creation. When an audience engagement team member begins creating a snippet, the dashboard automatically queries our Elasticsearch server to look for similar or duplicate snippets. The search results show up next to the snippet editor, which makes it easy to see at a glance whether duplicates already exist. If a duplicate snippet does exist, the team member simply clicks a "+1" counter next to it. This increments a number attached to the snippet, which we can then use for the various metrics we track in the system.
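As a rough illustration of how that workflow fits together, here is a sketch; the function names, the score threshold, and the in-memory counter below are hypothetical stand-ins, not our production dashboard code (which stores the tally alongside the snippet itself).

```python
from collections import Counter
import requests

ES_URL = "http://localhost:9200"   # assumption: local Elasticsearch
SCORE_THRESHOLD = 5.0              # hypothetical relevance cut-off for "looks like a duplicate"

duplicate_counts = Counter()       # stand-in for wherever the dashboard keeps the "+1" tally

def find_possible_duplicates(question_text, size=5):
    """Ask Elasticsearch for existing snippets similar to the question being snipped."""
    resp = requests.post(f"{ES_URL}/snippets/_search", json={
        "query": {"match": {"text": question_text}},
        "size": size,
    })
    hits = resp.json()["hits"]["hits"]
    # Only surface hits that score high enough to plausibly be duplicates.
    return [(h["_id"], h["_score"], h["_source"]["text"])
            for h in hits if h["_score"] >= SCORE_THRESHOLD]

def mark_duplicate(snippet_id):
    """What the '+1' button does conceptually: bump the count used for metrics."""
    duplicate_counts[snippet_id] += 1
```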


Even based on our short time using machine learning tools, it's clear how powerful they already are in the here and now. They're improving our response times, our metrics, and our knowledge base, and that may just be the tip of the iceberg. The AI revolution is coming, and as these tools get more sophisticated yet simpler to use, the question isn't if you're going to use them, but when and how.

ASK Snippets Integrated Into BKM Website (Thu, 14 Apr 2016)

A number of things happen after a visitor has a chat with our ASK team. At the end of each day, the ASK team takes the long-form conversations happening in the app and breaks that content down into what we call "snippets." A snippet contains a question asked by a visitor, any photos taken as part of that conversation, and the answer given by the ASK team. The resulting snippet is then tagged with the object's accession number.

So, what do we do with all these snippets of conversation? Once a snippet is tagged with an object's accession number, we use it in a number of ways internally. For starters, the snippet becomes available to the team in the dashboard, ensuring they have previous conversations at their fingertips when someone else asks about the same object. Additionally, snippets are exported into Google Docs on a quarterly basis and sent to curatorial for review. Curators review all the snippets for their collections and exhibitions, meet with the ASK team to discuss the content, and then certain snippets—those that contain the most accurate answers and are most on point with curatorial vision—are marked "curator vetted."

These post-processing steps ensure that we can quantify how many questions have been asked and, more importantly, what questions are being asked about our objects and exhibitions. The process also ensures ongoing communication between the ASK team and our curatorial staff, something we've found critical in this project: it gives curatorial the chance to learn from the conversations taking place in our galleries every day.

ASK snippets can now be seen on object pages like in this example of our Spacelander Bicycle.

But that's not all. Once a snippet is marked "curator vetted," it becomes available for various uses. Internally, we've developed a portal where staff can search for snippets related to objects and exhibitions. Externally, these snippets have now been integrated into the website, and you'll find them in two areas. In our collection online, you'll see snippets related to specific objects—check out Spacelander Bicycle, Peaceable Kingdom, or Avarice for examples. Additionally, each of our special exhibitions now has a dedicated page of snippets related to that exhibition; you can see examples on the Steve Powers, This Place, and Agitprop pages.

The beauty of this is that everyone can now start to see the website immersed in the activity that's been happening through the app, and there's a certain life that this user-generated content brings to our own material. There are, of course, myriad ways conversational content can help shape our online presence. This is just a start to something we hope to see grow more robust over time.

Image Matching Now Supporting iBeacon Results (Tue, 12 Apr 2016)

Every second counts when the ASK team is responding to visitor questions. With that in mind, a few weeks ago we looked into how we could use image matching to match visitor photos to objects and make it even easier for the ASK team to find the object a visitor is asking about. This idea came to us after working with the students at Cornell Tech. The Cornell team was using image matching slightly differently in their own project, but it sparked an idea on our own team and we started to experiment to see if we could use it to improve the efficiency of the ASK team.

There is a lot of research into computer vision right now, but most of it focuses on image recognition: identifying a thing in an image, like a dog, car, or tree. For this project we need image matching, which compares the entirety of an image to see if it is the same as, or similar to, another image. The most common use of image matching is a reverse image search. Google has a service like this: give it an image, and it will search the web to see if the same photo appears anywhere else, even if it has been slightly cropped or edited. For example, I did a reverse image search to see if one of my photos is used anywhere else and found a tumblr blog and a few sites in Japan that are using it.

Original image on the left. Modified version found through a reverse image search on the right.

What we're doing is quite different from just trying to find the same photo. We want to see if part of a photo matches the official image of an object in the museum collection. After some searching, we found an open source tool called Pastec. (Special thanks to John Resig for bringing Pastec to our attention.) From its GitHub repo homepage: "Pastec does not store the pixels of the images in its database. It stores a signature of each image thanks to the technique of visual words." Because it uses parts of the image to make "visual words," and we are trying to match an object that would be in a photo rather than the photo itself, it seemed like a promising option. (The visual words concept is very interesting; if you'd like to read more about it, check out this Wikipedia page.)

Visitor image on the left. Brooklyn Museum original image on the right.

To test out Pastec, we compared visitor images to the official images of a few objects in the collection. It matched flat 2D objects, like paintings and photographs, about 92% of the time, but matched 3D objects, like sculptures, only about 2% of the time. This makes sense, since recognizing 3D objects is a separate and even harder problem that Pastec isn't designed to solve. Again from the GitHub homepage: "It can recognize flat objects such as covers, packaged goods or artworks. It has, however, not been designed to recognize faces, 3D objects, barcodes, or QR codes." Still, recognizing a huge portion of the collection 92% of the time would be very useful, so we decided to move forward.

The next step was to run the main image for every object in the collection through Pastec so it could index them and find the visual words. The indexing took about 36 hours on an m1.medium AWS EC2 instance, which has two ECUs (EC2 Compute Units) and 3.75 GB of RAM; one ECU is roughly equivalent to a 1.0-1.2 GHz Opteron processor. After indexing all objects in the collection, we tested searching for an image match and were getting results back in under one second. We also tested search speed with only 2D objects in the index, since they were so much more likely to match than 3D objects, but the difference in search speed was so minimal that we decided to index them all. With that information we were ready to implement image search results in the dashboard.
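For anyone curious what the indexing and matching calls look like, here is a rough sketch against Pastec's HTTP API from Python. The host, port, file paths, and response handling are assumptions based on our reading of Pastec's documentation (it listens on port 4212 by default); this is an illustration, not our production indexing script.

```python
import requests

PASTEC_URL = "http://localhost:4212"   # assumption: a local Pastec server on its default port

def index_object_image(object_id, image_path):
    """Add one collection image to the Pastec index, keyed by a numeric id."""
    with open(image_path, "rb") as f:
        resp = requests.put(f"{PASTEC_URL}/index/images/{object_id}", data=f.read())
    return resp.json()

def match_visitor_photo(photo_path):
    """Search the index for images whose visual words match the visitor photo."""
    with open(photo_path, "rb") as f:
        resp = requests.post(f"{PASTEC_URL}/index/searcher", data=f.read())
    # Pastec returns the ids of matching indexed images, which we can map back
    # to collection objects (e.g., via accession number).
    return resp.json().get("image_ids", [])

# Hypothetical usage: loop over collection images once, then match on demand.
# index_object_image(2012, "images/spacelander.jpg")
# print(match_visitor_photo("visitor_photos/IMG_1234.jpg"))
```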

When chatting with visitors, the ASK app uses iBeacon data to show the team a thumbnail grid of the objects near a visitor's current location. This location data is the main research tool for the ASK team because it helps them see the object's data, past conversations about the object, and objects nearby the visitor. Image matching is a supplement to it, and the design should reflect that. So in the dashboard, if there is a match, that object is highlighted above the main grid of images; if there isn't a match, nothing extra shows.

If we can match an image, we display it above the other images, so the ASK team has quick and easy access without having to hunt for it.

In the end, the experiment of "Can we use image matching to supplement iBeacon data?" was well worth the effort: after about a week's worth of work, we're able to shave a few seconds off the time it takes to get back to visitor questions, making that experience more seamless. There is a secondary benefit as well: now that the image index exists, it has the potential to help solve more problems down the road and could be a great research tool.

Revising our ASK Engagement Manual (Thu, 07 Apr 2016)

It's been a year since the original ASK team arrived at the Museum, and we've been reflecting on all the ways ASK has evolved over this time. Last December, Sara posted about our collective efforts to document the various phases and facets of the ASK project, including an ASK Team Engagement Manual originally compiled by Monica Marino. Sara wrote that we had been codifying our methods "through experimentation, conversation, a lot of trial-and-error with test groups, and continued examination," and all of that remains true.

The team brainstormed to compile helpful reminders and new info for the manual.

Download our ASK team training manual to see how we've codified conversations via texting.

As we neared our ASK Android launch, it felt like the right time for a little “spring cleaning.” In March the ASK team did a brainstorming exercise with our ever-popular post-it notes, asking themselves: What methods for engagement and research have we found most useful? What new technical features should we remember to use? What advice would we give to a new team member?

Using this internal feedback, we recently expanded our engagement and training manual, and we'd like to share it here. It now reflects new features that our Tech team developed for the dashboard; a few tweaks to our thinking about pedagogy in this context; updated protocol for archiving visitor chats as "snippets"; our favorite research resources; and words of wisdom from the ASK team, which recently reached its capacity of six members again.

Now we’re all preparing for some major gallery reinstallations around the building—a topic for a future post!

Selectively Flying Blind After Android User Testing (Tue, 05 Apr 2016)

ASK Brooklyn Museum for Android is now available on Google Play. We had one early quandary, but this was a fairly straightforward development process. That is, until we got to user testing.

User testing sessions are a critical part of the process.

Android development is tricky. There are a lot of devices, all running different system versions in various states of update, with hardware manufactured by different parties and distributed independently or by various carriers. By comparison, iOS is a fairly controlled environment; we knew we could develop the iOS version of the app in house, but it was clear to us that an external team would need to tackle Android, so we contracted with HappyFunCorp.

At the beginning of our Android development process, we looked at our Google Analytics to figure out which devices and system versions accounted for the majority of our visitors, and this became our supported device list. Simply put, there are too many devices running too many systems to be able to support all of them, so you have to pick your battles. We settled on supporting devices running at least Android 4.3 on Samsung Galaxy S4 (and higher) and Nexus 5 (and higher) hardware.

As with our iOS release, we did a number of invited user testing sessions onsite at the Museum. Many of these sessions were attended by just a few users giving us their time. Each session helped us surface bugs, but it was difficult to get a critical mass. One thing we started to see, however, is that at each session users showed up with hardware that was not on our supported list and, inevitably, we saw a lot of bugs on those devices. It was the very well attended session with Bloomberg employees that helped us identify a trend, come to terms with it, and make a critical decision that will affect all Android users (for the better).

Bloomberg employees helped us test both our iOS and Android apps prior to launch.

Most of the bugs we found on unsupported devices came down to problems with beacon ranging. We could reliably require Bluetooth on supported devices, but on others we'd see two problems. First, if a device didn't have Bluetooth support, the user couldn't use the app at all. This requirement made sense on iOS because of the near ubiquity of Bluetooth on current hardware, but it was harder to impose across the plethora of Android hardware situations. Second, on unsupported devices beacon ranging was hit or miss, often causing problems like device sluggishness or outright crashes.

It was during the Bloomberg testing session, when we could see a number of users all having the same problems, that the issue became really clear.

We had three options. Option one was to not allow the download on unsupported devices, but this would mean some users could find the app in Google Play while others wouldn't see it at all. This presented a nightmare for messaging: "We have an Android app….sort of…" Option two was to allow the download, but many users would experience bugs and it would be difficult to communicate why. Option three was to turn off Bluetooth/beacon ranging for all unsupported devices, but this would mean the ASK team would not see those users' locations.

When an unsupported device is in use, a "no bluetooth" icon appears on the ASK team dashboard alerting them to the situation.

In the end, we went with option three and decided to turn off beacon ranging for all unsupported devices. This means ASK will work on most Android devices, but on devices where we've disabled beacon ranging, the ASK team will be flying blind with "no location found." They can still chat with those users, but the object information won't be as readily at their fingertips; we hope these users represent the very edge case.

Chatting About… Chats (Thu, 17 Mar 2016)

As the ASK Team gears up for the app's Android launch in April and expands to two full-time members and four part-time members, it seems like an appropriate time for us to refresh our thinking about visitor engagement through our chats. Engagement is a topic that's always on our minds, but a more focused reflection feels particularly appropriate right now.

Since every ASK visitor chat is archived within the system, both as an entire conversation and as "snippets" tagged to individual works of art, we can easily look back and review past interactions. We're building a monthly discussion about chat strategies and pedagogical approaches into our team's meeting schedule, and we talk about engagement on a day-to-day basis as well. Some of the conversation is pretty straightforward: Which collection areas seem to be drawing the most traffic lately? Are we getting many chats in a new special exhibition? Has any particular work of art recently challenged us in terms of content?

Two members of the ASK Team, Elizabeth and Zinia, review visitor chats.

We're always studying the collection as a team, and we try to anticipate visitor interest in specific shows or objects, but we're also responsive to ongoing chat results. Sometimes these findings motivate the team to expand an existing wiki page or to create a new wiki for an object that doesn't have one yet. And if we get complex questions about a special exhibition, we can follow up with the exhibition's curator and then incorporate his or her replies into our reference materials.

The first messages that visitors see are something that we continue to adapt.

Some engagement issues require more reflection as a team. For example, we've been honing our use of opening messages since the app launched last year. We originally spent more time greeting the visitor and thanking him or her for trying the app. Now, however, the visitor is welcomed by a photo of the team and two intro messages that ease him or her into the app, plus two auto-fire replies in response to his or her first sent message, so we cut to the chase by offering a concise yet personable response. Making the chat as specific and, well, as human as possible, as quickly as possible, also helps us overcome the lingering challenge of some users assuming we're a "bot" running on an algorithm.

As Shelley recently mentioned, another issue that we all deal with is user anonymity. On our end, the ASK team wants to provide a personalized experience for each visitor. Sometimes a visitor will volunteer information about his or her age, occupation, or knowledge of art history, but usually we glean what we can from the person’s texting style and choice of words. However, if we’re trying to get an even closer read, should we ask the visitor directly about himself or herself?

We experimented with this approach by asking early questions like "Is this your first visit to the Brooklyn Museum?" or "Are you here with friends/family today?" When we often received no reply, we realized that many visitors preferred to maintain their privacy. However, we also found that if a chat was progressing in a friendly manner and we were starting to have a hunch about the visitor, we could sometimes throw out a casual (and complimentary) remark like, "You know a lot about printmaking techniques! Are you an artist?" or "That's a really great historical point; you could be a teacher!" Comments like these were well received by visitors, whether we had guessed correctly or not, and sometimes they then went on to offer more information about themselves after all.

A student asks for help with homework.

We also keep track of specific chats that deserve review as a group. One case emerged last August, when we suddenly noticed that we were getting requests from summer-session students who wanted help with their final assignments. On one hand, we want to act as a helpful source of reliable information. On the other hand, if all we did was send factual answers to a student's questions, would that really be the best way for the student to learn?

We pondered our approach and decided that we would push the students to look closely and think critically by guiding them with questions rather than simple answers. And if a student sent us a photo of his or her homework assignment instead of an actual question (something that happened more than once) or confessed that he or she was actually sitting on a bench outside the Museum (our geo-fence includes the plaza!), we humorously but firmly encouraged that student to come inside, follow our directions to the art, and get up close and personal with it.

In this instance a visitor has sent us a photo with no question.

One type of chat that originally frustrated us wasn’t really a chat at all. At least once a day, a visitor would send us a sequence of photographs without any text attached. At first we tried to draw these visitors out with our own questions (“Did that work catch your eye for any particular reason?” “What do you think of that artist’s use of color?”). However, when we didn’t receive replies, and the photos just kept coming, we resigned ourselves to sending back interesting factual information about the works.

For a while we were bothered by this kind of exchange, because we felt we weren’t meeting our goals of engagement. Then we realized that we had to shift our way of thinking. If the visitor was sufficiently involved in the app to send us photos of three or four (or often more) works of art that she or he had viewed, then the exchange actually was a rewarding experience for that person.

Any form of reflective practice is a cyclical and ongoing process, and the ASK team will continue to refine its techniques for engagement as we enter a new phase of the project this spring. We’re continually making new connections across the collections, keeping up with the content of changing installations and new exhibitions, and learning more about our visitors’ expectations. We hope they enjoy learning along with us.

Lessons Learned Staffing ASK (Tue, 08 Mar 2016)

It's hard to believe that it's been a full year since we began the initial hiring process for our ASK team. We've accomplished so much in the past year: learning the collection, creating an internal wiki, and establishing best practices for engagement. Like any good agile project, there are some elements we continue to tweak as we go, and staffing is one of them.

When we first hired for ASK, we made a best guess as to how many people we would need to staff the dashboard and settled on one full-time lead and six part-time team members. This solution gave us flexibility in scheduling, so that at least a pair of team members were on the dashboard during all open hours, with more for busier times like weekends. Early testing sessions indicated that one team member could handle up to about seven chats at once, depending on how in-depth they were, and that number helped provide a baseline for staffing based on app traffic.

Our current ASK team (from L to R): Andy Hawkes, Elizabeth Treptow, Zinia Rahman, Stephanie Cunningham, Megan Mastrobattista. Stephanie and Megan have been on the team since the beginning. Andy, Elizabeth, and Zinia joined the team last fall. Andy is our first full-timer; we're in the process of hiring the second to complete our awesome team of six.

ASK team members are responsible for gaining broad knowledge of the entire collection and deep knowledge of a selected collection area, learning best practices for engagement via the app, and technical training on the app's backend. A great deal of training is required to ensure that an ASK team member is ready to answer questions via the app. Over the course of the year, we've already seen some turnover in the positions, which is of real concern. We expected some regular turnover (after all, the positions are only part-time), but it happened more quickly than anticipated, and as those team members left, so did some institutional knowledge about ASK's development process. We began to worry not only about staffing the dashboard, but about continuity.

We considered a few solutions to the attrition problem. One was a year-long graduate internship program, which would address the natural turnover head-on by building it into the job. Ideally this would include an intern per collection area for a total of 10 graduate students. While a program like this would provide a great opportunity for art history graduate students looking to work in the museum field, establishing, building, and managing such a program would be a great deal of work and so we decided against this approach for now. A second solution was to transition from six part-time team members to three full-time team members (plus the team lead). This was appealing for a few reasons: we could develop a more regular schedule for staffing the dashboard, more easily build in time for research since full-timers would be here at least one day a week that we’re closed to the public, and it would ameliorate the continuity problem. We considered this option for a long time, but eventually decided against it because there is a certain strength in numbers. We have six individuals with unique experiences and areas of study, and each one brings something important to the table. This variety of backgrounds and expertise leads to deeper self-reflection, better conversations as a team, and most importantly, better engagement with our visitors. Variety makes our team strong. We didn’t want to give that up, even in the name of continuity.

In the end, we came up with a compromise: two full-time team members and four part-timers (plus the team lead). We think this will provide a baseline of continuity moving forward, while still allowing for the richness of multiple viewpoints and voices. We'll keep our scheduling flexibility, and the full-time team members will have time to take deeper dives into content and best practices, working with the rest of the team to develop these ideas.

All of that being said, we’re about to go into a very public launch with Marketing (with a capital M!) and more visitors able to engage with us via ASK once Android is on the floor. Happily this new team structure also provides flexibility to staff up by hiring additional part-timers, should app traffic demand it. In the meantime, we’re getting ready for launch and looking forward to many more chats with our visitors.

How Important is Anonymity When Asking a Question? (Wed, 02 Mar 2016)

As reported earlier, the Android version of our ASK app is due to launch in April. For the most part, the app will look and feel the same. There will be adjustments to the way menus work to make them feel more appropriate for this platform, but nothing major. The biggest difference we've found is a potential challenge in the way we identify and retain information on unique users. This was such an interesting issue for ASK that it warrants its own post, because what's at stake is the core engagement of the product.

As I start to outline what's going on here, keep in mind there seems to be a general fear in the world about the perception of asking stupid questions. In the early days of user testing, we heard this time and time again, and it was clear from the outset that if we were going to raise the bar on the interaction, we were going to have to give people a safe space to engage. From the get-go, we made the decision that we wouldn't onboard users by asking for personal information; we don't collect a login or even a name to get started. Essentially, we know if you return, but we don't know anything about you because we don't ask for any information up front.

In iOS, we use an Apple ID to recognize a user across multiple devices (if they own an iPad and an iPhone, for example) and, as long as they use the same Apple ID, it carries them through when they upgrade their phone. All of this is pretty seamless on iOS because it happens out of view of the user and, bonus, we are not storing personally identifiable information, so we're where we want to be on privacy.

Going with the Google ID to recognize users across devices may be problematic for ASK engagement.

Android operates a little differently. We can use a Google ID, but this action happens in view of the user, and that is what creates a conundrum. On first use, a user is presented with Google IDs from which to choose. The good news is we still wouldn't be storing personal information, but the really bad news is twofold. First, it's difficult to tell users that we are not storing personal information; by selecting an ID, the natural assumption may be that we know them more deeply than we do. Second, a known user ID associated with the app may significantly change user interaction because it runs counter to what we've heard from users. Namely, people like the anonymity of the app for fear of asking what might be perceived as a stupid question; the app feels like a safe space to explore.

The issue, for ASK, is a big one. A known user ID may change that behavior, so in the interest of time we've decided to go with the device ID (seamless to users) and think about switching to a Google ID post-launch, when we have enough space to do focus group testing around the change.

Going the device ID route, however, only helps us identify the same user on that particular device; if a user upgrades or switches devices, they look like a new user to us. That means we can't effectively welcome someone back, see their conversation history, or make recommendations that build on that relationship.

We're okay with this as a stopgap measure because it's the most surefire way for us to retain the engagement that we know has been working. Post-launch, however, this will be one of the first things we have to think about refactoring, because those long-term goals of building relationships are key. As we rethink this, we'll need to do a lot of focus groups paired with A/B testing to see if engagement changes with a Google ID and, if so, how much.

Code Release: Going from iOS to Android, Solving iBeacon Issues Along the Way (Tue, 23 Feb 2016)

Our Android release is coming in April. I'm often asked about our strategy to expand into Android when 74% of our users are on iOS devices. The reasoning is pretty simple: we have a mandate at the institution to make every product as accessible as possible, and user expectation dictates ASK availability on both platforms. The market share of Android devices is only growing, and it's far better to be ahead of that curve than behind it.

When thinking about Android expansion, we had to re-evaluate how we were staffing mobile. In our first year it was invaluable to have someone on the internal team dedicated to iOS development because the process at that time was more iterative. We were developing features and testing with users as we went along—having someone on staff to make changes as we discovered them was critical. Moving beyond that stage, we had to reconsider the most efficient way of working, and we decided the best route forward was to shift from staffing internally to hiring a firm. We contracted HappyFunCorp (HFC) to develop the ASK app for Android using our iOS app as a model. HFC is also handling our ongoing iOS maintenance, allowing us to shift away from internal mobile staffing entirely.

The Android version of the app will function the same as iOS and in a future post I’ll talk about some of the changes that make ASK feel more appropriate for this platform and one of the bigger challenges we hit. Mostly, though, the transition to Android has been straightforward and, luckily for us, that meant we could concentrate on more vexing issues like how the app detects beacons and sends locations back to the ASK team. What follows is a lengthy post that details how our code works and the adjustments we’ve made. We are also taking this opportunity to release all of our code related to beacons in both iOS and Android regardless of the state it’s in—read on. 

In Android, permissions are granted in a one-step process at initial run. iOS, by contrast, stages permission prompts as a user needs them. This delay in granting access to Bluetooth may be what causes "no location found" on starting messages, because we can't range for beacons and build our array quickly enough.

So let's take a look at the problem at hand. We've been seeing "no location found" on 15% of messages sent to the team, with a high proportion of those on a user's first message in a chat. We have a hunch this is because beacon ranging starts too late. In iOS, ranging only begins when a user turns Bluetooth on, and this prompt occurs very close to when a user would send that first message; turning on Bluetooth is one of many things a user needs to enable, and all of these prompts have been carefully staggered so that users are not overwhelmed at the start. In Android, a user is asked for all permissions as a one-step process up front, which means ranging for beacons starts right away. We think this change will help enormously, but we are still testing and this is to be determined.

The other cause of "no location found" comes down to human error. We have an admin tool that keeps track of our beacons and assigns them to museum locations, and a beacon may be missing from that tool, having been entered incorrectly (or not at all). To address this, the BKM web development team enabled server-side logging; each time a beacon that is not in the beacon database is sent with a message to the dashboard, we log it in an admin tool so we can periodically use the data to chase down these problems.
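Conceptually, the server-side check is just a lookup against the known beacon list with a log line for anything unrecognized. Here is a minimal sketch; the registry, location labels, and function name are hypothetical placeholders, not our actual admin tool code.

```python
import logging

logger = logging.getLogger("ask.beacons")

# Hypothetical registry of known beacons keyed by (major, minor), as entered
# in the beacon admin tool and assigned to museum locations.
KNOWN_BEACONS = {
    (100, 1): "Gallery A",
    (100, 2): "Gallery B",
}

def resolve_beacon(major, minor):
    """Return the gallery location for a beacon, logging IDs we don't recognize."""
    location = KNOWN_BEACONS.get((major, minor))
    if location is None:
        # Unknown beacon: most likely a data entry error in the admin tool.
        # Logging it lets us periodically review the data and fix the records.
        logger.warning("Invalid beacon ID received: major=%s minor=%s", major, minor)
    return location
```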

Admin tool showing when we receive an invalid beacon ID, likely the result of a data entry error in our beacon tool.

The HFC team has also coded a debugger tool within the app which shows, in real time, all of the beacons in the application's cache and the closest beacon the application would send with a message. This helps us get visibility beyond the Estimote app because it shows what's happening in our own application. Chris Wilson at HFC explains:

We now have a Chat/Beacon Log page that shows the list of messages sent since the list was reset. It has the beacons (with message optionally visible) showing the message timestamp, and the beacon's major and minor ids. It uses live beacon data from the museum's web api to determine if the beacons associated with these messages are valid, invalid, or if no beacon info was sent. The messages in the list are then color coded based on these designations. For easy visibility, messages with valid beacons are colored green, invalid designations are colored yellow, and messages sent with no beacon data are colored red. There are also total counts for each designation visible on the log screen.

Mobile-side debugger tool developed by HFC to show the beacons being ranged and which beacon would be sent with a message if a user were to hit send.

Our coding changes have not been limited to the addition of debugging tools, and as we discuss improvements it's worth reviewing how the beacon code in our ASK app actually works. In a nutshell, as a user walks around the building, the app ranges beacons encountered by the device and builds an array with each beacon's distance. When a user composes a message and hits send, we send along the beacon in the array that is closest to the user. The following bulleted lists come directly from HFC:

Here's the way the (newer) Android code works (a simplified sketch of this caching logic follows the list)—

  • On app start, the app begins ranging beacons using the Android Beacon Library.
  • About every second the beacon ranging returns a list of beacons that have been seen.
  • It cycles through each beacon and adds them to the cache, removing old copies of beacons that have been ranged with new distances.
  • It removes beacons from the cache that have outlived the TTL (currently 2,500 ms – this is something we can try to tweak to improve accuracy).
  • It then cycles through the list to determine which beacon is closest, replacing the closest beacon variable with this beacon. TTL on this is 3 minutes.
  • The closest beacon variable is picked up and sent along to the chat server when the user hits the send button.
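To make the flow above easier to follow, here is a simplified sketch of that caching logic. It is written in Python purely for illustration; the real implementation is Java on top of the Android Beacon Library, and the names and structure below are ours, not HFC's.

```python
import time

CACHE_TTL = 2.5          # seconds a ranged beacon stays in the cache
CLOSEST_TTL = 3 * 60     # seconds the "closest beacon" fallback stays valid

beacon_cache = {}        # (major, minor) -> {"distance": float, "seen": timestamp}
closest_beacon = None    # {"id": (major, minor), "set": timestamp}

def on_beacons_ranged(ranged):
    """Called roughly once a second with [(major, minor, distance), ...]."""
    global closest_beacon
    now = time.time()

    # Add/update ranged beacons, replacing old copies with the new distance.
    for major, minor, distance in ranged:
        beacon_cache[(major, minor)] = {"distance": distance, "seen": now}

    # Drop anything that has outlived the cache TTL.
    for key in [k for k, v in beacon_cache.items() if now - v["seen"] > CACHE_TTL]:
        del beacon_cache[key]

    # Remember the closest beacon seen, as a fallback if ranging goes quiet.
    if beacon_cache:
        key = min(beacon_cache, key=lambda k: beacon_cache[k]["distance"])
        closest_beacon = {"id": key, "set": now}

def beacon_for_message():
    """Return the beacon to attach when the user hits send, or None."""
    if closest_beacon and time.time() - closest_beacon["set"] <= CLOSEST_TTL:
        return closest_beacon["id"]
    return None
```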

Here’s what we know about the way the (older) iOS code works—

  • On app start, the app starts beacon ranging. However, the Bluetooth check is only conducted when the user tries to send a message. Ranging requires Bluetooth to be on, so this may be the source of "no location found" issues.
  • When beacon ranging is run, an array of beacons, sorted by proximity, is returned every second. If the proximity is unknown, the beacon is removed from the array. Only the first beacon (the one with the closest proximity in the array) is used until the next ranging cycle. If the cache is empty, that beacon is added to the cache.
  • If the cache is not empty, then the first beacon on this list (the one with the closest proximity) is compared with the last object in the cache. (1) If the major/minor ID of the beacon is the same AND the distance is less than that of the cached object, it adds the beacon to the list. If the major/minor ID of the beacon is the same and the distance is more, then it is not added to the cache. (2) If the major/minor ID of the beacon is different from the last object, it is added to the bottom of the cache.
  • The last beacon in the cache array is grabbed along with the message when the user taps the “send” button in the chat message. If the beacon has been in the cache for more than 5 minutes, no beacon information will be sent.

So, what are the differences?

  • Beacons aren’t removed from the cache in the iOS app, so duplicate beacons with different distances are added.
  • Rather than comparing all new beacons found to all cache beacons and updating existing beacons and adding new ones as in the Android app, the iOS app compares only the last beacon found to see if it is closer than the last cache array beacon.
  • There is a TTL of 5 minutes in the cache in the iOS app, whereas the TTL on beacons in the cache in the Android app is 2.5 seconds, and the TTL of the closest beacon if no new beacons have been ranged is 3 minutes.
  • In the Android app, in addition to the short-lived cache of beacons, there is also a closest beacon variable set in case there are no ranged beacons for a period of time. This beacon is then sent with messages if it has been more than 2.5 seconds but less than 3 minutes since a beacon was last ranged. In the iOS app there is no concept of a closest beacon variable.

We are now going to begin the process of testing Android with users to see if these changes have helped and, if so, we'll start to port these lessons learned back into the iOS code after April. In the meantime, given how many people are working (and struggling) with beacon deploys, we've decided to release both sets of code in the state they are currently in, along with the mobile-side debugging tools. Having a fresh set of eyes from HFC looking at the code has helped a bunch, and we hope having many more eyes on this code will only help everyone.

Lastly, I'd be remiss if I didn't take this opportunity to talk a bit about our funders as related to this post in particular. ASK Brooklyn Museum is supported by Bloomberg Philanthropies, and one reason we are releasing this code today is the amount of encouragement and enthusiasm that has come from the Bloomberg team toward information sharing at all stages of a project's progress. This blog, our lessons, and our code are published in large part due to their support; we are honored to be as open as we are because of the standard they have set among their grantees.
