Iterating on the ASK Mobile Experience
Thu, 19 Mar 2015

The ASK mobile app has gone through many design iterations and has continually evolved in a quest to offer an exceptional user experience. In my user testing post, I mentioned problems we discovered with our latest design for the onboarding process. Visitors were frustrated and confused by the home screen of the app and unable to jump right in.

Asking our visitors to jump through hoops before they can start. In this version, a visitor cannot get to ASK without first giving us both their name and email, bits they often glaze over or avoid.

Visitors were skimming over the name and email address fields and jumping right to our “Get Started” button, but the app wouldn’t allow them to proceed without these details. We had also noticed that some visitors hesitated to provide an email address due to privacy concerns and a lack of transparency about its intended purpose. In reality, we were collecting these details so that we could address visitors by name and have the option of building more advanced features in the future. The way most apps achieve this is by forcing users to create accounts. In the early phases of mobile design, we decided as a team to forgo often-cumbersome and opaque account creation in favor of a simpler sign-up when users first download ASK: the name and email address combination they were asked to provide.

We realized that even our simple sign-up was causing issues, and as a team we had to question our decision to collect this info at the start and brainstorm yet again how we could transition visitors more smoothly into ASK functionality.

In the new version of the ASK app, onboarding begins as a conversation with visitors right after they open the app. We delay asking for an email address until our audience engagement team is stumped and needs to reach out to the visitor by email.

The next version of the ASK app was designed to let users jump right into the functionality and provide their details at a later point. We start the conversation by asking for the visitor’s name, which introduces them to the app and also gets them using it. In a future post we will talk in detail about how we are keeping track of visitors and their conversations without forcing account creation.

We ad-hoc tested this screen by itself in our galleries with visitors and found that, for the most part, visitors would enter their name without any problems. Great! Now we could move on to the rest of the app. As Brian mentioned, the ASK app bundles in an informational experience as well. This includes typical content sections such as Exhibitions, Collections, and Visit that our visitors need to be able to access within the same app. So we set about designing how these would live in relation to ASK.

We came up with two separate versions of the app home screen. Both add a button for ASK, museum hours of operation, and menu options for the informational sections. The difference between the designs is the treatment of the ASK button. After much debate about the treatments, we decided to ad-hoc A/B test the two versions with visitors in our galleries. Our goal was to determine whether the pathway to reach ASK was clear to users, which version provided the clearer pathway, and how users reacted to the informational sections.

Versions 1 and 2 of our designs for the home screen with more than just ASK functionality. Quick question - how would you get to ASK? Which version is clearer?

In our ad-hoc testing, we followed a very specific script with testers. Our questions were:

  1. Starting from the top and working down, go through the app and tell us what you think each section does.
  2. If you wanted to know more about an object at the museum, how would you achieve this through the app?
  3. What do you think is the primary purpose of this app?

For the first time, we were putting an app in the hands of our users that had a lot more going on than our earlier versions. We wanted to make sure we were getting the answers we needed so we could easily compare versions 1 and 2.

Much to our disappointment, our test failed, with neither version 1 nor version 2 working in a satisfactory way. We were amazed to realize how many assumptions we had made about the user’s experience going into the A/B test. However, we succeeded in discovering some big issues that we had been oblivious to during design.

The number one problem was that visitors did not understand what our app did. 100% of our testing audience assumed that our app was a generic museum app to help them with their visit. In response to our question, “what is the primary purpose of this app?” we got answers like “to bring people into the museum,” “find out what is happening at the museum,” and “don’t know! I would use the website instead.”

A short, crisp, direct call-to-action explaining that ASK is used for answering questions in real time, before leading visitors into ASK functionality.

The second huge problem we discovered was that ASK was overshadowed by conventional and familiar areas such as Exhibitions and Collections. Visitors either ignored ASK entirely or, when they did navigate to it, it was not clear to them why they were there and what ASK was for. We thought our onboarding prompt was perfectly friendly and explicit; however, most readers skimmed over or missed what we were hoping to convey and ran back to the familiar sections as fast as they could.

Once again we found ourselves brainstorming how we could put ASK front and center and make its purpose explicit from the get-go. Our new home screen design features text that does double duty, acting as a call-to-action while also explaining the purpose of ASK.

Finally, in our testing we noticed that our chat screen prompt seemed automated to many visitors. They did not feel engaged by the generic question, and some would glaze over it.

Experimenting with alternative language to prompt our visitors in ASK.

As a test, we changed this prompt to “What work of art are you looking at right now?” This, first, lowered the bar for visitors to engage and respond and, second, got them walking over and looking at artwork! The change in prompt language made such a huge difference that we decided to make the prompt remotely alterable by the Audience Engagement team. The updated prompt is fetched by the mobile app when it first loads, so the team can experiment with prompt language to find options that work in a range of different scenarios.
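
As a rough sketch of how that app-side fetch might work (the endpoint URL and the "prompt" JSON key are placeholder assumptions, not our actual API), the idea is simply to ask the server for the current prompt at launch and fall back to a bundled default if anything fails:

```swift
import Foundation

// Minimal sketch: fetch the chat prompt from a remote config endpoint at launch.
// The URL and the "prompt" JSON key are placeholder assumptions, not the real ASK API.
struct PromptConfig {
    static let defaultPrompt = "What work of art are you looking at right now?"

    static func fetch(completion: @escaping (String) -> Void) {
        guard let url = URL(string: "https://example.org/api/config") else {
            return completion(defaultPrompt)
        }
        URLSession.shared.dataTask(with: url) { data, _, _ in
            // Fall back to the bundled default if the request or parsing fails.
            guard let data = data,
                  let object = try? JSONSerialization.jsonObject(with: data),
                  let json = object as? [String: Any],
                  let prompt = json["prompt"] as? String else {
                return completion(defaultPrompt)
            }
            completion(prompt)
        }.resume()
    }
}

// Usage, e.g. at launch:
// PromptConfig.fetch { prompt in /* show `prompt` as the opening chat message */ }
```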

In our attempt to design the best mobile experience for ASK, we are certain that our design will only continue to evolve. Stay tuned.

User Testing ASK with our Members
Thu, 20 Nov 2014

Earlier this week I covered how we have been testing the ASK app internally. Today I am going to talk about how we user tested with members who were interested in participating and what we learned.

Our web team on the ASK dashboard while one of our test participants, testing the app, observes “Trade Sign (Boy Riding Bicycle).”

Our first round of visitor testing took place in late October with several small groups of Members. We invited testers into our American galleries on the 5th floor, where we were set up. Sara Devine was typing at the dashboard for our chief curator, Kevin Stayton, who answered visitor questions. Each visitor was given an iPod touch pre-loaded with our app. We gave them brief verbal instructions, which can be summed up as “use the app to ask any questions you have about our artwork as you wander.” We were careful not to hand-hold or walk them through the interface. The visitors were then free to roam the galleries with our iPods. One reason we wanted to test in small groups was so that each visitor could be shadowed by a staffer as they used the app; we encouraged them to verbalize what they were thinking, feeling, and doing at any point while we logged observations. At the end of each session, testers met with the whole team, and we posed questions about the entire experience.

Interviewing our testers about the ASK experience after the test.

The unanimous feedback from visitors who used our app was that the experience is very personal and friendly. Visitors thought that the quality of responses was very high, such that it inspired them to ask better questions. Visitors expressed that they found themselves looking at the art more closely so they could ask thoughtful questions. We had heard all of these things from our earliest pilot with iMessage, so we were hoping to see them again with the app we built specifically for the program. Boom. Nailed it.

We noticed our visitors wandering in a “U” pattern: standing at a work, they would ask a question, then wander close by until an answer was given. Often, once they received the answer, they would circle back to the artwork to take another look. We had seen this behavior in the iMessage pilot and were encouraged to see it again. Unlike the iMessage pilot, though, we found our visitors constantly checking their devices to see if a new answer had come. Our test groups resoundingly asked for a vibration alert to notify them of an answer so they could put their device away without the need to keep checking. iMessage has this built in, so that’s something that worked from the earlier pilot but is needed in our own implementation to help encourage the behavior we want: phones away and looking at art as much as possible.
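
As a rough sketch of how that alert could work on the app side (handleIncomingAnswer is a hypothetical hook, not part of our actual codebase), iOS lets you trigger the system vibration and, if needed, a local notification when an answer arrives:

```swift
import AudioToolbox
import UIKit

// Sketch only: buzz the device when a new answer arrives so visitors can keep
// their phones in their pockets. `handleIncomingAnswer` is a hypothetical hook
// into the app's messaging layer, shown purely to illustrate the idea.
func handleIncomingAnswer(_ answerText: String) {
    // Plays the system vibration on devices that have a vibration motor.
    AudioServicesPlaySystemSound(kSystemSoundID_Vibrate)

    // If the app is backgrounded, a local notification can surface the answer too
    // (notification permission registration omitted for brevity).
    let note = UILocalNotification()
    note.alertBody = "You have a new answer waiting in ASK."
    UIApplication.shared.presentLocalNotificationNow(note)
}
```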

Testers reported (and were observed) looking more closely at works of art, often putting down screens in the process. The tester here is a good example – he’s looking at our Francis Guy painting while the mobile device is in his right hand at his side.

We were concerned about the wait time for visitors to get responses from our audience engagement team, in this case Kevin Stayton. However, our visitors expressed that they didn’t mind the wait because they found the answers so high quality that they were worth waiting for, and they realized they were being connected to a real person, not an automated system. This is something we will need to revisit as we test with a larger audience, because how we scale will be a challenge we can tackle a number of ways. One idea we have right now is to warn visitors of slow response times during high-traffic periods.

Kevin Stayton, chief curator, answering visitor questions, with Sara Devine typing them into the ASK dashboard.

Does it matter to the visitor who exactly (staff or curator) is responding to the questions if the answers are good? When we asked testers directly, the answers we received were that it didn’t matter as long as the answers were perceived as having value, often defined as “good,” “worthwhile,” and/or “interesting.” We attempted to A/B test this by introducing our chief curator, Kevin Stayton, as the question answerer to part of the group, while keeping this a secret from the others. This failed when our members recognized Kevin from previous museum visits. We will definitely be looking to test this with larger groups in later phases. When asked, members felt it was not important to be connected to a single voice throughout their visit, but this is also something we will need to test as we get a full audience engagement team in place and they trade off in answering questions coming into the queue. While it would be ideal to have the same person on the audience engagement team respond to a visitor, it is not feasible due to our limitations with staffing and/or areas of expertise among the team manning the dashboard.

One of the biggest lessons from our testing was discovering that our onboarding process is completely broken. We found that visitors skip reading the text on our first screen, which has a sign-up form. They repeatedly miss the fields that ask for their name and email, even when there are error messages. They keep clicking our “get started” button in the hope of advancing, which won’t let them proceed until they fill out their details. They certainly are not reading why we are asking for this information, which is buried in the wall of text they didn’t read. Additionally, our error messaging was not noticeable enough, and they didn’t recognize when they were hitting problems.

Our onboarding process needs some work, and changes to it are the next thing we'll be testing in early December.

Knowing our onboarding needs a major revision, we are focusing on that for our next round of testing. We are testing new ideas using internal staffers outside of the tech department and will put the best of those in front of members in early December.

Throughout, our initial testing has been with small numbers of people, and we are being careful to write everything down and look for trends we hear across all participants. Tons of great features have been suggested, but until we get many more people using the app in later stages of testing, it’s important to simply gather information and identify what is a bug fix and where included features could be improved. In one instance, we saw critical mass when almost every tester expressed a desire to take their conversation home. From the get-go we designed the ASK app to only work on-site, so visitors are asking questions as they are looking at the art. In doing so, we were completely locking the visitor out from seeing their conversation history. We now know that we need to open it up, and even though we are seeing that feedback across the board, we may hold off on development and do more focus group testing before any specific implementation.

Onward! Stay tuned for revisions and our next round of testing.

Preparing for User Testing
Tue, 18 Nov 2014

I was very excited by the prospect of user testing in the field when I started working on the Bloomberg Connects project. As a web developer with a passion for user experience design, I really enjoy watching people as they use technology. How someone uses an interface can accurately communicate how well something is working and where the hiccups are.

I'm playing the role of a visitor testing the ASK mobile app.

As part of our process of developing the ASK app, we knew user testing in the museum with visitors was absolutely essential. It allows us as a team to study what our visitors are doing when they use the app and how it might differ from our expectations. We can then use this knowledge, iterate on our product, and deliver an effective experience for the next release.

Prior to user testing with actual visitors, we focused on testing with team members internally. The goal was to smooth out any kinks in our end-to-end experience and fix major bugs before visitors got to use the ASK app.

A big lesson that we learned in our internal testing is the importance of testing early and often. Our ASK app is composed of many pieces that speak to each other: the ASK mobile app, the ASK dashboard (a web interface for the audience engagement team to receive and answer questions), and an API that connects them both. The most effective way for us to test these pieces in sync is by role-playing the experience of visitor and question-answerer. This internal role-playing led us to quickly roll out fixes.
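
To make the moving parts a little more concrete, here is an illustrative sketch of the kind of record the mobile app, API, and dashboard might pass around; the field names are assumptions, not the actual ASK schema:

```swift
import Foundation

// Illustrative only: the sort of message record the mobile app, API, and
// dashboard might share. Field names are assumptions, not the real ASK schema.
struct AskMessage: Codable {
    let conversationID: String    // groups a visitor's questions and answers
    let sender: String            // "visitor" or "team"
    let text: String
    let imageURL: URL?            // optional snapshot of the object in question
    let beaconIdentifier: String? // rough gallery location from iBeacon ranging
    let sentAt: Date
}
```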

James, our back-end web developer, role-playing as a visitor and testing the ASK mobile app.

The developers working on the ASK app switched roles frequently during testing so all of us would be familiar with all the pieces. Any time we rolled out a new change, we found ourselves forming small groups to test. We loaded our app onto iPod touches, a few people went to our American galleries on the 5th floor and asked questions, and another team member stayed back manning the dashboard and answering them. As more of us used the ASK app as we would in the real world, we discovered problems very quickly.

The design mock-ups that we referenced to develop the dashboard had left some important blanks unfilled, like transitions and button behaviors. Our web designer was able to quickly jump into the code and make edits that made sense in the browser in order to make the dashboard friendlier: everything was lined up properly and clicking around made sense. These changes were absolutely essential for someone using the dashboard so they could quickly see and act on incoming questions.

We discovered inaccuracies with iBeacon in indicating visitor location. In order to figure out what was happening, we added debug and log messages everywhere to see if the problem was with iBeacon, the mobile app, the API, or the dashboard. Jennie, our iOS dev, has written a post on how we addressed a lot of our iBeacon woes.
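
As an example of the kind of instrumentation this involved, the sketch below ranges beacons and logs every pass so flapping signals or wrong-gallery reads show up in the console; the UUID and identifier are placeholders, and the actual fixes are covered in Jennie's post:

```swift
import CoreLocation

// Sketch of the sort of logging used while chasing beacon inaccuracies.
// The UUID and identifier are placeholders, not the museum's actual values.
class BeaconDebugger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "00000000-0000-0000-0000-000000000000")!,
        identifier: "gallery-debug")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        // Log every ranging pass so intermittent or misattributed reads are visible.
        for beacon in beacons {
            print("beacon major=\(beacon.major) minor=\(beacon.minor) " +
                  "proximity=\(beacon.proximity.rawValue) accuracy=\(beacon.accuracy)m")
        }
    }
}
```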

A weaker wifi signal in some galleries would cause messages or images to fail to deliver to the dashboard. This was one of many bugs that were missed when we tested fixes at our desks.
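
A common way to soften this kind of failure is to retry the send a few times with increasing delays; the sketch below is a generic illustration of that pattern (the `send` closure is a hypothetical stand-in for the call to our API), not the fix we actually shipped:

```swift
import Foundation

// Generic sketch: retry a failed upload a few times with increasing delays so a
// momentary wifi drop in a gallery doesn't lose the message. `send` is a
// hypothetical closure that calls the API and reports success or failure.
func sendWithRetry(attempt: Int = 1,
                   maxAttempts: Int = 3,
                   send: @escaping (@escaping (Bool) -> Void) -> Void) {
    send { succeeded in
        guard !succeeded, attempt < maxAttempts else { return }
        // Back off exponentially (1s, 2s, 4s, ...) before trying again.
        let delay = pow(2.0, Double(attempt - 1))
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
            sendWithRetry(attempt: attempt + 1, maxAttempts: maxAttempts, send: send)
        }
    }
}
```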

Internal testing of the ASK app resulted in the team using the chat interface to log bugs or notes for discussion.

Users in the dashboard were not able to send messages to visitors if the message was not in response to an unanswered question. This was a limitation that we had built in without realizing it, and a quick test made it very clear that dashboard users should be able to send messages at any time after a conversation has been initiated with a visitor.

We started noticing latency issues in the dashboard and the API as more people were using the dashboard and mobile app at the same time. This resulted in everything being processed a little more slowly and in inaccuracies at many points. We knew we had to introduce optimizations so things were faster. In some cases of latency, we were able to doctor our interfaces so things felt faster than they were in reality.
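
One of the simpler interface tricks for a chat screen is optimistic rendering: show an outgoing message the moment it is sent and reconcile with the server afterward. The sketch below is a generic illustration of that pattern (the types and the `post` closure are stand-ins), not our dashboard code:

```swift
import Foundation

// Generic sketch of optimistic rendering: append an outgoing message to the
// visible conversation immediately, then update its state once the API responds.
// The types and the `post` closure are illustrative stand-ins, not the ASK code.
final class ConversationViewModel {
    enum DeliveryState { case sending, sent, failed }

    struct PendingMessage {
        let localID = UUID()
        let text: String
        var state: DeliveryState = .sending
    }

    private(set) var messages: [PendingMessage] = []

    func send(_ text: String, post: (String, @escaping (Bool) -> Void) -> Void) {
        // 1. Show the message right away so the interface feels instant.
        let message = PendingMessage(text: text)
        messages.append(message)
        let localID = message.localID

        // 2. Reconcile when the server answers; mark failures so they can be retried.
        post(text) { [weak self] succeeded in
            guard let self = self,
                  let index = self.messages.firstIndex(where: { $0.localID == localID })
            else { return }
            self.messages[index].state = succeeded ? .sent : .failed
        }
    }
}
```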

We found ourselves using the ASK app as a way to communicate bugs and make notes for testing to discuss later as a team. Most of our problems would have gone unnoticed without an end-to-end test.

After several rounds of internal testing, we were ready for a test with real visitors. The first question was: who do we invite? We knew from past projects that our audience is diverse in their use of tech; some are tech savvy, some are not, many own smartphones, but others do not or choose not to use them at the Museum. In previous projects, we had tested with a more exclusively tech-savvy audience, and this created problems with end products because we were not testing directly with the entire range of our core audience. When testing ASK, we wanted to think about these challenges and test directly with visitors.

Monthly newsletter inviting members to participate in user testing the ASK app.

Because ASK is designed with our most local visitors in mind, we decided to test with museum members first: those who have a deep relationship with the museum, have been here many times, want to make the visitor experience the best it can be, and are willing to help us do so. Our membership department invited participants in their monthly newsletter, and we scheduled our testing groups during the day and evening to attract different audience segments.

We were surprised at how many members answered this call-to-action, and that has helped us schedule people through various rounds of testing. We are testing with small groups in our American Identities galleries on the 5th floor. Up next, I’ll talk about what we learned.
