engagement – BKM TECH / Technology blog of the Brooklyn Museum

Revising our ASK Engagement Manual
Thu, 07 Apr 2016

It’s been a year since the original ASK team arrived at the Museum, and we’ve been reflecting on all the ways ASK has evolved over this time. Last December, Sara posted about our collective efforts to document the various phases and facets of the ASK project, including an ASK Team Engagement Manual originally compiled by Monica Marino. Sara wrote that we had been codifying our methods “through experimentation, conversation, a lot of trial-and-error with test groups, and continued examination,” and all of that remains true.

The team brainstormed to compile helpful reminders and new info for the manual.

Download our ASK team training manual to see how we’ve codified conversations via texting.

As we neared our ASK Android launch, it felt like the right time for a little “spring cleaning.” In March the ASK team did a brainstorming exercise with our ever-popular post-it notes, asking themselves: What methods for engagement and research have we found most useful? What new technical features should we remember to use? What advice would we give to a new team member?

Using this internal feedback, we recently expanded our engagement and training manual, and we’d like to share it here. It now reflects new features that our Tech team developed for the dashboard; a few tweaks to our thinking about pedagogy in this context; updated protocol for archiving visitor chats as “snippets”; our favorite research resources; and words of wisdom from the ASK team, which recently reached its capacity of six members again.

Now we’re all preparing for some major gallery reinstallations around the building—a topic for a future post!

Chatting About… Chats
Thu, 17 Mar 2016

As the ASK Team gears up for the app’s Android launch in April and expands to two full-time members and four part-time members, it seems like an appropriate time for us to refresh our thinking about visitor engagement through our chats. Engagement is a topic that’s always on our minds, but a more focused reflection feels particularly appropriate right now.

Since every ASK visitor chat is archived within the system, both as an entire conversation and as “snippets” tagged to individual works of art, we can easily look back and review past interactions. We’re building a monthly discussion about chat strategies and pedagogical approaches into our team’s meeting schedule, and we talk about engagement on a day-to-day basis as well. Some of the conversation is pretty straightforward: Which collection areas seem to be drawing the most traffic lately? Are we getting many chats in a new special exhibition? Has any particular work of art recently challenged us in terms of content?
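
To picture what that archive looks like, here is a minimal sketch of a possible data model for conversations and their object-tagged snippets. It is not the dashboard’s actual code; the class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Message:
    sender: str                      # "visitor" or "team"
    text: str
    sent_at: datetime
    image_url: Optional[str] = None  # visitors often send a snapshot instead of text

@dataclass
class Snippet:
    """A slice of a conversation archived against a single work of art."""
    object_id: str                   # e.g. an accession number (hypothetical field name)
    messages: List[Message]
    tags: List[str] = field(default_factory=list)  # optional themes for later review

@dataclass
class Conversation:
    conversation_id: str
    started_at: datetime
    messages: List[Message] = field(default_factory=list)   # the entire chat
    snippets: List[Snippet] = field(default_factory=list)   # the object-tagged slices

    def snippets_for(self, object_id: str) -> List[Snippet]:
        """Pull every archived snippet about one artwork, e.g. for a monthly review."""
        return [s for s in self.snippets if s.object_id == object_id]
```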

Two members of the ASK Team, Elizabeth and Zinia, review visitor chats.

We’re always studying the collection as a team, and we try to anticipate visitor interest in specific shows or objects, but we’re also responsive to ongoing chat results. Sometimes these findings motivate the team to expand an existing wiki page or to create a new wiki for an object that doesn’t have one yet. And if we get complex questions about a special exhibition, we can follow up with the exhibition’s curator and then incorporate his or her replies into our reference materials.

The first messages that visitors see are something that we continue to adapt.

Some engagement issues require more reflection as a team. For example, we’ve been honing our use of opening messages since the app launched last year. We originally spent more time greeting the visitor and thanking him or her for trying the app. Now, however, the visitor is welcomed by a photo of the team and two intro messages that ease him or her into the app, plus two auto-fire replies in response to his or her first sent message, so we cut to the chase by offering a concise yet personable response. Making the chat as specific and, well, as human as possible, as quickly as possible, also helps us to overcome the lingering challenge of some users assuming we’re a “bot” running on an algorithm.
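
To make that opening sequence concrete, here is a minimal sketch of how the welcome messages and auto-fire replies described above might be configured. The wording, structure, and function names are assumptions for illustration, not the app’s actual messages or code.

```python
# A sketch of the opening sequence described above; everything here is illustrative.

WELCOME_SEQUENCE = [
    {"type": "image", "content": "ask_team_photo.jpg"},  # photo of the team
    {"type": "text", "content": "Hi! We're the ASK team, here in the building."},
    {"type": "text", "content": "Ask us anything about the art you're looking at."},
]

# Two auto-fire replies sent the moment the visitor's first message arrives,
# buying the team a few seconds to craft a specific, human response.
FIRST_MESSAGE_AUTOREPLIES = [
    "Thanks for your message!",
    "We're taking a look and will reply in just a moment.",
]

def on_chat_opened(send):
    """Send the welcome photo and the two intro messages."""
    for message in WELCOME_SEQUENCE:
        send(message)

def on_first_visitor_message(send):
    """Fire the two automatic acknowledgements after the visitor's first message."""
    for text in FIRST_MESSAGE_AUTOREPLIES:
        send({"type": "text", "content": text})

if __name__ == "__main__":
    on_chat_opened(print)            # demo: dump the welcome sequence to the console
    on_first_visitor_message(print)  # demo: dump the auto-fire replies
```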

As Shelley recently mentioned, another issue that we all deal with is user anonymity. On our end, the ASK team wants to provide a personalized experience for each visitor. Sometimes a visitor will volunteer information about his or her age, occupation, or knowledge of art history, but usually we glean what we can from the person’s texting style and choice of words. However, if we’re trying to get an even closer read, should we ask the visitor directly about himself or herself?

We experimented with this approach by asking early questions like “Is this your first visit to the Brooklyn Museum?” or “Are you here with friends/family today?” When we often received no reply, we realized that visitors preferred to maintain their privacy. However, we also found that if a chat was progressing in a friendly manner and we were starting to have a hunch about the visitor, we sometimes could throw out a casual (and complimentary) remark like, “You know a lot about printmaking techniques! Are you an artist?” or “That’s a really great historical point; you could be a teacher!” Comments like these were well received by visitors, whether we had guessed correctly or not, and then they sometimes went on to offer more information about themselves after all.

A student asks for help with homework.

We also keep track of specific chats that deserve review as a group. One case emerged last August, when we suddenly noticed that we were getting requests from summer-session students who wanted help with their final assignments. On one hand, we want to act as a helpful source of reliable information. On the other hand, if all we did was send factual answers to a student’s questions, would that really be the best way for the student to learn?

We pondered our approach and decided that we would push the students to look closely and think critically by guiding them with questions rather than simple answers. And if a student sent us a photo of his or her homework assignment instead of an actual question (something that happened more than once) or confessed that he or she was actually sitting on a bench outside the Museum (our geo-fence includes the plaza!), we humorously but firmly encouraged that student to come inside, follow our directions to the art, and get up close and personal with it.
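
As an aside on the geo-fence, here is a minimal sketch of a simple radius-based check using the haversine formula. The coordinates and radius are hypothetical values chosen for illustration; the app’s actual geo-fence is configured elsewhere and may not work this way.

```python
import math

# Hypothetical values for illustration only.
MUSEUM_LAT, MUSEUM_LON = 40.6712, -73.9636  # approximate Brooklyn Museum coordinates
FENCE_RADIUS_M = 250                        # generous enough to take in the plaza

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    earth_radius_m = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def inside_geofence(lat, lon):
    """True if a device is close enough to chat, including someone on a plaza bench."""
    return haversine_m(lat, lon, MUSEUM_LAT, MUSEUM_LON) <= FENCE_RADIUS_M

print(inside_geofence(40.6714, -73.9632))  # a point on the plaza -> True
```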

In this instance a visitor has sent us a photo with no question.

One type of chat that originally frustrated us wasn’t really a chat at all. At least once a day, a visitor would send us a sequence of photographs without any text attached. At first we tried to draw these visitors out with our own questions (“Did that work catch your eye for any particular reason?” “What do you think of that artist’s use of color?”). However, when we didn’t receive replies, and the photos just kept coming, we resigned ourselves to sending back interesting factual information about the works.

For a while we were bothered by this kind of exchange, because we felt we weren’t meeting our goals of engagement. Then we realized that we had to shift our way of thinking. If the visitor was sufficiently involved in the app to send us photos of three or four (or often more) works of art that she or he had viewed, then the exchange actually was a rewarding experience for that person.

Any form of reflective practice is a cyclical and ongoing process, and the ASK team will continue to refine its techniques for engagement as we enter a new phase of the project this spring. We’re continually making new connections across the collections, keeping up with the content of changing installations and new exhibitions, and learning more about our visitors’ expectations. We hope they enjoy learning along with us.

Who are we looking for in an Audience Engagement Team?
Wed, 25 Feb 2015

I’ve just joined the Bloomberg Connects project as the Audience Engagement Lead. I will be heading the team that will be answering inquiries from visitors and engaging them in dialogue about objects in the Museum’s collection.

Learning to use the dashboard prior to user testing.

One of my first experiences in the position was to participate in a round of user testing—the largest thus far. It was intense, to say the least. Over the course of three hours, we had thirty-five individuals (from a range of backgrounds) in our American Identities exhibition asking questions and sharing their thoughts on objects. Our Chief Curator, Kevin Stayton, was there to answer questions, and Marina Kliger (our new Curatorial Liaison for the project) and I were there to run the dashboard: typing Kevin’s responses to the users, providing reinforcement by responding to some of the users’ inquiries, and doing on-the-fly research when necessary. At the end of the three hours our collective heart rates must have been alarming.

Earlier rounds of user testing used this prompt which felt too automated to users and proved a barrier to their participation.

The first hour was especially stressful. In the first fifteen minutes we received around one question every 30–60 seconds. It started with users sending snapshots or titles of artworks in response to the app’s initial prompt, “What work of art are you looking at right now?” The intent of this prompt was to immediately engage the user with the Museum’s collection (and the app) and for us (Kevin, Marina, and me) to follow with a question or information about the object to instigate close looking or further inquiry on the part of the user.

As mentioned throughout the blog, the development of this app is being executed through an iterative process. The team had learned during previous evaluations that users found this prompt to feel automated and felt that they needed to craft a smart question, which limited their engagement. Since the prompt created a barrier, we wanted to present the user with an initial prompt that would invite immediate participation. As one of the intended outcomes of this project is to foster dialogue about the Museum’s collection, we decided to begin with the collection.

Shifting to this prompt in the latest round immediately engaged users and gave us flexibility in how we respond.

Changing the prompt proved effective in getting users to use the app immediately. We had twenty individuals registered for the first hour (6–7 pm) of testing, and everyone showed up right on time. As we hoped, the prompt instigated immediate engagement with the app once the users entered the galleries. We hadn’t anticipated, however, the stressful situation of receiving a deluge of inquiries at once. Fortunately, we were able to temper the deluge by staggering entry into the galleries for the next two rounds of registrants. We know that we won’t have control over how many visitors will be using the app once it’s live, so we will have to continue to refine the prompt over the coming months to encourage participation, but in a way that’s manageable on the backend.
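
We don’t yet know exactly what “manageable on the backend” will look like, but one hypothetical approach is a simple queue that holds incoming conversations and hands each one to whichever team member has the fewest open chats. This is purely a sketch under that assumption, not a description of how the dashboard actually works.

```python
from collections import deque

class ChatRouter:
    """Toy sketch: queue incoming conversations, then hand each one to the
    team member with the fewest open chats."""

    def __init__(self, team):
        self.waiting = deque()                       # conversations not yet picked up
        self.open_chats = {name: 0 for name in team}

    def enqueue(self, conversation_id):
        self.waiting.append(conversation_id)

    def assign_next(self):
        if not self.waiting:
            return None
        conversation_id = self.waiting.popleft()
        responder = min(self.open_chats, key=self.open_chats.get)
        self.open_chats[responder] += 1
        return responder, conversation_id

router = ChatRouter(["responder_a", "responder_b", "responder_c"])
for chat in ["chat-001", "chat-002", "chat-003", "chat-004"]:
    router.enqueue(chat)
print(router.assign_next())  # ('responder_a', 'chat-001')
print(router.assign_next())  # ('responder_b', 'chat-002') -- load spreads across the team
```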

Our second intent for the new prompt was to encourage further inquiry on the part of the user by sharing information that could hopefully spark curiosity about the objects and collection. We found this to be true in some of our conversations. For example, the first snapshot that we received was of “Winter Scene in Brooklyn.” We received it and asked each other, “What can we say about this that will get them (the visitor) curious, or have them look more closely?” The object is rich with details—the groups of men in conversation, the man carrying the bundle of wood, the various storefronts—each providing us with a glimpse into the daily life and labor force of early 19th-century Brooklyn.

Francis Guy (American, 1760-1820). Winter Scene in Brooklyn, ca. 1819-1820. Oil on canvas, 58 3/8 x 74 9/16 in. (148.2 x 189.4 cm). Brooklyn Museum, Transferred from the Brooklyn Institute of Arts and Sciences to the Brooklyn Museum, 97.13

We had to decide—in a flash—how we were going to engage the visitor with the painting. As we were deciding on the response, a flood of other snapshots and object titles inundated the dashboard—we had to get our first response out so we could attend to other visitors who were already waiting. As time was a constraint, we responded first with some general background, “The picture is one of the richest ones for content and stories,” hoping that this would serve as a teaser for the visitor to look for some of those stories and follow up with questions. Which they did! Their next question was, “I’m curious which portion of Downtown Brooklyn [this] depicts.” Kevin knew the answer immediately, and we responded to the visitor, “This is the area near the Fulton Ferry, low on the horizon, rather than on the hills of the Heights,” and then a few moments later we added, “but none of the buildings in this picture survive.” The visitor responded again with a “thanks” and “That’s development for you.”

This snippet of one of our first conversations from our night of user testing reflects what we’re hoping the Audience Engagement Team Members will be able to accomplish: provide accurate information at a rapid-fire pace, framed in a way that instigates closer looking, in a manner that is conversational and hopefully opens further dialogue.

A tester during our last testing session. Engagement through the app encourages closer looking.

We are now in the process of hiring the six individuals who will make up that team. Having the user testing just before the hiring process has given us great insight into what we’re looking for in the Team Members. As I mentioned above, they will need to provide information on the fly, which means that we are looking for individuals with a breadth of art historical knowledge, as well as the ability to do background research under the pressure of time constraints (within minutes!). The level of pressure that we felt with such an incredibly tight time constraint was not something I had anticipated before the user testing—which is great to know when hiring. The ability to stay calm and personable in a stressful situation will be essential for individuals on the team. In addition to having grace under pressure, a breadth of knowledge, and the curiosity to learn more, we’re looking for individuals who are also curious about people and interested in engaging them with art objects through thoughtful conversation and the sharing of information.

I envision working with the team as a cohort of individuals who are learning and experimenting together to find the best ways to engage our Museum visitors with the collection using the ASK app. If you know anyone who would like to join the team, send them our way!

Local Matters
Thu, 25 Sep 2014

If you’ve been reading the blog lately you know we’ve been taking stock of our digital efforts and making considerable changes. I’ve been discussing what’s not working, but it’s also worth reporting on the trends we’ve been seeing and some of the new directions we are headed as a result. Did you know that our most engaged users on the web are locals? Over many years of projects, metrics have been showing us that the closer someone is physically to the Museum, the more likely they are to be invested with us digitally.

In 2008, we saw this with Click! A Crowd-Curated Exhibition. 3,344 people cast 410,089 evaluations using a web-based activity that would determine a resulting exhibition. Participants in more than 40 countries took part in the activity, but 64.5% were local to the extended tri-state area around New York. A deeper look shows us that the bulk of the participation was coming from a local audience: 74.1% of the evaluations were cast by those in the tri-state area, with 45.7% of evaluations being cast by those within Brooklyn. At the time, we figured this was because of the thematic nature of the exhibition, which depicted the “changing faces of Brooklyn.”

Google Analytics (along with zip code metrics) showed the majority of participation in Click! and Split Second was coming from local sources.

In 2011, we launched another web-driven project to produce an exhibition. Split Second: Indian Paintings began with an online activity that would analyze people’s split-second reactions to our collection of Indian paintings. The resulting exhibition was anything but local in theme, so we figured a much broader audience would find this of interest. In total, 4,617 participants created 176,394 ratings and spent 7 minutes and 32 seconds on average in their session. Participants took part from 59 countries, but it was the ones in the New York City area that were the most invested; their sessions averaged 15 minutes, roughly double the overall average.

It’s not only these two projects that demonstrate this trend; we see similar things happening in our general website statistics and also on our social media. Though we’ve disbanded the Posse and tagging games, it’s worth noting that, though small in number, the most engaged users were also locals, many of whom had strong, long-term relationships with the Museum.

We’ve started to see a clearer picture here about how much local participation matters, and if we are going for “engagement” as a strategy, we’re finding these users should be at the forefront of our minds. After all, GO was conceived to address this trend and, as a result, saw participation that I’d describe as incredibly deep. 1,708 artists opened their studios to 18,000 visitors who made 147,000 studio visits over the course of a weekend (full stats). In order to nominate artists for the resulting exhibition, we asked voters to see at least five studios, but the average turned out to be eight. More than the metrics, though, it was the comments that so clearly demonstrated how invested people were.

As we head into our project for Bloomberg Connects, engagement is the goal, and we see our local users as central to both its creation and success.

Clear Choices in Tagging
Tue, 22 Jul 2014

Remember my post on Social Change? We’ve been evaluating our digital projects with a careful eye toward what’s working and what isn’t. At this juncture, we’re making sometimes difficult choices because we are on the road to coding a large-scale digital project (more on this soon) and we need to streamline in order to allocate our small staff toward this substantial new initiative.

Every project takes time and energy, both to create and to maintain over time. As we evaluate we consider several factors: institutional goals, comparative engagement metrics across many projects, and a careful look at what’s going on within any given offering.

As of today, we are retiring the Brooklyn Museum Posse along with our tagging games, Tag! You’re It and Freeze Tag. The decision to pull these activities was difficult because we fully believe in how important tagging is to the health of our collection online. After all, one person’s “landscape” may be another person’s “tree,” and all of these terms help make our objects discoverable online. As invested as we were in the program…

  • Engagement within the games and the Posse has been incredibly low in numbers, but high in yield. Launched in 2008, our Posse numbered only 1,100 users over all these years. While 1,100 users may not seem like many, collectively they contributed 230,186 tags. That’s a lot of SEO in our collection online and represents an incredible effort that cannot be ignored (see the quick arithmetic after this list).
  • Posse members were using both games and object pages to tag objects. In fact, more than half of the tags were being delivered directly via the object pages, showing the games were not necessarily a more compelling option.
  • Tagging has shifted to a more social language, not a descriptive one. For as much as we want the keywords, the notion of tags as keywords has changed considerably. We need to change along with our audience and recognize that our games are outdated conceptually.
  • Over the years, tagging has decreased substantially. Within the games, tagging contribution peaked in 2009 with 32,409 tags, but by 2014 we were logging 8,089 (year to date).
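
A quick back-of-the-envelope calculation with the figures above (a sketch, nothing more) shows why we describe the yield as high even though the audience was small:

```python
# Figures quoted in the list above.
total_tags = 230_186
posse_members = 1_100
peak_2009_tags = 32_409
ytd_2014_tags = 8_089

print(round(total_tags / posse_members))         # ~209 tags per Posse member on average
print(round(ytd_2014_tags / peak_2009_tags, 2))  # 2014 (ytd) is about 25% of the 2009 peak
```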

When we started seeing the above, we began asking ourselves who we were engaging. If our institutional mission centers on community, with the aim of engaging a broad audience, are the Posse and our tagging games doing that effectively? No…

  • I’ll preface this one by saying “we love you, we really do.” Our core taggers were likely the very niche audience reading this blog. When we looked carefully at all the bio statements people were giving us when they signed up for a Posse account, it was incredibly clear these activities were engaging museum professionals and museum studies students, with a smattering of art history students. 30% of the tags were coming from Brooklyn Museum staff, who used the games to contribute their own tags (and were encouraged to do this through accounts so we could track internal participation), 22% were coming from museum professionals, and 8% were coming from other accounts not identified as one of the other two buckets.
  • Staff were the most consistently engaged, with an average activity rate of 338 days. Museum professionals were with us an average of 102 days. Other accounts held on for an average of 97 days. Even though the metrics for Museum professionals and other accounts were roughly similar, when we compare that with the tagging percentages it shows us that the people fitting into the “other” account bucket (the core audience we were hoping to engage) were far less engaged, given the number of tags they were contributing.
Hiroko Okada (Japanese, born 1970). Future Plan #2, 2003. Chromogenic photograph, 54 13/16 x 35 1/8 in. (139.2 x 89.2 cm). Brooklyn Museum, Gift of the artist and Robert A. Levinson Fund, 2008.25. © Hiroko Okada

So, we faced a bit of a conundrum. We know tagging is incredibly valuable, but our statistics were showing that we had a small audience for it and, in addition, that audience was more one of insiders than the general public.  If tagging is meant to democratize collections by applying everyday words instead of specialized ones, you have to wonder how much traction we were getting if the majority of tags were coming from specialized voices. That insider aspect is pretty interesting…

  • In looking at our analytics, we can see the majority of the search terms for the collection are actually specific. People are looking for artists, movements, types, and specific cultures; fewer people seem to be searching by general themes. (It’s important to note that we are not looking in depth at the analytics, just glancing for trends.) The biggest exception to this in the last year was 169 searches on the term “seaweed,” likely owing to this Tumblr post.
  • Analytics also shows users browsing by tags, and this represents 4% of our collection traffic. At a glance, the majority of these terms are specific (egypt, tissot, sculpture, painting, watercolor, amarna) and fewer are thematic (woman, erotic, mask, glass, bird).
  • In presentations, I’ve always cited Hiroko Okada’s Future Plan #2 as a key example of the benefits of tagging. The object metadata tells us practically nothing about this image. If you didn’t know the name of the artist or the title of the work, you’d likely never find it. Tagging allows the image to now be searched on the term “pregnant.” This is clearly an isolated example, but when only two searches for “pregnant” cropped up in the last year, it makes you take a close look at where time and energy are being spent. (A small illustration of tag-based discoverability follows this list.)
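
To show what that Okada example means in practice, here is a tiny, self-contained sketch of tag-based discoverability. The record is paraphrased from the credit line above, the extra tags are invented for the example, and our actual collection search is a different system.

```python
# Illustrative only: an in-memory search showing why a visitor-contributed tag
# makes an otherwise opaque record findable.
objects = [
    {
        "id": "2008.25",
        "artist": "Hiroko Okada",
        "title": "Future Plan #2",
        "tags": ["pregnant", "photograph", "humor"],  # "photograph" and "humor" are invented
    },
]

def search(query):
    q = query.lower()
    return [
        o for o in objects
        if q in o["artist"].lower()
        or q in o["title"].lower()
        or any(q in tag for tag in o["tags"])
    ]

print(search("pregnant"))  # found only because of the tag
print(search("okada"))     # found via the artist metadata
```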

At this point it was pretty clear that tagging wasn’t working on many levels, but why not keep these activities around in the hopes that some data is better than none? Well, tagging isn’t gone from our site entirely; you can still add and delete tags from any object page. What’s gone is the technical overhead: signing in, creating a profile that attributes your tags to your identity, and the games. We decided we needed to eliminate the games because we have to allocate the limited resources of our staff carefully. We simply had to acknowledge this was not working well enough to justify the ongoing staff time.

This was not a decision we took lightly, especially given that this is a program we hold dear and are known for; it took us months of wrangling before concluding this was the route to take. The path, however, comes with the realization that there’s a better way for our community to contribute to our web presence, and that is something you’ll be hearing about very soon.
