BKM TECH – Technology blog of the Brooklyn Museum

Building a little data capture into our admissions process

As I mentioned in my previous post about mapping our digital landscape, we’re not letting the lack of a CRM completely get us down. We have been trying to find creative ways to gather data with the systems we currently use. For years we have asked for visitor zip codes as part of the admissions transaction, since we need to report those numbers to our city funders. We recently started to wonder if we could get just a bit more info at the front desk. In July we launched a simple test using our point-of-sale system (Siriusware) to gather the answer to a single-question survey: What is your reason for visiting? The answer to this basic question would be extremely helpful as we plan for future exhibitions, forecast revenue, and think about how to market ourselves.

We began with a very short list of options in a drop-down menu that included the special exhibitions, a few specific collection areas, and the collection more generally. We quickly found the need to add a few more options. For example, the admissions team asked for a “just in the neighborhood” option, as it’s a common response to the question (though the data shows it’s not as common as they likely felt it was).

The survey appears in a pop-up window and has a drop-down menu of options. Unfortunately, the option to skip or cancel is baked-in; we can’t make this a required question to complete the transaction.

Results for the first two months are interesting. In July, the permanent collection ranked highest in response rate, while in August it was our Pierre Cardin special exhibition. The initial lack of options is one of the reasons for the high “other” response rate in July, which dipped in the second month as more options were added. Currently, we have 16 options plus skip/decline. This feels like a lot, but maybe it’s OK. In particular, I wonder about including Korean art and African art in the list at the moment, since both are temporarily off view, but it would help us track an uptick once those collections are on view once more. We also have to remember to update the list regularly as special exhibitions open and close. For example, both the Liz Johnson Artur and One: Egúngún exhibitions closed mid-August, which explains the dip in responses.

A quick comparison of the total number of survey responses (which should be every transaction) to the total number of visitors who were required to visit the admissions desk shows the transaction count is about 60-65% of the visitor count. Multiple tickets can be purchased through a single transaction—and we know most of our visitors come in pairs or groups—so that feels close to the right percentage. I think we are still getting more cancellations than we should, and we’ll keep working on it. The admissions team is meant to pose the question in a casual and conversational manner so it doesn’t feel like a survey (or an interrogation!) and select a response in order to proceed with the sale, although it is possible to cancel and move on. To avoid cancellation, we included a skip/decline option. Unfortunately, not everyone is consistently asking the survey question, which we know because we can run reports on who is logging which responses. For example, we found one person who mostly just cancelled the survey in the first week, and we were able to speak with them. While we don’t want the survey completion rate to become punitive, we do want to encourage completion because the information is important for us as an institution. Finding that balance can be tricky.
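As a rough sketch of that arithmetic (with made-up counts; the real figures come from Siriusware and our visitor reports), the transaction-to-visitor ratio also implies an average party size:

```python
# Hypothetical counts for illustration only.
transactions = 6_500   # survey prompts shown (one per transaction)
visitors = 10_000      # visitors who came through the admissions desk

coverage = transactions / visitors            # share of visitors covered
implied_party_size = visitors / transactions  # average tickets per transaction

print(f"Transactions are {coverage:.0%} of visitors")           # 65%
print(f"Implied average party size: {implied_party_size:.2f}")  # 1.54
```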

After two months, we are still working out the kinks, mostly in terms of making this process a habit for the admissions team. A next step is to work with our Tech team to create a report that would knit together the survey answer, ticket info, and zip code from each transaction so we can compare the data set as a whole. That would be a pretty powerful triumvirate.

Trends Across Time: An ASK Fashion Tour

As a follow-up to our ASK-guided gallery tours for Frida Kahlo: Appearances Can Be Deceiving and Pride Month, the ASK team has created a new tour as a tie-in for the special exhibition Pierre Cardin: Future Fashion. This time, we focused on the Museum’s fifth floor, where the Cardin show is installed. We wanted to plan something that visitors can try before or after seeing this exhibition, or even as a freestanding option.

Cards feature a detail of William Merritt Chase’s portrait of Lydia Field Emmett, with instructions.

In preparation, we discussed our learnings from previous themed tours and established a few small but important goals for the structure and promotion of this engagement activity. Specifically, we wanted to manage visitor expectations by specifying the tour location and keeping that area clearly defined (just one floor, in this case); indicating in advance what the format would be (i.e., texting/chatting); and offering the user a prompt word to send as their first message.

Our palm card for this new tour includes all the above information, succinctly stated, as well as a detail of a favorite painting from our American Art collections. For card handout, we’re concentrating on the fifth floor. Cards are displayed in a rack near the elevator that visitors take to reach that floor. The cards are also placed at the ticket check-in kiosk for Pierre Cardin: Future Fashion, where Visitor Experience ticketing staff can hand them out and explain them. Our ASK Ambassadors are also promoting the tour activity as they circulate around the Museum, of course!

Once a visitor begins the tour, the ten stops encompass works in a range of media from different locations and time periods, from a Chiriqui gold pendant (circa 1000-1500) to Luigi Lucioni’s portrait of artist Paul Cadmus (1927). For each one, the ASK team offers a few interesting facts, sometimes touching on past trends in cosmetics and grooming as well as costume history.

These women’s pant-suits were as edgy in the 1930s as Cardin’s unisex designs would be in the 1960s.

We can also draw occasional parallels between historical clothing, jewelry, and accessories and Cardin’s designs, to complement users’ visits to that exhibition or to inspire them to check it out in the future.

So far, we’ve had some positive user response. One visitor thanked us by writing, “This has been terrific, a great interactive tour. Definitely encouraged me to look closer, which I do like and tend to do.” We’ll be tracking visitor use and reactions throughout the run of this exhibition, to be shared at a later date!

Tiny Cards, Big Fun: What Impact?

In 2017 we partnered with educational start-up Duolingo and their new digital platform, Tinycards, to produce fun and educational art history flashcard decks. Two years, 24 decks, 1,000 followers, and 9,339 pins later, our partnership has officially come to a close. Here’s the wrap-up.

Our collection of decks spans a variety of subjects. While some, like “Jewish Ritual Objects,” are object-focused, others capitalize on small details, like hieroglyphs or hand gestures, to broaden the thematic scope.

Our original goal in partnering with Tinycards was to reach new audiences: not solely by publishing photos and fun facts to their platform, but by framing our collection in new and engaging ways. Our themed decks explored Egyptian hieroglyphs, Buddhist mudras, and Jewish ritual objects, to name a few.

These decks proved a stimulating challenge to formulate, requiring both creative thinking and rigorous research. They were also surprisingly time-consuming to produce, requiring collaboration with our curatorial, editorial, and design departments, which often created unforeseen delays. From start to finish, producing a deck took about five weeks. In the beginning, we were publishing every two weeks, but that quickly became unsustainable and we scaled back to once per month. Even then, we kept returning to the same question: what is the return on this significant investment of staff time and energy?

As time went on, it became less about whether Tinycards was fun (it was!) and more about whether Tinycards was useful (hard to say). Were we reaching this sought-after “new audience”? Were we meeting their wants and needs? What made a deck popular? How could we apply that knowledge to create more successful decks in the future? And so on.

Metrics are limited, which makes many of these questions unanswerable, but here’s what we do know. We have access to the number of people who follow our account (1,000) and the number of people who have “pinned” a deck, essentially bookmarking or favoriting it for future reference. 9,339 pins across 24 decks is not insignificant. These pins serve as a measure of relative popularity and success within our 24-deck set.

Here are the top and bottom 5 Tinycards decks, according to pins. The “success” of our decks varied wildly, with some garnering as many as 961 pins and others only 39.

What’s interesting is that publish date doesn’t seem to have an impact on accumulated pins. Rather, it’s the content of the decks, likely as conveyed by the title and cover image, that draws people in. For example, our 3rd published deck, titled Images of God: Hindu Deities, is our most popular, with 961 pins. It’s followed by our 12th published deck, Egyptian Hieroglyphs, with 795 pins. In fact, our top 5 decks, published 3rd, 12th, 4th, 23rd, and 19th respectively, account for 40% of our total pins.

Without more granular data, it’s hard to say exactly why these decks performed so much better than the others. Three-fifths of the top five deal with religion, so maybe that’s a draw, but then how do you explain our worst-performing deck, International Buddha, which has only 39 pins? Egypt is generally popular, right? But the other Egyptian decks rank 7th, 9th, 12th, 13th, and 16th in terms of pins. Highlights of Art History (679 pins) being a draw for students might account for its popularity, but our other “art history 101” deck, on the Ashcan School, has fewer than half as many pins (300). The truth is there’s just nowhere further to go unless and until more data becomes available, which seems unlikely.
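For a sense of scale, here is a quick back-of-the-envelope check using only the pin counts named in this post (the other 21 decks are not listed, so this is a floor, not the full top-5 share):

```python
total_pins = 9_339

# Pin counts named in this post; the remaining 21 decks are omitted.
named_decks = {
    "Images of God: Hindu Deities": 961,
    "Egyptian Hieroglyphs": 795,
    "Highlights of Art History": 679,
}

share = sum(named_decks.values()) / total_pins
print(f"The three named decks alone hold {share:.0%} of all pins")  # 26%
```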

Tinycards has prompted the formulation of new frameworks for interpreting our collection that can be exploited elsewhere. For instance, the research conducted for this deck on games could inform in-gallery engagement activities.

Let’s broaden the scope of our lessons learned beyond the data for a moment. This project served as an outlet for creative energy that will no doubt spark inspiration for future activities. I’ve already been thinking of ways to incorporate gameplay into our galleries after researching the various game boards, playing cards, and dice in our collection for the It’s All Fun and Games deck. This wrap-up also highlights the importance of establishing goals, measures of success, and evaluation strategies before or shortly after we roll out public-facing activities. Otherwise, how can we continue to make data-driven decisions to serve our audiences, both existing and untapped?

While the decks we have already created will remain on the platform for audiences to enjoy, it’s time for us to redirect our energies into projects with more measurable impact. Goodbye, Tinycards; it has indeed been Big Fun.


Showing Our Pride: A New Themed ASK Tour

“Celebrate Pride Month! Our team of friendly experts guide you on a tour of LGBTQ+ artists and themes throughout the Museum via text message, chatting with you in real time as you explore.”

This was the message on palm cards that our ASK Ambassadors distributed to Museum visitors throughout June. As a special engagement activity for Pride Month, visitors could take an ASK-guided tour of our galleries and learn more about gender and queer identity in art. 

The card featured a detail of a work by Amaryllis DeJesus Moleski, on view in the exhibition “Nobody Promised You Tomorrow.”

This tour could be taken as a complementary activity to the special exhibition Nobody Promised You Tomorrow: Art 50 Years After Stonewall or as a standalone activity. And, as with all our ASK engagement offerings, we kept things responsive and personalized—every visitor could set their own pace and tone.

Visitors could begin their experience in the Museum lobby at a painting by Kehinde Wiley.

As we envisioned it, this app-guided tour would include a few very popular works from our collections (like Kehinde Wiley’s Napoleon Crossing the Alps) as well as some lesser-known works. They could be works by artists who identified as LGBTQ+, portraits of LGBTQ+ individuals, or works that touched on broader themes of gender identity.

The ASK Team collaborated to select ten works of art with a range of dates and media, from Donald Moffett’s Lot 043017 (Multiflora, Radiant Blue) to a coffin in the Ancient Egyptian collection, from Aaron Ben-Shmuel’s stone bust of Walt Whitman to Deborah Kass’s neon wall-piece After Louise Bourgeois. They compiled information about these works into a reference document and they strategized about giving directions to help the visitor navigate from stop to stop.

Elizabeth of the ASK Team tracked these tours (which accounted for about 22% of our app traffic) throughout June, and she noticed an interesting split. Visitors who began engaging with us on the Museum’s first floor were more likely to invest in the full tour experience, following our cues to visit works on the third, fourth, and fifth floors of the Museum. They often spent more than a half-hour with us for this itinerary.

Special labels with Pride flag icons were placed beside the “tour stops.”

Meanwhile, other visitors encountered individual works with our ASK Pride Month labels in the galleries and sent questions about them. These visitors were usually satisfied with learning about that particular work and might move one more stop nearby when we invited them to continue chatting. However, they were less interested in experiencing the complete tour.

The ASK Team also received a few requests from visitors who were ready to go even further. For example, when one visitor asked whether they could see anything by LGBTQ+ artists in the new special exhibition Rembrandt to Picasso: Five Centuries of European Works on Paper, we added a drawing by Rosa Bonheur to our list.

It’s been two years since we first tailored an ASK activity to a specific show or event, during Georgia O’Keeffe: Living Modern, and we continue to learn from each iteration. Next up? An engagement option related to the special exhibition Pierre Cardin: Future Fashion. More about that soon!

Labels Provide an Entry Point for ASK (We Think)

In my last post I detailed how I knitted together thematic connections across different collections and what effect in-gallery labels have on object engagement, but I wasn’t yet able to get any insight into what users’ full conversations looked like. We met with the Tech team to talk through what needs and issues exist in order to effectively analyze chat data. Since one of my main goals was to determine a bit more about how visitors are using the in-gallery ASK labels during their visit, we decided that a search function would be most useful, similar to the search for snippets. Our incredible web developer Jacki Williams implemented the chat search into the dashboard so I could pull complete visitor chats based on what I was looking for.

Jacki modeled the chat search function after the snippet search, which allows for three ways to access the information.

The chat search function offers three possible ways to search through chats. The (seemingly) easiest way is to search for a particular object via its accession number. However, not all chats have objects tagged with their accession numbers. That is why being able to search words or phrases in the chats (the second option) is so useful. I might not be able to search the accession number, 83.84, and pull up all of the chats for it. However, I can search “East River View with Brooklyn Bridge” or “Yvonne Jacquette” to pull up more chats where individuals were asking about this particular work. The third option is to search by chat ID, which is useful if I need to reference a particular chat.
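The dashboard’s internals aren’t shown in this post, but the three search modes are easy to picture. Here is a minimal sketch over a hypothetical JSON export of chats; every field name (id, accession_numbers, messages, body) is an assumption for illustration:

```python
import json

def search_chats(chats, accession=None, text=None, chat_id=None):
    """Return chats matching an accession number, a text phrase, or a chat ID."""
    results = []
    for chat in chats:
        if chat_id is not None and chat["id"] == chat_id:
            return [chat]  # IDs are unique, so stop at the first match
        if accession and accession in chat.get("accession_numbers", []):
            results.append(chat)
        elif text and any(text.lower() in msg["body"].lower()
                          for msg in chat["messages"]):
            results.append(chat)
    return results

with open("chats_export.json") as f:
    chats = json.load(f)

# 83.84 isn't tagged in every chat, so fall back to searching the title.
hits = search_chats(chats, text="East River View with Brooklyn Bridge")
```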

Jacki also created a function to filter the search results, which has been especially useful as of late. A significant number of visitors recently have been using ASK for Frida Kahlo-themed tours or quote hunts. If I want to look at an object that was incorporated in the Frida Kahlo activities, I can simply filter out the past few months in my results rather than manually combing through to remove the Frida Kahlo chats.

There is also the option to export desired chats from the results into a JSON file. The JSON export is super useful because the file format allows me to read the full chat conversations and serves as a great record of what I have already analyzed. This is a huge step up from copy/pasting into Google Docs and will likely have future benefits that the next Fellow or researcher can explore!

Searching by text via the chat search function was often the best way to find the ASK labels. The search text is even highlighted in the results, making it even easier to do a quick scan for information.

Additionally, the ability to search words or phrases was also useful for searching the in-gallery ASK label text. The object 83.84, East River View with Brooklyn Bridge, has an in-gallery label that says: “How did the artist get this view? Download our app or text … to learn more from our experts.” I used the search to pull up any chats that contain the label text to see if visitors were using the language verbatim and then what else they were looking at. 

I used the various search features to start compiling a table organized by object, which looks at the following (a minimal bookkeeping sketch follows the list):

  • Did the visitor use the label question?
  • Where in the trajectory of their visit (beginning, middle, end) did they ask about this object in particular?
  • Did they ask other questions about this object, and how many?
  • Did they ask about other objects? If so, how many and which objects?
  • Did they use other in-gallery label questions (the question provided in the prompt or a slight variation)? If so, how many and for which objects?
  • Did they ask any questions about objects that have ASK labels without using the label prompts specifically? If so, how many and which objects?
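To keep the bookkeeping honest, each chat I read got reduced to one row per object. Here is a sketch of that table; the fields mirror the list above, and the example values are invented, since reading and judging each chat remains a manual step:

```python
import csv

FIELDS = ["accession", "used_label_question", "position_in_visit",
          "questions_about_object", "other_objects_asked",
          "other_label_questions_used", "other_labeled_objects_asked"]

with open("label_usage.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    # One hand-coded example row; values are placeholders for illustration.
    writer.writerow({
        "accession": "83.84",
        "used_label_question": True,
        "position_in_visit": "beginning",
        "questions_about_object": 2,
        "other_objects_asked": 3,
        "other_label_questions_used": 0,
        "other_labeled_objects_asked": 1,
    })
```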

This process took a lot longer than I anticipated for one object alone. Unfortunately, there was no way to streamline gathering this information from each chat. The most time-consuming aspect was having to look up accession numbers and/or titles of different works that visitors asked about which did not get tagged in the chat or where the conversation did not explicitly include the title. This highlights an overarching issue with the chat data: not everything has been tagged consistently over time, especially with regard to the objects asked about.

Given how long it took just to go through one object/in-gallery label, and with time running out on my fellowship, I wouldn’t have time to go through as many objects and chats as I would have liked. I decided to focus on a few objects that have in-gallery label questions, a few with generic in-gallery labels, and a few popular objects with no labels at all. The downside is that the more popular or asked-about an object is, the longer it takes to analyze, further limiting how many I could get through.

From my very brief overview of a few objects and roughly 200 chats, only one visitor used the in-gallery labels more than once. This visitor used two question labels but also asked about two objects that did not have in-gallery labels. Additionally, from what I did look at, visitors who used the in-gallery labels most often used them for the first object they asked about. My hypothesis is that visitors are using the in-gallery labels to get an introduction to the app and then proceed to ask questions about objects they are interested in. This is great news, since that was the original goal of these labels: to get people using the app. More chats and objects will definitely need to be looked at to confirm this, but the foundation for going through chat data has now been established.

Over the past year I have learned a lot not just about the ASK app and its users but also that big learnings can still come from small tools. I’m going to miss working with this challenging but incredibly interesting data and I wish the best of luck to next year’s Fellow!

What encourages people to ASK about certain objects?

While I wanted to learn more about visitors’ complete interactions through the app, without the ability to systematically dive into chats, I chose to focus on another aspect of user behavior: in-gallery ASK labels. The first iteration of ASK labels consisted of question prompts, which were later switched to generic text about the app (e.g., “ask us for more info” or “ask us about what you see”), and are now back to questions (and a few hopefully provocative statements). The switch back to questions was due to anecdotal evidence that the generic prompts weren’t motivating people to use the app. To check this assumption, I looked at user behavior based on frequency of questions asked (determined via snippet counts) and whether that changes for objects with or without ASK labels. Spoiler alert: it does.

To start, I needed to complete an audit of which objects currently have, or have ever had, ASK-related labels. This was more challenging than it sounds, because over the years the record of which labels actually made it into the galleries was not always kept up to date. I was able to see which objects currently have ASK labels simply by walking through the museum and making notes. For objects whose labels are no longer up, there were two installments of labels that had been tagged in the dashboard (a first iteration of questions and a later iteration of generic labels).

The prompt for this ASK label reads: Curious about how turquoise was used in Tibetan medicine? Download our app to ask an expert.

The audit of existing labels could be a bit more elaborate. I was able to look at the text of each label and its snippet counts, as well as compare the label question with the questions that visitors actually asked. I would have also liked to look at the location of the labels (wall/case/pedestal, etc.) to see how that influences behavior, but time constraints prevented me from pursuing this avenue of inquiry.

Through this comparison, I was able to gain a little insight into how the labels affect user behavior. Based on snippet counts, objects with ASK labels average 16.19 snippets, while objects with no labels average 8.28. That means objects with current ASK labels see roughly twice the engagement of objects without.

To take it a step further, I broke down snippet counts by whether objects have ever had a label (according to the dashboard categories) and by the type of label. Objects that have or had generic labels average 14.15 snippets. Objects that have or had question labels average 19.90 snippets. Objects without any type of ASK label average 5.96 snippets. This suggests that question prompts are the most effective way to get users to engage with objects and the app.

I will add that one object was not considered in these counts. The Dinner Party has over 400 snippets and is a project of analysis in itself. It is asked about so much more frequently than other objects that it was skewing the averages, so I temporarily removed it from the dataset.
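Here is a sketch of how those averages fall out of a per-object export, assuming a spreadsheet with one row per object and columns for label type and snippet count (the file name, column names, and The Dinner Party’s accession number are all assumptions):

```python
import pandas as pd

# Hypothetical export: one row per object with columns
# accession, label_type ("question", "generic", "none"), snippet_count.
df = pd.read_csv("snippets_by_object.csv")

# The Dinner Party (400+ snippets) skews the means, so set it aside.
df = df[df["accession"] != "2002.10"]  # accession assumed for illustration

print(df.groupby("label_type")["snippet_count"].mean())
# In the actual data: question ~19.90, generic ~14.15, none ~5.96
```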

Since I had every object ever asked about (over 2,000 different artworks) and their snippet counts at my disposal, I also wanted to get back to some of the less frequently asked-about objects. I put together a spreadsheet of all the objects organized by their total snippets. From this I learned that objects with more than 50 snippets make up only 2% of all asked-about objects. Objects with 20 or more snippets make up 6% of the total. The majority of objects have fewer than 20 questions asked about them.
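The same hypothetical DataFrame from the sketch above gives these concentration figures in a couple of lines:

```python
# Share of objects above each snippet threshold.
share_over_50 = (df["snippet_count"] > 50).mean()   # ~2% in the actual data
share_20_plus = (df["snippet_count"] >= 20).mean()  # ~6%
print(f"{share_over_50:.0%} of objects have more than 50 snippets")
print(f"{share_20_plus:.0%} have 20 or more")
```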

What was it about the 2% of objects that made them so appealing, and what makes visitors want to ask about less “popular” objects? Other than the ASK labels, I wanted to see if there was anything glaringly obvious about the popular or less popular objects. I looked individually at a selection of objects with 50 or more snippets and recorded various qualities about them. I did the same with a selection of objects with only 2 snippets. Here’s what I found…

What makes people ask about objects?

  • Is larger in size
  • Has religious connotations
  • Has been on view for an extended length of time
  • Was made by an extremely well-known artist (e.g., Rodin) or depicts a recognizable figure (e.g., Jesus)
  • Is a mummy

Why might people ask fewer questions about objects?

  • Is smaller in size
  • Is a functional object

There is still much to be answered about the second question. However, since 98% of objects have fewer than 50 snippets, there just isn’t enough time to look at each object’s qualities and questions to determine more. Plus, I wanted to dedicate some remaining time to the full visitor journey with the app, but I was still dealing with no way to search through chats.

Stay tuned for my final blog post as I detail how this was resolved and what more I learned.

What kinds of questions do users ASK us about art?

I ended my last post with a brief exploration of what people are asking about via ASK. I was particularly interested in going beyond the top 100 most-asked-about works that the dashboard metrics pull. Based on the information that the top 100s gave me and my desire to learn if there are any similarities in questions asked across different collections, I decided to break down the dashboard analysis further. I looked again at the top 100 snippets, but broken out by collection type. This was the key to finding similarities across collections. From each of these collection-specific top 100s, I coded the questions based on what they were generally about. Here’s an example of what part of the Asian Art collection chart looked like:

A portion of the Asian Art collection coding chart.

Next came standardizing the themes so I could code them and compare them across collections.
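Standardizing amounted to a small controlled vocabulary of theme codes applied to each hand-read question. A sketch of the tallying step follows; the codes and sample pairs below are placeholders, not the real coding system:

```python
from collections import Counter

THEMES = {
    "materials": "Creation techniques/materials",
    "symbolism": "Symbolism",
    "function": "Purpose/function",
    "damage": "Damages/missing parts",
}

# (collection, theme_code) pairs produced by hand-coding each snippet.
coded_snippets = [
    ("Asian Art", "materials"), ("Asian Art", "damage"),
    ("Egyptian", "symbolism"), ("Egyptian", "damage"),
]

for (collection, code), n in sorted(Counter(coded_snippets).items()):
    print(f"{collection:<12} {THEMES[code]:<30} {n}")
```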

This chart serves as basic data viz for themes across collections. Note that these themes are my interpretation of the questions visitors asked and could vary depending on how another researcher codes them.

Several universal themes across collections came out of this analysis. I personally find ‘damages/missing parts’ one of the most fascinating findings. If a work of art has something that appears to be missing from it, intentional or not, visitors will likely ask about it. Other themes across most of the different collection areas include:

  • Creation techniques/materials
  • Symbolism
  • Purpose/function
  • Curatorial decisions
  • Significance/meaning

The coding system of themes, definitions, and collection areas. Note ECANEA stands for Egyptian, Classical and Ancient Near Eastern Art.

While this thematic breakdown does provide interesting insight into what visitors want to know, I still wanted to delve deeper into user behavior with the app, especially in regards to the complete trajectory of a visit. These themes are based on only a snippet of an entire conversation. What could we learn if we looked at the whole conversation? Unfortunately, the dashboard won’t export entire conversations yet, so I had to pause this line of inquiry. But I picked up another thread and began to follow it. More on that next week!

Initial Insights from ASK Data

During my first semester as the Pratt Visitor Experience & Engagement Fellow, I was able to learn a significant amount about ASK user behavior—despite the limitations of the data sets—and answer some of the following questions.

What does pre-download visitor behavior look like?

The notes from ASK ambassadors provided critical insight into behavior around the app, especially before downloading. Two main trends are especially worth noting:

Downloads occur most often and most easily at the start of a visit. Ambassadors most often had success approaching people and creating a desire to use the app earlier in the visit. Favorite spots include near the directory in the lobby (just past the admissions point) and the first-floor exhibitions and elevators.

Ambassadors help lower the barrier to entry. ASK Ambassadors frequently encountered visitors who struggled with how to use the app due to various technology issues (e.g., location services or Bluetooth not being turned on). There were instances of individuals who had the app downloaded but were unable to get it to work due to some setting or password issue on their phone. An ongoing challenge with ASK is that visitors aren’t always sure what questions to ask (especially if there were no labels with suggestions), so ambassadors helped provide a starting point.

Ambassadors were crucial to resolving these issues of uncertainty, which highlights the importance of having staff on the floor who are well trained in whatever technology the museum is trying to promote.

Who are the users and non-users?

Based on ASK Ambassador notes, I was able to paint a small picture of a few user/non-user personas. In the chart below, personas highlighted in red indicate visitors who are not likely to use the app, yellow indicates visitors who could be convinced to use it, and green indicates visitors who are very enthusiastic about using it.

A chart of user and non-user personas.

The data from Sara’s Pratt class, which is currently conducting user surveys via the ASK app, will help refine this in the near future (more in a future post from Sara).

Where are people asking questions?

As mentioned in my previous post, locations proved a huge challenge to identify. To start, I had to manually create this table by filtering the total chats metric by location and entering the numbers in.

A table of chats by location.

As you can see, the top gallery locations are American Identities (now American Art), Egyptian, the Lobby, and European. On top of the fact that locations tracked by the dashboard do not account for visitors who use the text option, there are other glaring issues with taking this data at face value. The first is that, from going through chats and the kinds of questions asked, it became clear that areas without descriptive labels were often areas where visitors asked a lot of questions. Additionally, the most asked-about artwork by far is The Dinner Party, which is located in the Elizabeth A. Sackler Center for Feminist Art (EASCFA). Despite this, EASCFA is not one of the top gallery locations, because The Dinner Party is tracked separately and is not included in the EASCFA data. Nuances like these could provide a more complete picture of app use and visitor interest but are not reflected in this data. If the question is where the app is being used in the museum, this data can provide a general overview. However, if the question is more about what people are asking about, this data does not offer any valuable insights.

What are people asking about?

Next, I wanted to know what objects people were asking about and the kinds of questions they asked. The dashboard actually made the start of this easy, since it offers some of that information. I was able to pull the most popular objects and popular snippets of all time, as well as by year, to see what kinds of variations there were. The problem with this was that a handful of objects are so popular (such as The Dinner Party or the Assyrian Reliefs) that they thwarted my ability to learn anything new about user behavior. Through anecdotal information (having to answer the same questions about the same objects over four years) and the dashboard metrics, we already know the most asked-about objects and most popular questions. However, over 2,000 objects have been asked about, and the dashboard only pulls the top 100 works. I started to wonder: what about the objects that don’t make that list?

Mining Data With Limited Tools

In my last post, I laid out some of the challenges of working with the current metrics dashboard and the data exporting process for ASK. Despite the limitations of available metrics, edited snippets, and overwhelming amounts of data, I was able to work my way around many of these issues in some way (a lot of Google Docs and spreadsheets). However, each solution then presented more challenges.

In regard to the limited dashboard metrics, there were some easy (and some not-so-easy) fixes. To get a better sense of where people were using the app, I was able to look at chat stats by the various locations where beacons fired. I compiled these numbers, by year as well as all-time, into Google Sheets and could make simple visualizations that way.

Google Sheets allows for basic data viz, but the data has to be updated manually, which is a time-consuming process.

However, with this solution came new (currently unsolved) challenges. For example, there is no way to update this data in real time and, as you can see, my chart reflects data from January 29, 2019. Additionally, locations are only attributed to chats if visitors use the app. If they use the texting feature—about 70% of ASK usage over the last year—there is no location data available.

For other metrics and visualizations, I was able to generate chat stats and export them as a CSV. This gave me a spreadsheet of chat ID, start time, date, chat duration, host name(s), exhibition location, and device type. With some hefty data manipulation, I was able to parse out device type variations and chat locations, and to show chats and records by year.
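For anyone repeating this, the manipulation is a few lines of pandas once the CSV is in hand. This is a sketch; the column names are assumptions based on the fields listed above:

```python
import pandas as pd

chats = pd.read_csv("chat_stats.csv", parse_dates=["start_time"])

chats["year"] = chats["start_time"].dt.year

# Chats per year, split by device type (app vs. SMS variations).
print(chats.groupby(["year", "device_type"]).size().unstack(fill_value=0))

# Chats per exhibition location, most active first.
print(chats["exhibition_location"].value_counts())
```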

Basic data viz allows us to see overall trends for further exploration, even if the data can’t be updated in real time.

Again we see limitations with this solution: it is another instance of manual data manipulation leading to dated visualizations (October 2018 in this instance). Additionally, there are huge changes and spikes in 2018, both for SMS (text messaging) and overall chats recorded. These spikes likely reflect the popularity of Bowie Trivia; however, there is no easy way to remove those records from the exported CSV file.

For some challenges there were no simple solutions. Edited snippet content became a limitation that just had to be accepted. The most efficient way to analyze what visitors were asking, by collection, was through snippets because, unlike chats, snippets are tagged by collection. As mentioned earlier, chat locations are based on beacons firing through the app, so a chat could have an attributed location that doesn’t reflect where the object being asked about is actually located. For example, a visitor could take a picture of an object on view in the Ancient Egyptian Art galleries but send the question from the European galleries; the chat would then be marked as occurring in European despite actually being about an Egyptian work. However, when the ASK team processes snippets, they tag the artworks with their associated collection, making it easier to look specifically at what people were asking based on the type of object. This brings me to my next challenge: combining and exporting data.

In order to read through the different snippets associated with each collection, I had to sort the dashboard metric ‘How many snippets have been created (via collections)?’ by each collection and export the snippets four times by snippet editorial status (draft, team approved, curator approved internal, and web approved). Through this method I was able to analyze the various questions asked by collection and draw out themes (which I will come back to later).
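Recombining the four exports is straightforward. Here is a sketch, assuming each export is a CSV with a collection column (the file names and columns are assumptions):

```python
import pandas as pd

STATUSES = ["draft", "team_approved", "curator_approved_internal", "web_approved"]

# One export per editorial status, stacked back into a single table.
frames = [pd.read_csv(f"snippets_{s}.csv").assign(status=s) for s in STATUSES]
snippets = pd.concat(frames, ignore_index=True)

# Questions asked per collection, regardless of editorial status.
print(snippets["collection"].value_counts())
```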

Of course, within this framework, I had to be selective in what I attempted to do with the available data, since there is a great deal of content and my time on the project is limited. In my next post, I’ll present some of the initial findings that resulted from these workarounds.
