Sydney Stewart – BKM TECH
https://www.brooklynmuseum.org/community/blogosphere
Technology blog of the Brooklyn Museum

Labels Provide an Entry Point for ASK (We Think)
/2019/05/09/labels-provide-an-entry-point-for-ask-we-think/ – Thu, 09 May 2019

In my last post I detailed how I knitted together thematic connections across different collections and what effect in-gallery labels have on object engagement, but I wasn't yet able to get any insight into what users' full conversations looked like. We met with the Tech team to talk through the potential needs and issues that exist in order to effectively analyze chat data. Since one of my main goals was to determine a bit more about how visitors are using the in-gallery ASK labels during their visit, we decided that a search function would be most useful, similar to the existing search for snippets. Our incredible web developer Jacki Williams implemented the chat search into the dashboard so I could pull complete visitor chats based on what I was looking for.

Jacki modeled the chat search function after the snippet search, which allows for three ways to access the information.

The chat search function has three possible ways to search through chats. The (seemingly) easiest way is to search for a particular object via its accession number. However, not all chats have their objects tagged with accession numbers. That is why being able to search words or phrases in the chats (the second option) is so useful. I might not be able to search the accession number, 83.84, and pull up all of the chats for it. However, I can search "East River View with Brooklyn Bridge" or "Yvonne Jacquette" to pull up more chats where individuals were asking about this particular work. There is a third option to search by Chat ID, which is useful if I need to reference a particular chat.
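
To make the three modes concrete, here is a minimal sketch (in Python, not the dashboard's actual code) of the kind of filtering the chat search performs. The chat structure and field names ("chat_id", "accession_numbers", "messages", "text") are assumptions for illustration.

```python
# Illustrative sketch only, not the dashboard's actual code: the chat
# structure and field names here are assumptions.
from typing import Dict, List, Optional

def search_chats(chats: List[Dict], accession: Optional[str] = None,
                 text: Optional[str] = None, chat_id: Optional[str] = None) -> List[Dict]:
    """Return chats matching an accession number, a text phrase, or a chat ID."""
    results = []
    for chat in chats:
        if chat_id is not None and chat.get("chat_id") == chat_id:
            results.append(chat)
        elif accession is not None and accession in chat.get("accession_numbers", []):
            results.append(chat)
        elif text is not None and any(
            text.lower() in message.get("text", "").lower()
            for message in chat.get("messages", [])
        ):
            results.append(chat)
    return results

# e.g. search_chats(chats, accession="83.84")
#      search_chats(chats, text="East River View with Brooklyn Bridge")
```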

Jacki also created a function to filter the search results. This has been especially useful as of late. A significant number of visitors recently have been using ASK for Frida Kahlo themed tours or quote hunts. If I want to look at an object that was incorporated in the Frida Kahlo activities, I can just filter out the past few months of Frida Kahlo chats in my results rather than manually combing through to remove them.

There is also the option to export desired chats from the results into a JSON file. The JSON file export is super useful because the file format allows me to read the full chat conversations and is a great record of what I have already analyzed. This is a huge step up from copy/pasting into Google docs and will likely have future benefits that the next Fellow or researcher can explore!
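
As a rough illustration of why the export helps, here is a small sketch of reading an exported file and printing full conversations. The filename and the JSON keys ("chat_id", "messages", "sender", "text") are my assumptions, not the dashboard's documented schema.

```python
# Minimal sketch, assuming the export is a JSON array of chat objects; the
# filename and keys ("chat_id", "messages", "sender", "text") are my
# assumptions, not the dashboard's documented schema.
import json

with open("chat_export.json", encoding="utf-8") as f:
    chats = json.load(f)

# Print each full conversation so it can be read (and archived) in one pass,
# instead of copy/pasting into Google Docs.
for chat in chats:
    print(f"Chat {chat.get('chat_id', '?')}")
    for message in chat.get("messages", []):
        print(f"  {message.get('sender', '?')}: {message.get('text', '')}")
    print()
```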

Searching by text via the chat search function was often the best way to find the ASK labels. The search text is even highlighted in the results, making it even easier to do a quick scan for information.

Additionally, the ability to search words or phrases was useful for searching the in-gallery ASK label text. The object 83.84, East River View with Brooklyn Bridge, has an in-gallery label that says: "How did the artist get this view? Download our app or text … to learn more from our experts." I used the search to pull up any chats that contain the label text to see if visitors were using the language verbatim and then what else they were looking at.

I used the various search features to start compiling a table organized by object, which looks at the following (a rough sketch of this coding appears after the list):

  • Did the visitor use the label question?
  • Where in the trajectory of their visit (beginning, middle, end) did they ask about this object in particular?
  • Did they ask other questions about this object, and how many?
  • Did they ask about other objects? If so, how many and which objects?
  • Did they specifically use other in-gallery label questions? If so, how many and which objects? (using the question provided in the prompt or a slight variation)
  • Did they ask any questions about other objects that have ASK labels? If so, how many and which objects? (not using the label prompts specifically, but rather asking a question in general about an object that does have a label)
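
Here is that rough sketch: a simplified pass over exported chats that records a few of the fields above for one object. The chat structure, the use of tag order as a proxy for visit order, and the CSV output are all assumptions made for illustration, not the actual workflow.

```python
# Rough, simplified sketch of the per-object coding; the chat structure, the
# use of tag order as a proxy for visit order, and the label text handling
# are all assumptions made for illustration.
import csv

chats: list = []   # chats loaded from the JSON export (see the earlier sketch)

LABEL_QUESTION = "How did the artist get this view?"   # 83.84's in-gallery prompt
TARGET_ACCESSION = "83.84"

def position_in_visit(index: int, total: int) -> str:
    """Roughly bucket where in the chat the target object came up."""
    if total <= 1:
        return "only object"
    fraction = index / (total - 1)
    return "beginning" if fraction < 0.34 else ("middle" if fraction < 0.67 else "end")

rows = []
for chat in chats:
    objects = chat.get("accession_numbers", [])
    if TARGET_ACCESSION not in objects:
        continue
    texts = [m.get("text", "") for m in chat.get("messages", [])]
    rows.append({
        "chat_id": chat.get("chat_id"),
        "used_label_question": any(LABEL_QUESTION.lower() in t.lower() for t in texts),
        "position": position_in_visit(objects.index(TARGET_ACCESSION), len(objects)),
        "other_objects_asked_about": len(objects) - 1,
    })

with open("object_83_84_coding.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=[
        "chat_id", "used_label_question", "position", "other_objects_asked_about"])
    writer.writeheader()
    writer.writerows(rows)
```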

This process took a lot longer than I anticipated for one object alone. Unfortunately there was no way to streamline gathering this information from each chat. The most time-consuming aspect was having to look up accession numbers and/or titles of different works that visitors asked about but which did not get tagged in the chat, or where the conversation did not explicitly include the title. This highlights an overarching issue with the chat data: not everything has been tagged consistently over time, especially in regards to the objects asked about.

Given how long it took me to go through just one object/in-gallery label, and with time running out on my fellowship, I wouldn't have time to go through as many objects and chats as I would have liked. I decided to focus on a few objects with in-gallery label questions, a few with generic in-gallery labels, and a few popular objects with no labels at all. The downside is that the more popular or asked-about an object is, the longer it takes to analyze, further limiting how many I could get through.

From my very brief overview of a few objects and roughly 200 chats, only one visitor used the in-gallery labels more than once. This visitor used two question labels but also asked about two objects that did not have in-gallery labels. Additionally, from what I did look at, visitors who did use the in-gallery labels most often used them for the first object that they asked about. My hypothesis is that visitors are using the in-gallery labels to get an introduction to the app and then proceed to ask questions about objects they are interested in. This is great news, since that was the original goal of these labels: to get people using the app. More chats and objects will definitely need to be looked at to confirm this, but the foundation for going through chat data has now been established.

Over the past year I have learned a lot not just about the ASK app and its users but also that big learnings can still come from small tools. I’m going to miss working with this challenging but incredibly interesting data and I wish the best of luck to next year’s Fellow!

What encourages people to ASK about certain objects?
/2019/05/02/what-encourages-people-to-ask-about-certain-objects/ – Thu, 02 May 2019

While I wanted to learn more about visitors' complete interactions through the app, without the ability to systematically dive into chats, I chose to focus on another aspect of user behavior: in-gallery ASK labels. The first iteration of ASK labels consisted of question prompts, which were later switched to generic text about the app (e.g. "ask us for more info" or "ask us about what you see"), and are now back to questions (and a few hopefully provocative statements). The switch back to questions was due to anecdotal evidence that the generic prompts weren't motivating people to use the app. To check this assumption, I looked at user behavior based on the frequency of questions asked (determined via snippet counts) and whether that changes for objects with or without ASK labels. Spoiler alert: it does.

To start, I needed to complete an audit of which objects currently have or have ever had ASK-related labels. This was more challenging than it sounds, because over the years the record of which labels actually made it into the galleries was not always kept up to date. I was able to see which objects currently have ASK labels simply by walking through the museum and making notes. For labels that are no longer up, there were two earlier rounds that had been tagged in the dashboard (a first iteration of questions and a later iteration of generic labels).

The prompt for this ASK label reads: Curious about how turquoise was used in Tibetan medicine? Download our app to ask an expert.

The audit of existing labels could be a bit more elaborate: I was able to look at the text of each label and its snippet counts, as well as compare the label question with the questions that visitors actually asked. I would have also liked to look at the location of the labels (wall/case/pedestal etc.) to see how that influences behavior, but time constraints prevented me from pursuing this avenue of inquiry.

Through this comparison, I was able to gain a little insight into how the labels affect user behavior. Based on snippet counts, objects with ASK labels have on average 16.19 snippets, while objects with no labels have, on average, 8.28 snippets. That means objects with current ASK labels see roughly twice the engagement of objects without.

To take it a step further, I broke down snippet counts by objects that have ever had a label (according to the dashboard categories) and by the type of label. Objects that have or had generic labels have an average of 14.15 snippets. Objects that have or had question labels have an average of 19.90 snippets. Objects without any type of ASK label have an average of 5.96 snippets. This suggests that the question prompts are the most effective method to get users to engage with objects and the app.

I will add that there was one object that was not considered in these counts. The Dinner Party has over 400 snippets and is a project of analysis in itself. It is asked about so much more frequently than the other objects that it was skewing the averages, so I temporarily removed it from the dataset.
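
For anyone who wants to reproduce this comparison programmatically, a back-of-the-envelope sketch of the group averages might look like the following. The snippet_counts mapping and the outlier placeholder are hypothetical stand-ins for the dashboard export and contain no real data.

```python
# Hypothetical sketch: snippet_counts maps accession number -> (label type,
# snippet count), assembled by hand from the dashboard; no real data included.
from collections import defaultdict
from statistics import mean

snippet_counts = {
    # "accession number": ("question" | "generic" | "none", snippet count)
}

OUTLIER = "dinner-party-accession"   # placeholder: The Dinner Party is excluded as an outlier

groups = defaultdict(list)
for accession, (label_type, count) in snippet_counts.items():
    if accession == OUTLIER:
        continue
    groups[label_type].append(count)

for label_type, counts in groups.items():
    print(f"{label_type}: {mean(counts):.2f} snippets on average over {len(counts)} objects")
```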

Since I had every object asked about (over 2,000 different artworks) and their snippet counts at my disposal, I also wanted to get back to some of the less frequently asked-about objects. I put together a spreadsheet of all the objects organized by their total snippets. From this I learned that objects with more than 50 snippets make up only 2% of all asked-about objects, and objects with 20 or more snippets make up 6%. The majority of objects have fewer than 20 questions asked about them.
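
The distribution figures above come from that spreadsheet; a quick sketch of the same calculation, again over a hypothetical mapping of object to snippet count:

```python
# Quick sketch of the distribution figures; snippets_per_object is a
# hypothetical mapping of accession number -> total snippet count.
snippets_per_object = {}   # filled from the dashboard export

total = len(snippets_per_object)
if total:
    over_50 = sum(1 for n in snippets_per_object.values() if n > 50)
    at_least_20 = sum(1 for n in snippets_per_object.values() if n >= 20)
    print(f"{over_50 / total:.0%} of asked-about objects have more than 50 snippets")
    print(f"{at_least_20 / total:.0%} have 20 or more snippets")
```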

What was it about the 2% of objects that made them so appealing, and what makes visitors want to ask about less "popular" objects? Other than the ASK labels, I wanted to see if there was anything glaringly obvious about the popular or less popular objects. I looked individually at a selection of objects with 50 or more snippets and recorded various qualities about them. I did the same with a selection of objects with only 2 snippets associated with them. Here's what I found…

What makes people ask about objects?

  • Is larger in size
  • Has religious connotations
  • Is on view for an extended length of time
  • Is made by an extremely well-known artist (i.e. Rodin) or depicts a recognizable figure (i.e. Jesus)
  • Is a mummy

Why might people ask fewer questions about objects?

  • Is smaller in size
  • Is a functional object

There is still much to be answered about the second question. However, since 98% of objects have fewer than 50 snippets, there just isn't enough time to look at each object's qualities and questions to determine more. Plus, I wanted to dedicate some remaining time to the full visitor journey with the app, but I still had no way to search through chats.

Stay tuned for my final blog post as I detail how this was resolved and what more I learned.

 

What kinds of questions do users ASK us about art?
/2019/04/26/what-kinds-of-questions-do-users-ask-us-about-art/ – Fri, 26 Apr 2019

I ended my last post with a brief exploration of what people are asking about via ASK. I was particularly interested in going beyond the top 100 most-asked-about works that the dashboard metrics pull. Based on the information the top 100 lists gave me and my desire to learn whether there are any similarities in the questions asked across different collections, I decided to break down the dashboard analysis further. I looked again at the top 100 snippets, but broken out by collection type. This was the key to finding similarities across collections. From each of these collection-specific top 100 lists, I coded the questions based on what they were generally about. Here's an example of what part of the Asian Art collection chart looked like:

Next came standardizing the themes so I could code them and compare them across collections.

This chart serves as basic data viz for themes across collections. Note that these themes are my interpretation of the questions visitors asked and could vary based on how another researcher codes the questions.

Several universal themes across collections came out of this analysis. I personally find 'damages/missing parts' one of the most fascinating findings: if a work of art has something that appears to be missing from it, whether intentional or not, visitors will likely ask about it. Other themes across most of the different collection areas include:

  • Creation techniques/materials
  • Symbolism
  • Purpose/function
  • Curatorial decisions
  • Significance/meaning

The coding system of themes, definitions, and collection areas. Note ECANEA stands for Egyptian, Classical and Ancient Near Eastern Art.
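
As a sketch of how coded themes like these can be tallied and compared across collections, the snippet below works over a list of (collection, theme) pairs. The coded_questions list stands in for my manual coding spreadsheet and the entries shown are placeholders, not real data.

```python
# Hypothetical example: coded_questions stands in for the manual coding
# spreadsheet as (collection, theme) pairs; the entries shown are placeholders.
from collections import Counter, defaultdict

coded_questions = [
    # ("Asian Art", "creation techniques/materials"),
    # ("ECANEA", "damages/missing parts"),
]

themes_by_collection = defaultdict(Counter)
for collection, theme in coded_questions:
    themes_by_collection[collection][theme] += 1

# Print the five most common themes per collection for a quick comparison.
for collection, counter in themes_by_collection.items():
    top = ", ".join(f"{theme} ({n})" for theme, n in counter.most_common(5))
    print(f"{collection}: {top}")
```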

While this thematic breakdown does provide interesting insight into what visitors want to know, I still wanted to delve deeper into user behavior with the app, especially in regards to the complete trajectory of a visit. These themes are based on only a snippet of an entire conversation. What could we learn if we looked at that entire conversation? Unfortunately, the dashboard won't export entire conversations yet, so I had to pause this line of inquiry. But I picked up another thread and began to follow it. More on that next week!

 

Initial Insights from ASK Data
/2019/04/11/initial-insights-from-ask-data/ – Thu, 11 Apr 2019

During my first semester as the Pratt Visitor Experience & Engagement Fellow I was able to learn a significant amount about ASK user behavior—despite the limitations of the data sets—and answer some of the following questions.

What does pre-download visitor behavior look like?

The notes from ASK Ambassadors provided critical insight into behavior around the app, especially before downloading. Two main trends are worth noting:

Downloads occur most often and most easily at the start of a visit. Ambassadors most often had success approaching people and creating desire to use the app earlier on in the visit. Favorite spots include near the directory in the lobby (just past the admissions point) and the first-floor exhibitions and elevators.

Ambassadors help lower the barrier to entry. ASK Ambassadors frequently encountered visitors who struggled with how to use the app due to various technology issues (e.g. location services or Bluetooth not being turned on). There were instances of individuals who had the app downloaded but were unable to get it to work due to some setting or password issue on their phone. An ongoing challenge with ASK is that visitors aren't always sure what questions to ask (especially if there were no labels with suggestions), so Ambassadors helped provide a starting point.

Ambassadors were crucial to solving these issues of uncertainty, and they highlight the importance of having staff on the floor who are well trained in whatever technology the museum is trying to promote.

Who are the users and non-users?

Based on ASK Ambassador notes, I was able to paint a small picture of a few user/non-user personas. In the chart below, personas highlighted in red indicate visitors not likely to use the app, those highlighted in yellow indicate visitors who could be convinced to use the app, and those highlighted in green indicate visitors who are very enthusiastic about using the app.

User Personas for Blog

The data from Sara's Pratt class, which is currently conducting user surveys via the ASK app, will be able to refine this in the near future (more in a future post from Sara).

Where are people asking questions?

As mentioned in my previous post, locations proved a huge challenge to identify. To start, I had to create this table by filtering the total chats metric by location and entering the numbers in by hand.

As you can see, the top gallery locations are American Identities (now American Art), Egyptian, the Lobby, and European. On top of the fact that locations tracked by the dashboard do not account for visitors who use the text option, there are other glaring issues with taking this data at face value. The first is that, from going through chats and the kinds of questions asked, it became clear that areas without descriptive labels were often areas where visitors asked a lot of questions. Additionally, by far the most asked-about artwork is The Dinner Party, which is located in the Elizabeth A. Sackler Center for Feminist Art (EASCFA). Despite this, EASCFA is not one of the top gallery locations because The Dinner Party is tracked separately and is not included in the EASCFA data. Nuances like these can provide a more complete picture of app use and visitor interest, but are not reflected in this data. If the question is where the app is being used in the museum, this data can provide a general overview. However, if the question is more about what people are asking about, this data does not offer any valuable insights.

What are people asking about?

Next I wanted to know what objects people were asking about and the kinds of questions they asked. The dashboard actually made the start of this easy since it offers some of that information. I was able to pull the most popular objects and popular snippets of all time as well as by year to see what kinds of variations there were. The problem with this was that there are a handful of objects that are so popular (such as The Dinner Party or the Assyrian Reliefs) that they thwarted my ability to learn anything new about user behavior. Through anecdotal information (having to answer the same questions about the same objects over four years) and the dashboard metrics, we already know the most asked-about objects and most popular questions. However, over 2,000 objects have been asked about, and the dashboard only pulls the top 100 works. I started to wonder: what about the objects that don't make that list?

Mining Data With Limited Tools
/2019/04/04/mining-data-with-limited-tools/ – Thu, 04 Apr 2019

In my last post, I laid out some of the challenges of working with the current metrics dashboard and the data exporting process for ASK. Despite the limitations of available metrics, edited snippets, and overwhelming amounts of data, I was able to work my way around many of these issues (with a lot of Google docs and spreadsheets). However, each solution then presented more challenges.

In regards to limited dashboard metrics, there were some easy (and some not so easy) fixes. To get a better sense of where people were using the app, I was able to look at chat stats by the various locations where beacons fired. I compiled these numbers, by year as well as all-time, into Google Sheets and could make simple visualizations that way.

Google sheets allows for basic data viz, but the data has to be updated manually, which is a time-consuming process.

However, with this solution came new (currently unsolved) challenges. For example, there is no way to update this data in real time and, as you can see, my chart reflects data from January 29, 2019. Additionally, locations are only attributed to chats if visitors use the app. If they use the texting feature—about 70% of ASK usage for the last year—there is no location data available.

For other metrics and visualizations, I was able to generate chat stats and export them as a CSV. This gave me a spreadsheet of chat ID, start time, date, chat duration, host name(s), exhibition location, and device type. With some hefty data manipulation, I was able to parse out device type variations and chat locations, and to show chats and records by year.
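
A sketch of that kind of reshaping, using pandas; the filename and column names below are assumptions about the CSV export, not its documented format.

```python
# Sketch only: the CSV filename and column names below are assumptions about
# the export, not its documented format.
import pandas as pd

chats = pd.read_csv("chat_stats.csv", parse_dates=["date"])

chats["year"] = chats["date"].dt.year

# Chats per year, and per year broken out by device type (e.g. iOS / Android / SMS).
chats_by_year = chats.groupby("year").size()
chats_by_device = chats.groupby(["year", "device_type"]).size().unstack(fill_value=0)

# Chats by exhibition location, for the location chart.
chats_by_location = chats["exhibition_location"].value_counts()

print(chats_by_year)
print(chats_by_device)
print(chats_by_location.head(10))
```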

Basic data viz allows us to see overall trends for further exploration, even if the data can't be updated in real time.

Again we see limitations with this solution, which is another instance of manual data manipulation leading to dated visualizations (October 2018 in this instance). Additionally, there are huge changes and spikes in 2018, both for SMS (text-messaging) and overall chats recorded. These spikes are likely reflective of the popularity of Bowie Trivia; however, there is no easy way to remove those records from the exported CSV file.

For some challenges there were no simple solutions. Edited snippet content became a challenge that just had to be accepted as an overarching limitation. The most efficient way to analyze what visitors were asking, based on collection, was through snippets because, unlike chats, snippets are tagged by collection. As mentioned earlier, chat locations are based on beacons firing through the app, so a chat could have an attributed location that does not necessarily reflect where the object being asked about is actually located. For example, a visitor could take a picture of an object on view in the Ancient Egyptian Art galleries but send the question from the European galleries; the chat would therefore be marked as occurring in European despite actually being about an Egyptian work. However, when the ASK team processes snippets, they tag the artworks with their associated collection, making it easier to look specifically at what people were asking based on the type of object. This also brings me to my next challenge of combining and exporting data.

In order to read through the different snippets associated with collections, I had to sort the dashboard metric ‘How many snippets have been created (via collections)?’ by each collection and export the snippets four times by snippet editorial status (draft, team approved, curator approved internal, and web approved). Through this method I was able to analyze the various questions asked by collection and draw out themes (which I will come back to later).
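
A small sketch of stitching those per-collection, per-status exports back together; the file-naming pattern and CSV format here are assumptions for illustration.

```python
# Sketch of recombining the exports; the snippets_<collection>_<status>.csv
# naming pattern and CSV format are assumptions for illustration.
import glob
import pandas as pd

frames = []
for path in glob.glob("snippets_*_*.csv"):        # e.g. snippets_asian-art_draft.csv
    parts = path.removesuffix(".csv").split("_")
    collection, status = "_".join(parts[1:-1]), parts[-1]
    df = pd.read_csv(path)
    df["collection"] = collection
    df["editorial_status"] = status
    frames.append(df)

snippets = pd.concat(frames, ignore_index=True)

# With everything in one frame, questions can be filtered by collection, e.g.:
asian_art = snippets[snippets["collection"] == "asian-art"]
```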

Of course, within this framework, I had to be selective in what I attempted to do with the available data, since there is a great deal of content and my time on the project is limited. In my next post, I'll present some of the initial findings that resulted from these workarounds.

Diving into ASK Data
/2019/03/28/diving-into-ask-data/ – Thu, 28 Mar 2019

As the Pratt Visitor Experience & Engagement Fellow, I was tasked with conducting a deep dive into ASK-related data. There are several research questions that the team was interested in using the data to answer, centered on visitor behavior related to ASK as well as thoughts and attitudes towards the app and experience. In my limited time during the academic year I have focused on the following types of inquiries:

  • What kinds of questions do people ask?
  • Do questions/comments differ based on nature of artwork (e.g. material culture v. fine art)?
  • Where in the Museum do people tend to use ASK?
  • How does ASK fit into the gallery experience and overall interpretation options?

My goals for this project are two-fold. The first is to figure out what can and cannot be learned from the existing dashboard (the interface the team uses to answer questions) and what shortcomings might need to be addressed for future research. The second is to gather and analyze as much information about ASK users related to the above questions as possible within the time constraints of an academic-year fellowship.

When I started, my initial focus was on the readily-available data. I started out with the basic metrics accessible in the dashboard to get a feel for what was happening with visitors and the app. These included how many snippets (question and answer pairs) have been created, which objects are most popular, and what the 100 most popular snippets are.

The first notes taken by one of our stellar ASK Ambassadors, Alex.

Another useful source of data was the ASK Ambassador notes. Since the beginning of the Ambassador program in February 2017, the Ambassadors have been making daily notes about their experiences on the floor—what pitches worked and didn't, people's reactions, anecdotes, and observations. I combed through all of the 2017 and 2018 Ambassador notes (over 500 days' worth!) to glean insight into how people received the app and who was or was not receptive. This kind of insight was useful when paired with the numbers and helped create a better picture, especially since the behavior of those who don't use the app is hard to understand from app metrics alone.

Challenges (to name but a few)

There have been many unexpected challenges in conducting data analysis with a dashboard not initially built for in-depth analysis. To start, the metrics available in the dashboard were limiting. For instance, the metrics (how many snippets have been created, what objects are most popular, what the 100 most popular snippets are) do not help me get at some basic understandings such as chat location or usage variations over time.

The kinds of questions people ask were the most difficult to parse out, due to several limitations. When the ASK team processes chats into snippets, they often edit to smooth language or make the content clearer. Snippets were initially the easiest and the only way to look at chat content. This presents two main issues. The first is that I can't see what a visitor's full series of questions might look like, because their journey is broken apart and only searchable through the pieces (I equate this to trying to figure out the image of a puzzle with just the outside pieces). The second issue is that any kind of sentiment analysis is less reliable. Sentiment analysis could provide insight into users' attitudes, opinions, and emotions. However, edited text in snippets potentially changes the perception of what the user was thinking or feeling.

During snippet creation, the team creates question and answer pairs and tags the snippet with accession numbers and key words.

Another challenge was the ability both to export data and to combine different kinds of data from the dashboard. There are some metrics that you can export as a CSV, some as a Google doc, and others with no exportability at all. Additionally, the variability between location and collection filters makes it difficult to understand what the data is actually representing. For instance, some metrics can be filtered by exhibition or gallery while others are filtered by collection.

The ultimate challenge, however, is the sheer amount of snippet content to go through. At the time of writing this post, there have been 19,409 chats and 2,401 objects asked about. It would be physically impossible to go through all of this data as someone coming in for only an academic year, one day a week.

Luckily, I was able to use creative problem solving (a lot of Google docs and spreadsheets) to work my way around many of these challenges in some way. To learn more about my work-arounds, check out the second half of this post next week.
