David Huerta – BKM TECH: Technology blog of the Brooklyn Museum

Preparing for a Post-Password Future with Pokemon Cards (January 16, 2015)

Every year, a gathering of hackers and information security professionals convenes in Washington, DC to discuss how awful and broken the state of computer security is. Passwords are a perennial problem area in almost every security architecture, and here in the lobby of the Dupont Circle Hilton, tales of default or weak passwords are swapped over whisky or Red Bull. Strong passwords are an ideal that works when they are remembered or managed securely, but generally they aren't. When that happens, no matter what font size or weight your password reset link is, you will get emails asking for forgotten passwords to be reset.

During GO, we experienced this en masse as an explosion of human nature flew into our inboxes at the height of our scramble to scale our infrastructure and keep the website from collapsing under heavy traffic. Resetting passwords only to have them forgotten again was a problem, and a lesson to learn from before we'd have to address it again with the ASK dashboard, which is part of the Bloomberg Connects project. Without the support staff resources to handle yet another password for non-technologist staff to remember, we'd have to come up with an alternative to text passwords. There have been a few attempts at this with things like biometrics or swiping gestures, but few that would run in a standard web browser.

The most compelling use case involved memorizing combinations of images instead of text characters. Dr. Ziming Zhao's research on the security of picture gesture authentication at Arizona State University's SEFCOM lab pointed me to a research paper (PDF only) from a team of researchers at Carleton University on the evolution of graphical passwords over the past dozen years, which compared a range of graphical password architectures that we could base a design on. Before writing a single line of code, though, we wanted to test the viability of memorizing image combinations first-hand with our own staff to see if that fundamental assumption would work with our user base.

Pikachu, I choose you!

Using a series of baseball, animal fact, World of Warcraft and, eventually, Pokemon cards, we constructed a portable poster of cards which our IT Liaison, Tim, would bring to a mix of our non-technologist staff, asking each person to choose one card from each category. A sampling of fifteen people had their card choices recorded, and a week later, Tim would bring the cards back to each person and ask them to remember their choice from the previous week. Then a month. Then three months. Remarkably, every person at every timeframe remembered their card choice.

We wanted to leverage our collection of art photos for this (we didn't want to spend money on licensing Pokemon images) and chose the most memory-friendly photos in our collection from four different categories: African art, Asian art, Egyptian art, and American art. With some pointers from local neurologist/photographer Lauren E. Wool about how faces are remembered in the brain, we aimed to pick out images that would contain representations of faces, as well as images of particularly distinctive artworks.

ASK Dashboard Login Screen

Having more confidence in the idea, James and I evaluated the security issues around a passphrase made of four items. The total number of permutations our system offered was higher than that of a PIN code, since we were using 12 photos in each of 4 categories rather than 10 digits in a string of four characters (12^4 = 20,736 combinations versus 10^4 = 10,000). However, that was still not nearly as strong as an ideal text password (which doesn't tend to survive human memory well). PIN codes themselves, weak as they are, allow someone to take money out of your bank account, but devices like ATMs balance that risk by adding other restrictions that keep certain threat models out of the picture, like those involving the internet. In a similar sense, we have the dashboard restricted to use within the museum's internal network, where network connections have assigned IP addresses associated with a known machine and its owner. People connecting from the outside can be issued a one-time pad token to gain access to the dashboard, making it a two-factor authentication system when used over the public internet.
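To make the network restriction concrete, here's a minimal PHP sketch of that kind of gate. It is not our actual middleware; the CIDR range and the token-checking helper are placeholders for illustration.

```php
<?php
// Hypothetical illustration only: restrict dashboard access to the internal
// network, and require a one-time token as a second factor from anywhere else.

const INTERNAL_NETWORK = '10.0.0.0/16'; // placeholder range, not our real one

// Returns true if $ip falls inside the given CIDR block (IPv4 only).
function ipInCidr(string $ip, string $cidr): bool
{
    [$subnet, $bits] = explode('/', $cidr);
    $mask = -1 << (32 - (int) $bits);
    return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
}

// Stub for checking a previously issued one-time token.
function oneTimeTokenIsValid(?string $token): bool
{
    return false; // a real implementation would look the token up and expire it
}

function requestIsAllowed(string $remoteIp, ?string $token): bool
{
    if (ipInCidr($remoteIp, INTERNAL_NETWORK)) {
        return true; // inside the museum: the picture password alone applies
    }
    // Outside the building, the token acts as the second factor.
    return oneTimeTokenIsValid($token);
}

var_dump(requestIsAllowed('10.0.12.34', null));      // bool(true)
var_dump(requestIsAllowed('203.0.113.7', 'abc123')); // bool(false) with the stub above
```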

Other security features are also available. Copying a page from Apple's iOS, we can trigger a temporary timeout after a certain number of incorrect attempts. We can also intentionally slow down our API's response to the authentication call by a number of milliseconds that wouldn't be too noticeable to a human logging in, but would significantly slow down password-cracking software—especially since the full combination of chosen pictures is sent over all at once, not one category at a time as the UI might suggest.
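As a rough sketch of how those two throttles could fit together in PHP (the table layout, column names, and picture-verification helper below are hypothetical, not the dashboard's actual code):

```php
<?php
// Hypothetical sketch of login throttling: a short lockout after repeated
// failures, plus a small constant delay on every authentication attempt.

const MAX_ATTEMPTS      = 5;    // failed tries before a temporary lockout
const LOCKOUT_SECONDS   = 300;  // how long the lockout lasts
const RESPONSE_DELAY_MS = 250;  // barely noticeable to a person, costly to a cracker

// The four chosen picture IDs arrive together and are checked as one credential.
function verifyPictureCombination(string $user, array $choices, PDO $db): bool
{
    $stmt = $db->prepare('SELECT picture_hash FROM users WHERE username = ?');
    $stmt->execute([$user]);
    $hash = $stmt->fetchColumn();
    return $hash !== false && password_verify(implode('|', $choices), $hash);
}

function attemptLogin(string $user, array $choices, PDO $db): bool
{
    $stmt = $db->prepare('SELECT failed_attempts, locked_until FROM users WHERE username = ?');
    $stmt->execute([$user]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC) ?: ['failed_attempts' => 0, 'locked_until' => null];

    if ($row['locked_until'] !== null && time() < (int) $row['locked_until']) {
        return false; // still inside the lockout window
    }

    // Pay the same small delay on every attempt, success or failure.
    usleep(RESPONSE_DELAY_MS * 1000);

    if (verifyPictureCombination($user, $choices, $db)) {
        $db->prepare('UPDATE users SET failed_attempts = 0, locked_until = NULL WHERE username = ?')
           ->execute([$user]);
        return true;
    }

    $failed = (int) $row['failed_attempts'] + 1;
    $lockedUntil = $failed >= MAX_ATTEMPTS ? time() + LOCKOUT_SECONDS : null;
    $db->prepare('UPDATE users SET failed_attempts = ?, locked_until = ? WHERE username = ?')
       ->execute([$failed, $lockedUntil, $user]);
    return false;
}
```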

Currently, only a few people in the Technology department have picture passwords until the Audience Engagement Team starts using the ASK dashboard. How well this new system fares in real-world usage with a broader user base is something we haven't been able to determine yet, but will soon. Will we see doodles of mummies and presidents on sticky notes stuck to the sides of monitors? Should someone reset their picture password, will they have trouble mixing up their current password with previous ones? These are the questions we'll find answers to soon enough. Stay tuned!

Agile Baby Steps (December 3, 2014)

By and large, most software in the world is made to a spec enshrined into immutability, then interpreted differently by the various parts of the teams building it, leading finally to a loose coupling of incompatible pieces which form a whole that doesn't serve the needs of the people it was built for, followed by delays until it does. This is part of the reason why most software is awful, and this is called "waterfall" development.

Having experienced the software development cycle fall short of expectations a few times, we've decided to look at a more Agile software development process, perhaps with some skepticism since its rhetoric tends to sound a bit like Scientology or Crossfit, but with code. Much like any ideology, Agile is split into several camps, including Scrum, Kanban, and eXtreme Programming. Kanban was inspired by manufacturing processes and the ability to change the design of a thing when a problem is discovered; Scrum involves a full-time "Scrum Master"; and eXtreme Programming is known for its focus on pair programming. After reading Jonathan Rasmusson's The Agile Samurai, we decided to start adopting pieces of Agile methodologies into our latest web project.

Previously on this blog, we've introduced our mobile development and the transformation of kiosks into the greater ASK program. On the other side of the conversations happening on our mobile app is our audience engagement team, who will need to field questions from several people at once while outnumbered by curious visitors, all while bringing up relevant information on what artwork is near each user, and more. This is software for a job which hasn't exactly existed in the past. Unlike building, say, a blog, whose use cases are well established and understood, we needed a solid and clear understanding of what this dashboard would do.

Dashboard

We adopted Agile user stories so that our four-strong web developer team would ultimately land on the same page. Using a textbook Agile technique, we wrote stories down on physical sticky notes using a strong cryptographic cipher known as my handwriting. Bringing our stakeholder, designer, and entire team into discussion around these stories helped build a coherent narrative, which prompted more of the right questions and helped push decisions along quickly. This, along with standup meetings, helped keep our vision consistent with our code. Unlike the waterfall model, the project was subject to change as we discovered new problems, so that we would build a system that would survive reality.

Sticky Notes

Although sticky notes enforced a consistent narrative, the pieces that needed to be built were still siloed in individual developers' todo lists. This turned out to be problematic, since it disconnected work from stories and scattered it into tasks across everything rather than tackling one story at a time. Our first in-house alpha launch fell short of its release date partially due to underestimating the amount of front-end work that needed to be done—this was also our first use of a JavaScript front-end framework. Since tasks were not attached to stories, it wasn't easy to tell, from a project management perspective, which particular stories were affected, or even whether a task belonged to any story at all and should have been worked on. Because of that, there was no new knowledge to gain by looking at the sticky notes and thus, everyone forgot about them. We also happened to be using free sticky notes gifted to us by a vendor we've worked with, so with the science of wall adhesive still not quite perfected, a few stories literally fell through the cracks and weren't rediscovered until we rearranged the furniture in the office.

After that first release, we migrated the stories into Trello, which is structured so that each “card” represents a story with checklists attached. With that structure, it’s a lot easier to see the time needed for a story grow from its original estimate as tasks are added. Once it reaches a rough “too many tasks” peak, it can then be split into multiple stories with a new time score added in, triggering a reality check on the overall project scope.

Trello Dashboard... board

Another aspect of Agile we adopted was test-driven development. The dashboard is made up of several layers of code and data, with a SlimPHP API at the core, a basic FuelPHP site as its mantle, and a large and very busy AngularJS application at its surface. If a bug should be discovered, it's easier to find in a test for just that part of the stack while it's being built than to build everything and go on a time-consuming multi-layer bug hunt. Writing tests hasn't eliminated bugs, but it has reduced them to mostly display or protocol/connectivity-related issues. In our case, we used PHPUnit for the SlimPHP and FuelPHP layers, and Jasmine for the Angular layer. Since we're using Jenkins as our continuous deployment system, we were able to make sure code was only pushed live when tests passed.
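For a flavor of what those tests look like, here's a minimal PHPUnit sketch. The ChatAssigner class below is a made-up stand-in rather than actual dashboard code, but the shape is the point: small, layer-local assertions that Jenkins can run on every push.

```php
<?php
// Illustrative only: a tiny unit test in the PHPUnit style we use for the
// PHP layers. ChatAssigner is a hypothetical stand-in for dashboard logic
// that routes an incoming visitor question to the least-busy team member.

use PHPUnit\Framework\TestCase;

class ChatAssigner
{
    // Given open-chat counts per team member, pick the least-loaded one.
    public static function pick(array $openChatsByMember): ?string
    {
        if ($openChatsByMember === []) {
            return null;
        }
        asort($openChatsByMember);
        return array_key_first($openChatsByMember);
    }
}

class ChatAssignerTest extends TestCase
{
    public function testPicksLeastLoadedTeamMember(): void
    {
        $this->assertSame('sara', ChatAssigner::pick(['tim' => 3, 'sara' => 1, 'jo' => 2]));
    }

    public function testReturnsNullWhenNobodyIsAvailable(): void
    {
        $this->assertNull(ChatAssigner::pick([]));
    }
}
```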

Even with unit and integration tests passing, there is no better bloodhound for bugs than a human operator. Before any story can be moved to "Done," it must go through "Verification," where I, and sometimes others, recreate the question-and-answer interaction that the story describes, and only move it to Done if nothing is missing. This catches a lot of "it works on my machine" type issues, which can then be tossed back to be revisited and verified again. After verification, we then run a simulated test in-gallery with staff from other departments, where the whole flow of the visitor experience is tested holistically.

App Testing

We’re not only inventing a new type of application with our dashboard, but also a new type of workflow for a new type of job to cover it. The moving target of the scope as we tread into new territory is bound to change. Even though we’re not canon orthodox Agile (can’t always pair program!), we’re moving forward with much more agility than we’ve been historically.

Teaching next-gen art making for the next generation of artists (August 28, 2014)

Since we first made use of our 3D printer, we've grown the number of things we've used it for, ranging from creating a participatory experience at our screening of Brooklyn Castle to combining Japanese sculpture with the Internet of Things. This has introduced new ways people can experience our art collection, with 3D printing as a means to that end. As this technology has evolved, however, artists have also found a use for 3D printing in the creation of art itself, building a new genre of sculpture crafted digitally but brought into the physical world one layer of material at a time.

3D Printed Strandbeest in Front of 3D Printer at Shapeways NYC

Photo by Shapeways (CC BY-NC-ND 2.0)

The first objects explicitly 3D printed as art were created sometime in the late '90s, using a 3D printer the size of a refrigerator with a price tag bigger than a yacht's. In the decade that followed, the RepRap project introduced the idea of a desktop 3D printer, eventually leaving us in the present day, where a 3D printer can cost less than a smartphone. With 3D printing in the hands of an increasing number of artists, 3D printed art is proliferating alongside long-established forms of art-making and their long-established methods of learning the craft.

Every summer, our Education department’s Gallery/Studio Program brings in kids from around the community to join in workshops led by a professional teaching artist to learn how art is made and create works inspired by a work in our collection. This year, we launched Forward Thinking: 3D Printing, a class for tweens which incorporated 3D scanning and printing along with traditional clay work to create 3D scanned and printed works inspired by Fred Wilson’s Gray Area and the Beaded Crown (Ade) of Onijagbo Obasoro Alowolodu, Ògògà of Ikere. This class was sponsored by Deutsche Bank Americas Foundation through their Art & Emerging Technology grant program, which advances the usage of interactive technologies in cultural institutions.

GSP Student Heads

Students used 3D scans of their heads to create busts, which they decorated with headpieces of their own design. We used more of the low-cost 3D Systems Cube 2 printers for printing, the 123D Catch app on iPads for scanning, and Tinkercad for 3D modeling, to keep the cost of continuing to make art after the class reasonably low and accessible.

In addition to learning about 3D scanning and printing for our set of printers and software, the class was visited by working artists who got a chance to show how they make art and what it’s like to try making a living from it. Earlier this month the class also took a field trip to the Shapeways Factory of the Future in Long Island City, where they saw high-end printers in action transforming digital designs into SLS nylon, dyed gypsum, and other advanced materials.

After building their own individual works, students also got a chance to work together to create a collaborative work inspired by Gerrit Rietveld’s Doll’s House and the museum’s own Studio 1 room, which is being processed for printing in full-color sandstone at Shapeways at this very moment.

The students’s artworks will be on display in the Con Edison gallery on the first floor this fall starting September 13th, so be sure to check it out! In addition to the display in the museum, the crew behind Forward Thinking: 3D Printing will be presenting on the class at World Maker Faire in Queens on September 20th to the 21st. We hope to see you there!

Cloud Watching (May 15, 2014)

A few years ago we moved our website infrastructure from its dusty basement to the Cloud. This brought a certain peace of mind in knowing that even if the museum building's internet connection or electricity was interrupted, the site would still stay up. As it turns out, the Cloud is also dependent on electricity and network connectivity, so while a storm in Brooklyn would leave our digital infrastructure unscathed, one in Virginia might make a dent. Since that fateful summer we've progressed in fine-tuning our virtual servers, databases, and content storage and distribution. Without going so far as to build Google/Facebook/Netflix-scale high-availability infrastructure and the 24/7 DevOps team that goes with it, we've gotten pretty far in making sure our website stays online.

As with building any infrastructure, a disaster plan should also be in place to make sure people know what's happening when something goes wrong. Part of the alphabet soup of Amazon Web Services, Route 53, is configured with the ability to automatically route web visitors away from a server having—or about to have—issues to a static placeholder page hosted in an S3 bucket based in Oregon, independent of our website assets or server-side code. This is called a DNS failover. The switch is triggered by an AWS health check which we've set up on our production server to detect whether the web or database server is unavailable. If that's the case, the health check, a simple PHP page that returns only an HTTP header response, returns an HTTP 503 error; otherwise it returns an HTTP 200 OK response. The end result is a "fail whale" page that shows up when the site is going down or already down.

The nicest error page we hope you never see.
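As a sketch of what such a page can look like (our actual check is catered to our own stack; the hostname and credentials below are placeholders), something along these lines is enough for Route 53 to poll:

```php
<?php
// Minimal illustrative health check page: answers with a bare HTTP status,
// 200 when the essentials respond and 503 when they don't. The hostname
// and credentials are placeholders, not our real configuration.

function databaseIsUp(): bool
{
    try {
        $db = new PDO('mysql:host=127.0.0.1;dbname=website', 'healthcheck', 'secret', [
            PDO::ATTR_TIMEOUT => 2, // fail fast so the check itself stays quick
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]);
        $db->query('SELECT 1');
        return true;
    } catch (PDOException $e) {
        return false;
    }
}

// If PHP is executing this page at all, the web server is up;
// the database is the other dependency worth probing.
http_response_code(databaseIsUp() ? 200 : 503);
// No body: the health check only needs the status line.
```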

Aside from letting site visitors know when things are amiss, the same AWS health check triggers an email notification to our developer team, which is then picked up by their smartphones (or, in my case, a Nokia 515, which happens to have Exchange support). At the office, we've created a glowing 3D printed status indicator based on the 3D scan of Emma-o, King and Judge of Hell, aka Yama, aka 閻魔, whom we scanned for a 3D printed chess project some time ago.

All’s well in the world.
The cloud is stormy tonight.

Emma-Ohnoes, King and Judge of Cloud Computing, uses an Arduino Yún and Temboo to connect to the same health check page that Route 53 uses. Like the DNS failover setup, it checks the health check page every minute; if a 200 OK is detected, it glows blue, otherwise it pulses red using pulse width modulation (PWM) on one of the Arduino's PWM-capable pins.

Our health check page is pretty specifically catered to just our systems, but Amazon has put together a neat guide on how to create one for your own architecture. The Arduino sketch, schematic, and 3D files for Emma-Ohnoes, however, can easily be adapted to any website by changing the targetUrl to either your own health check page or the website's URL directly, to see if it's up or not.

Download Emma-Ohnoes’s Arduino sketch and schematics (MIT license) on Github

Download Emma-Ohnoes’s 3D models (CC-BY-3.0) on Thingiverse

How about a nice game of 3D printed chess? (September 26, 2013)

Earlier this year, we started exploring how 3D printing could enhance the visitor experience and began by introducing it on that month's sensory tour. In addition to tours, we also host film screenings, and as my colleague Elisabeth mentioned, this Saturday, September 28th, we'll be hosting a special screening of Brooklyn Castle, a film about a local school with a talented chess team that has crushed more chess championships than any other school in the US. Since the screening also includes some chess playing outside the film, we figured it would be great to tie that into the context of the museum's collection by curating and scanning our own 3D printed chess set.

Robert Nardi photographing Senwosret III

Since April we've learned quite a bit about what makes an ideal scan and have passed that knowledge on to our resident camera wizard, Bob Nardi, who I teamed up with for this project. We already had scans of the Lost Pleiad and the Double Pegasus, so we added them into the mix as the Queen and Knight, respectively. We also found the best candidates for the remaining pieces.

We worked with our conservation staff to get access to the pieces which weren't on view, including the 3,000-plus-year-old Egyptian gaming piece Bob and I were a little nervous around. Using the same software combination of 123D Catch and Meshmixer, the scanned models were then generated, cleaned up, and made watertight for printing.

Having the 3D models ready to print, I worked on resizing them as chess pieces, making sample prints with some unsightly lime-green PLA we had lying around. Chess pieces have been remixed a lot over their history, varying from the small magnetic sets you would find in travel stores to the more elaborate Frank Gehry set. By and large there's no universal standard for their size and proportions, but the US Chess Federation has some guidelines on the proportions relative to the board, which were (partially) adopted in the final design of the set.

notes_angled

In the past, we've only printed pieces on a one-by-one basis. Since there are 16 individual pieces to a chess set, that method quickly became impractical. Using the software for our Cube printer, we were able to add multiple models onto the platform and have the software automatically space them out. Marveling at the efficiency of this plan, I made a test run and walked into the room our 3D printer resides in, only to find that I had made glitch art.

Print Fail

The aforementioned room is generally great due to it being more or less soundproofed from the rest of the office, but due to other equipment which shares the space, it's kept at a crisp 60°F. Since there's not much movement happening in the room's air, that doesn't tend to affect the prints, but it does seem to make the glue used to stick the prints to the platform, and the plastic web between the pieces as they're being printed, stiffen faster. As a result, some individual pieces would be just attached enough to each other to get yanked off the platform mid-print and eventually turn into Katamari Damacy.

I managed to work around the temperature issue by turning on the raft option in the Cube Software settings. A raft in this case is a grid which is printed on the platform before the models are printed on top of it.

raft_printing

A raft keeps smaller pieces from detaching from the platform, since it expands each piece's connection to the platform beyond its otherwise tiny base size. The grid needs to be manually cut off around the edges after the print is complete, but that's usually a quick process akin to peeling or shucking a really plasticky fruit or veggie.

finished_pieces_with_raft

After peeling, it makes for a nice set ready to be shipped a whole three floors down! Sadly, I won't be on this side of the Atlantic on Saturday due to other fun stuff, but if you want to see 3D printed chess in action, stop by and have fun in my place!

pieces_ready

Just like our previous scans, we’re releasing the latest models under a Creative Commons license which you can download and print on your own 3D printer.

Download all models used in our chess collection (CC-BY-3.0) on Thingiverse

Replicating a 19th Century Statue with 21st Century Tech (April 17, 2013)

My first exposure to the world of 3D printing took place in 2009, approximately 500 feet under the Earth's surface in a former missile silo in the Washington state desert. There, three founders of a new Brooklyn-based 3D printer company hosted a workshop on building a 3D printer kit as part of Toorcamp, a nerdy version of Burning Man. At the end of the kit's 4-hour assembly we printed out some tiny jewelry boxes. At the time, 3D printing seemed to me like a novel technology for hackers with lots of potential, but not one I had any specific use for. Four years later, that use was found.

Museum sculptures are an interesting case in accessibility; they exist in a place the public can access but usually aren't allowed to touch. Most sculpture materials aren't too smelly or noisy, so that limits the sensory experience to sight. However, not everyone has the ability to see, and although special exemptions are occasionally made to allow the visually impaired to touch some sculptures, you can only feel so much of a large object.

Sight includes the ability to expand the size or detail of what you're looking at by moving closer to or further away from the object. This isn't possible on the two-dimensional web, so the paradigm of pairing a "thumbnail" image with a full-size counterpart became an established method for having both a high-level and an up-close view of things. With similar constraints in mind, we've utilized 3D scanning and printing to create a "thumbnail" for large sculptures which can be used as a tactile map of the object's entire shape.

So how do you go from marble masterpiece to plastic replica? Like 3D printing, 3D scanning has also recently broken out of the expensive-equipment-for-expensive-professions world and into the much more affordable world of hobbyists and institutions with modest budgets. Autodesk's 123D Catch is a free download which was launched last year as a way to create 3D models from photos using stereophotogrammetry, which basically means taking a bunch of photos from different angles and letting software figure out how far away stuff in one photo is from stuff in the next.

The conditions those photos are taken in, both in the camera and in everything surrounding the subject, are pretty unforgiving; out of the first eight attempts I made scanning sculptures, only the Double Pegasus ended up looking close to what it was supposed to. From these initial attempts and some research, I was able to narrow down the list of things to scan next by whether they met these criteria:

  • Can’t be shiny
  • Can’t be or be inside something transparent
  • Can’t be wiggly/moving (no scanning museum visitors)
  • Must fit in the frame when shot from 30 different angles around a full 360 degrees
  • Must be lit under consistent lighting
  • Can’t have shadows cast on it when shooting
  • Can’t have too many things moving around in the shot (museum visitors indoors, leaves on a windy day outdoors)

When Rachel recommended Randolph Rogers’s The Lost Pleiad, it so perfectly matched the criteria that I saw myself rendering a perfect model from the first scan. Eleven scanning attempts later, I found out:

  • Most cameras try to auto-adjust exposure when shooting towards a source of light, ruining the scan
  • Bright spotlights on bright white marble create a blur between the edge of the object and the background, ruining the scan
  • Turning off said spotlights without cranking up a camera’s ISO settings leads to slower shutter speeds, which lead to blurry images, ruining the scan
  • Cameraphones and point-and-shoot cameras don’t have very high ISO settings and I don’t have perfectly steady hands

Scan #11 used a Canon SLR with a manually set white balance, exposure level, and high ISO setting (5000); only auto-focus remained in the camera's control. Approximately 30 shots in a mostly even perimeter around the statue were taken and re-taken in case the first take was out of focus, along with around 12 overhead shots in a smaller perimeter above and around the statue. After sorting out any blurry photos, the images were uploaded into the Windows version of 123D Catch, which shows the angles at which each photo was taken.

123dcatch_windows_600px

Before this was printer-ready, the object had to be cleaned up so that it had a flat base and didn't include stuff in the background picked up by the scan. We used MeshMixer, a free download.

With the texture removed, the remaining mesh looked as though it was melting somewhere without gravity, with swaths of wall and floor surrounding it (alt+left mouse drag to move around, alt+right mouse drag to zoom in).

meshmixer_plane_cut_600px

I removed floating artifacts by using the plane cut tool (Edits -> Plane Cut). This was also useful for removing bulges on the surface and slicing a perfectly flat base for the model. The surface of the object was also bumpy and jagged where it should be smooth (arms, torso, etc.). The way I solved this was by using the smoothing brush.

meshmixer_smooth_brush_600px

The smoothing brush (Smoothbrush/1) is basically digital sandpaper; for each rough area, I adjusted the size and strength of the brush to match the size and roughness of the surface until it looked more like it was supposed to. In addition to the removal of defects, the object had to be made "watertight" and have any holes and cracks sealed before being printable.

meshmixer_inspector_600px

With the inspector tool (Analysis -> Inspector), a floating color-coded sphere pointed to a gap near the bottom of the robe, which was filled by right-clicking the sphere, choosing to smooth the boundary, then left-clicking the sphere.

With the object ready, I exported it as an STL file (File -> Export), a format which most if not all 3D printers can print with. For the printer we use at the Brooklyn Museum (3D Systems Cube v2), the STL file needed to be processed using their Cube Software, also a free download. Using that, I imported the STL file and clicked Heal to double-check the model’s watertightness. Since the model itself was fairly small, I also used the Orient & Scale tool to make it 260% bigger. In Settings, I removed the raft (the Cube uses a special glue that makes printing a platform raft unnecessary) and also removed supports since most of the statue probably wouldn’t need them. Finally, I centered it with the Center icon and hit Build. For simplicity, I built the final .cube file to a USB drive that I could just plug into the printer.

The printer’s on-screen menu has incredibly clear and simple step-by-step directions on how to print, so I won’t repeat them here. Five hours later, the print was completed and looked close enough to be a handheld tactile map of the real McCoy, with only a minor amount of overhanging plastic extrusion in areas near the bottom of the robe and under the raised arm.

pleiads_comparison

BONUS: We’re also releasing the STL files under a Creative Commons license for both the Double Pegasus and The Lost Pleiad which you can download and print on your own 3D printer:

Download Double Pegasus (CC-BY 3.0) on Thingiverse
Download The Lost Pleiad (CC-BY 3.0) on Thingiverse
