3D printing – BKM TECH / Technology blog of the Brooklyn Museum

Teaching next-gen art making for the next generation of artists

Thu, 28 Aug 2014

Since we first made use of our 3D printer, we've steadily grown the number of things we use it for, from creating a participatory experience at our screening of Brooklyn Castle to combining Japanese sculpture with the Internet of Things. In all of these projects, 3D printing served as a new way for people to experience our art collection. As the technology has evolved, however, artists have also found a use for 3D printing in the creation of art itself, building a new genre of sculpture crafted digitally but brought into the physical world one layer of material at a time.

3D Printed Strandbeest in Front of 3D Printer at Shapeways NYC

Photo by Shapeways (CC BY-NC-ND 2.0)

The first objects explicitly 3D printed as art were created some time in the late 90s, using a 3D printer the size of a refrigerator with a price tag bigger than a yacht's. In the decade that followed, the RepRap project introduced the idea of a desktop 3D printer, eventually leaving us in the present day, where a 3D printer can cost less than a smartphone. With the technology in the hands of an increasing number of artists, 3D printed art is proliferating alongside long-established forms of art-making and their long-established methods of learning the craft.

Every summer, our Education department's Gallery/Studio Program brings in kids from around the community to join workshops led by a professional teaching artist, learning how art is made and creating works inspired by pieces in our collection. This year, we launched Forward Thinking: 3D Printing, a class for tweens that combined 3D scanning and printing with traditional clay work to create pieces inspired by Fred Wilson's Gray Area and the Beaded Crown (Ade) of Onijagbo Obasoro Alowolodu, Ògògà of Ikere. The class was sponsored by Deutsche Bank Americas Foundation through their Art & Emerging Technology grant program, which advances the use of interactive technologies in cultural institutions.

GSP Student Heads

Students used 3D scans of their heads to create busts, which they decorated with headpieces of their own design. To keep the cost of continuing to make art after the class low and accessible, we used more of the low-cost 3D Systems Cube 2 printers for printing, the 123D Catch app on iPads for scanning, and Tinkercad for 3D modeling.

In addition to learning 3D scanning and printing on our own set of printers and software, the class was visited by working artists, who showed how they make their art and what it's like to try to make a living from it. Earlier this month the class also took a field trip to the Shapeways Factory of the Future in Long Island City, where they saw high-end printers in action, transforming digital designs into SLS nylon, dyed gypsum, and other advanced materials.

After building their own individual works, students also got a chance to work together to create a collaborative work inspired by Gerrit Rietveld’s Doll’s House and the museum’s own Studio 1 room, which is being processed for printing in full-color sandstone at Shapeways at this very moment.

The students' artworks will be on display in the Con Edison gallery on the first floor this fall starting September 13th, so be sure to check it out! In addition to the display in the museum, the crew behind Forward Thinking: 3D Printing will be presenting on the class at World Maker Faire in Queens on September 20th and 21st. We hope to see you there!

Cloud Watching

Thu, 15 May 2014

A few years ago we moved our website infrastructure out of its dusty basement and into the Cloud. This brought a certain peace of mind: even if the museum building's internet connection or electricity were interrupted, the site would stay up. As it turns out, the Cloud also depends on electricity and network connectivity, so while a storm in Brooklyn would leave our digital infrastructure unscathed, one in Virginia might make a dent. Since that fateful summer we've kept fine-tuning our virtual servers, databases, and content storage and distribution. Without going so far as to build Google/Facebook/Netflix-scale high-availability infrastructure and the 24/7 DevOps team that goes with it, we've gotten pretty far in making sure our website stays online.

As with building any infrastructure, a disaster plan should also be in place so that people know what's happening when something goes wrong. Route 53, part of the alphabet soup of Amazon Web Services, is configured to automatically route web visitors away from a server that is having (or about to have) issues to a static placeholder page hosted in an S3 bucket in Oregon, independent of our website assets and server-side code. This is called DNS failover. The switch is triggered by an AWS health check we've set up on our production server: a simple PHP page that returns only an HTTP header response. If the web or database server is unavailable, it returns an HTTP 503 error; otherwise it returns an HTTP 200 OK. The end result is a "fail whale" page that shows up when the site is going down or already down.
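
A page like that can be tiny. Here's a minimal sketch of one in PHP, assuming a MySQL-backed site; the hostname and credentials are hypothetical placeholders, and our real check is tailored to our own stack:

```php
<?php
// healthcheck.php -- a minimal sketch of the health check described above.
// If the web server itself is down, Route 53 gets no response at all,
// which also counts as a failed check.
$db = @mysqli_connect('db.example.internal', 'healthcheck', 'secret');

if ($db === false) {
    // The web server answered but the database didn't: report unhealthy
    // so Route 53 fails over to the static page in S3.
    header('HTTP/1.1 503 Service Unavailable');
} else {
    header('HTTP/1.1 200 OK');
    mysqli_close($db);
}
// No body is needed; the health check only looks at the status code.
```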

The nicest error page we hope you never see.

Aside from letting site visitors know when things are amiss, the same AWS health check triggers an email notification to our developer team, which is then picked up by their smartphones (or, in my case, a Nokia 515, which happens to have Exchange support). At the office, we've also created a glowing 3D printed status indicator based on our 3D scan of Emma-O, King and Judge of Hell, aka Yama, aka 閻魔, whom we scanned for a 3D printed chess project some time ago.

All's well in the world.
The cloud is stormy tonight.

Emma-Ohnoes, King and Judge of Cloud Computing, uses an Arduino Yún and Temboo to poll the same health check page that Route 53 uses. Like the DNS failover setup, it connects to the health check page every minute; if a 200 OK is detected, it glows blue, otherwise it pulses red, driving the LED through pulse width modulation (PWM) on one of the Arduino's PWM-capable output pins.
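
The actual sketch (downloadable below) uses Temboo's Arduino library for the HTTP call, but the core logic looks roughly like the following sketch, which substitutes the Yún's stock Bridge library and curl for Temboo; the URL and pin numbers are hypothetical placeholders:

```cpp
// Rough sketch of Emma-Ohnoes's poll-and-glow loop; the real sketch
// uses Temboo for the HTTP request, so treat this Bridge/curl version
// as an equivalent stand-in rather than the actual code.
#include <Bridge.h>
#include <Process.h>

const char* targetUrl = "http://www.example.org/healthcheck.php"; // placeholder
const int bluePin = 9;   // PWM-capable pin driving the blue LED
const int redPin  = 11;  // PWM-capable pin driving the red LED

void setup() {
  Bridge.begin();
  pinMode(bluePin, OUTPUT);
  pinMode(redPin, OUTPUT);
}

// Ask curl (running on the Yún's Linux side) for just the HTTP status
// code of the health check page.
bool siteIsUp() {
  Process p;
  p.runShellCommand(String("curl -s -o /dev/null -w '%{http_code}' ") + targetUrl);
  String code = "";
  while (p.available()) {
    code += (char)p.read();
  }
  return code.startsWith("200");
}

void loop() {
  if (siteIsUp()) {
    analogWrite(redPin, 0);
    analogWrite(bluePin, 255);  // steady blue: all's well
  } else {
    analogWrite(bluePin, 0);
    // One red pulse: ramp the LED's brightness up and down with PWM.
    for (int b = 0; b <= 255; b += 5) { analogWrite(redPin, b); delay(10); }
    for (int b = 255; b >= 0; b -= 5) { analogWrite(redPin, b); delay(10); }
  }
  delay(60000L);  // poll once a minute, like the Route 53 check
}
```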

Our health check page is catered pretty specifically to our systems, but Amazon has put together a neat guide on how to create one for your own architecture. The Arduino sketch, schematic, and 3D files for Emma-Ohnoes, however, can easily be adapted to any website by changing the targetUrl either to your own health check page or to the website's URL directly to see if it's up or not.

Download Emma-Ohnoes's Arduino sketch and schematics (MIT license) on GitHub

Download Emma-Ohnoes's 3D models (CC-BY-3.0) on Thingiverse

How about a nice game of 3D printed chess?

Thu, 26 Sep 2013

Earlier this year, we started exploring how 3D printing could enhance the visitor experience, beginning by introducing it on that month's Sensory Tour. In addition to tours, we also host film screenings, and as my colleague Elisabeth mentioned, this Saturday, September 28th, we'll be hosting a special screening of Brooklyn Castle, a film about a local school whose talented chess team has crushed more chess championships than any other school in the US. Since the screening also includes some chess playing outside the film, we figured it would be great to tie that into the context of the museum's collection by curating and scanning our own 3D printed chess set.

Robert Nardi photographing Senwosret III

Since April we've learned quite a bit about what makes an ideal scan, and we've shared that knowledge with our resident camera wizard, Bob Nardi, whom I teamed up with for this project. We already had scans of the Lost Pleiad and the Double Pegasus, so we added them into the mix as the Queen and Knight, respectively. We then found the best candidates for the remaining pieces.

We worked with our conservation staff to get access to the pieces that weren't on view, including the roughly 3,000-year-old Egyptian gaming piece Bob and I were a little nervous around. Using the same software combination of 123D Catch and Meshmixer, the scanned models were then generated, cleaned up, and made watertight for printing.

With the 3D models ready to print, I worked on resizing them as chess pieces, making sample prints with some unsightly lime-green PLA we had lying around. Chess pieces have been remixed a lot over their history, varying from the small magnetic sets you'd find in travel stores to the more elaborate Frank Gehry set. By and large there's no universal standard for size and proportions, but the US Chess Federation has some guidelines on proportions relative to the board, which were [partially] adopted in the final design of the set.

notes_angled

In the past, we've only printed pieces on a one-by-one basis. Since there are 16 individual pieces to a chess set, that method quickly became impractical. Using the software for our Cube printer, we were able to add multiple models onto the platform and have the software automatically space them out. Marveling at the efficiency of this plan, I made a test run and walked into the room our 3D printer resides in, only to find that I had made glitch art.

Print Fail

The aforementioned room is generally great, being more or less soundproofed from the rest of the office, but because of other equipment sharing the space, it's kept at a crisp 60°F. Since there isn't much air movement in the room, that doesn't tend to affect the prints directly, but the cold does seem to make the glue used to stick the prints to the platform, and the plastic web between the pieces, stiffen faster while printing. Some individual pieces would be attached to each other just enough to get yanked off the platform mid-print and eventually turn into Katamari Damacy.

I managed to work around the temperature issue by turning on the raft option in the Cube Software settings. A raft in this case is a grid which is printed on the platform before the models are printed on top of it.

raft_printing

A raft keeps smaller pieces from detaching from the platform, since it expands their connection to the platform beyond their otherwise tiny base size. The grid needs to be manually cut off around the edges after the print is complete, but that's usually a quick process akin to peeling or shucking a really plasticky fruit or veggie.

finished_pieces_with_raft

After peeling, it makes for a nice set ready to be shipped a whole three floors down! Sadly, I won't be on this side of the Atlantic on Saturday due to other fun stuff, but if you want to see 3D printed chess in action, stop by and have fun in my place!

pieces_ready

Just like our previous scans, we're releasing the latest models under a Creative Commons license, so you can download and print them on your own 3D printer.

Download all models used in our chess collection (CC-BY-3.0) on Thingiverse

Teaching with a 3D Simulacrum

Thu, 25 Apr 2013

When Shelley and David brought up the idea of 3D printing, my not-so-inner tech geek and my really-blatantly-outer education geek got pretty excited. As Shelley mentioned in her previous post, 3D printing is a hot topic in the museum world right now, with some exciting experimentation happening around the world. Just this week I was at a meeting at the American Museum of Natural History, hearing about some of the exciting 3D printing projects they're working on with their teen programs.

For our use, it made sense to start with the Sensory Tour, our monthly tour for visitors with visual impairments as well as anyone who wants to experience art using more than just their sense of sight. We've had continued great success using raised line drawings (they're just what they sound like; the lines are literally raised from the surface of the paper) to help people feel the contours of two-dimensional art. Why not try the same thing with one more dimension in the mix?

It took some creative thinking and interdepartmental teamwork to figure out an appropriate object, and the Lost Pleiad hit all the right marks. So, armed with a few 3D prints of Randolph Rogers's sculpture in our teaching bag, we hit the galleries in the capable teaching hands of Megan Holland and Brigitte Moreno to "explore lines of ink on paper, lines of movement, and lines of poetry in our most recent exhibition, Fine Lines: American Drawings from the Brooklyn Museum."

So, how did it go, you’re probably wondering?  Did having these touchable models deepen participants’ engagement with the artwork?  Did people walk away feeling like they’d had a satisfying tactile experience with this sculpture?  Is 3D printing going to usurp the place of the statue in museums?  These are all things that were on the minds of the educators as we stepped into this new semi-charted territory.

Fine Lines Sensory Tour

As with most complicated issues, the results were mixed. Visitors were visibly, physically excited by our inclusion of this technology. They paid careful, detailed attention to the surface of the sculpture and all of its contours. They held up the 3D models and compared them to the original sculpture in front of them.

Fine Lines Sensory Tour

They looked at the 3D prints from all angles (more than they were able to do with the original, and not unlike the animation commenter Sebastian Heath made from the Thingiverse files David shared in his last post).

During this Sensory Tour, we also passed around samples of marble in various finishes, along with scarves, to consider the contrast between the dense stone and the diaphanous fabric. People gave them about as much time and attention as they had the 3D prints, but the stone and scarves seemed to spark a wider variety of conversation and brought people's focus back to the sculpture more quickly. Not that this is all on the technology, of course; as educators, we're pretty comfortable using materials like the stone samples and scarves to get quality audience conversation going.

The 3D prints are new tools for us to play with, and we need to work with them more to get more comfortable. What are the best kinds of questions to ask people when we put these into their hands?  As blogged about by Alastair Somerville, does it work better to manipulate the image for emphasis, rather than staying strictly true to the original?

In our post-game conversation, the education team behind the Sensory Tours agreed that 3D prints are great tools to help people feel the weight and balance of a sculpture. They're "a new way of making lines; a digital brushstroke," said one educator, and since this month's Sensory Tour was focused on lines, we couldn't think of a better place to start this project.

Replicating a 19th Century Statue with 21st Century Tech

Wed, 17 Apr 2013

My first exposure to the world of 3D printing took place in 2009, approximately 500 feet under the Earth's surface in a former missile silo in the Washington state desert. There, three founders of a new Brooklyn-based 3D printer company hosted a workshop on building a 3D printer kit as part of Toorcamp, a nerdy version of Burning Man. At the end of the kit's 4-hour assembly we printed out some tiny jewelry boxes. At the time, 3D printing seemed to me like a novel technology for hackers with lots of potential, but not one I had any specific use for. Four years later, that use was found.

Museum sculptures are an interesting case in accessibility: they exist in a place the public can access, but usually can't be touched. Most sculpture materials aren't particularly smelly or noisy, which limits the sensory experience to sight. However, not everyone has the ability to see, and although special exemptions are occasionally made to allow visually impaired visitors to touch some sculptures, you can only feel so much of a large object.

Sight includes the ability to expand the size or detail of what you're looking at by moving closer to or further away from the object. This isn't possible on the two-dimensional web, so the paradigm of pairing a "thumbnail" image with a full-size counterpart became an established method for having both a high-level and an up-close view of things. With similar constraints in mind, we've used 3D scanning and printing to create a "thumbnail" of a large sculpture, which can serve as a tactile map of the object's entire shape.

So how do you go from marble masterpiece to plastic replica? Like 3D printing, 3D scanning has also recently broken out of the expensive-equipment-for-expensive-professions world and into the much more affordable world of hobbyists and institutions with modest budgets. Autodesk's 123D Catch is a free download, launched last year as a way to create 3D models from photos using stereophotogrammetry, which basically means taking a bunch of photos from different angles and letting software figure out how far away stuff in one photo is from stuff in the next.
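
To give a rough sense of the math the software is doing: in the simplest case of just two calibrated, side-by-side photos, the depth of a point follows from how far it appears to shift between the shots. A sketch of that relationship (123D Catch solves a much more general multi-view version of this problem):

$$ Z = \frac{f \cdot B}{d} $$

Here \(Z\) is the distance to the point, \(f\) the camera's focal length, \(B\) the distance between the two camera positions, and \(d\) the disparity, i.e. how far the point appears to move between the two photos. Nearer things shift more between shots, which is the cue the software works from.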

The conditions those photos are taken in, both inside the camera and in everything surrounding the subject, are pretty unforgiving; out of the first eight attempts I made scanning sculptures, only the Double Pegasus ended up looking close to what it was supposed to. From these initial attempts and some research, I was able to narrow down the list of things to scan next by whether they met these criteria:

  • Can’t be shiny
  • Can’t be or be inside something transparent
  • Can’t be wiggly/moving (no scanning museum visitors)
  • Must fit in a photo when shot from 30 different angles around a full 360 degrees
  • Must be lit under consistent lighting
  • Can’t have shadows cast on it when shooting
  • Can't have too many things moving around in the shot (museum visitors indoors, leaves on a windy day outdoors)

When Rachel recommended Randolph Rogers’s The Lost Pleiad, it so perfectly matched the criteria that I saw myself rendering a perfect model from the first scan. Eleven scanning attempts later, I found out:

  • Most cameras attempt to auto-adjust exposure when shooting towards a source of light, ruining the scan
  • Bright spotlights on bright white marble create a blur between the edge of the object and the background, ruining the scan
  • Turning off said spotlights without cranking up a camera's ISO setting leads to slower shutter speeds, which lead to blurry images, ruining the scan
  • Cameraphones and point-and-shoot cameras don't have very high ISO settings, and I don't have perfectly steady hands

Scan #11 used a Canon SLR with a manually set white balance, exposure level, and high ISO setting (5000); only auto-focus remained in the camera's control. Approximately 30 shots were taken in a mostly even perimeter around the statue, and re-taken if the first take was out of focus, along with around 12 overhead shots in a smaller perimeter above and around the statue. After sorting out any blurry photos, the images were uploaded into the Windows version of 123D Catch, which shows the angle at which each photo was taken.

123dcatch_windows_600px

Before it was printer-ready, the object had to be cleaned up so that it had a flat base and didn't include stuff in the background picked up by the scan. For this we used Meshmixer, another free download.

With the texture removed, the remaining mesh looked as though it were melting somewhere without gravity, with swaths of wall and floor surrounding it (alt+left mouse drag to move around, alt+right mouse drag to zoom in).

meshmixer_plane_cut_600px

I removed the floating artifacts using the plane cut tool (Edits -> Plane Cut). This was also useful for removing bulges on the surface and slicing a perfectly flat base for the model. The surface of the object was also bumpy and jagged where it should be smooth (arms, torso, etc.). I solved this with the smoothing brush.

meshmixer_smooth_brush_600px

The smoothing brush (Smoothbrush/1) is basically digital sandpaper; for each rough area, I adjusted the size and strength of the brush to match the size and roughness of the surface until it looked more like it was supposed to. In addition to the removal of defects, the object had to be made "watertight," with any holes and cracks sealed, before being printable.

meshmixer_inspector_600px

With the inspector tool (Analysis -> Inspector), a floating color-coded sphere pointed to a gap near the bottom of the robe, which was filled by right-clicking the sphere, choosing to smooth the boundary, then left-clicking the sphere.

With the object ready, I exported it as an STL file (File -> Export), a format that most, if not all, 3D printers can print from. For the printer we use at the Brooklyn Museum (3D Systems Cube v2), the STL file needed to be processed using their Cube Software, also a free download. There, I imported the STL file and clicked Heal to double-check the model's watertightness. Since the model itself was fairly small, I also used the Orient & Scale tool to make it 260% bigger. In Settings, I turned off the raft (the Cube uses a special glue that makes printing a platform raft unnecessary) and also turned off supports, since most of the statue probably wouldn't need them. Finally, I centered it with the Center icon and hit Build. For simplicity, I built the final .cube file to a USB drive that I could just plug into the printer.

The printer's on-screen menu has incredibly clear and simple step-by-step directions on how to print, so I won't repeat them here. Five hours later, the print was complete and looked close enough to be a handheld tactile map of the real McCoy, with only a minor amount of overhanging plastic extrusion in areas near the bottom of the robe and under the raised arm.

pleiads_comparison

BONUS: We're also releasing the STL files for both the Double Pegasus and The Lost Pleiad under a Creative Commons license, so you can download and print them on your own 3D printer:

Download Double Pegasus (CC-BY 3.0) on Thingiverse
Download The Lost Pleiad (CC-BY 3.0) on Thingiverse

3D Printing for Accessibility

Tue, 16 Apr 2013

In the last year, we've seen a lot happening in the museum space with 3D printing. The Smithsonian is working on what looks like an enormous project, the Met has an ongoing series of initiatives that look pretty cool, the San Francisco Asian Art Museum has hosted a "scanathon," and the Art Institute of Chicago has been actively working in the space, and those are just a handful of the current projects going on.

As part of an internal program within the Technology department, we've started a series of developer-led R&D projects: developers propose what they want to experiment with, and we set aside time in our busy work week to foster that creativity. In our first round of experiments, David Huerta wanted to work with 3D printing; he's incredibly passionate about it and has been following 3D printing projects in the industry and beyond.

Double Pegasus

Irwin S. Chanin (American, 1891-1988). Double Pegasus from the Coney Island High Pressure Pumping Station, 2301 Neptune Avenue, Brooklyn, 1936-1937. Limestone, granite, 48 x 24 x 48 in. (121.9 x 61.0 x 121.9 cm). Brooklyn Museum, Lent by The City of New York, L2003.7.2.

I'll say I needed some convincing; even in asking the team to experiment, my own thoughts tend toward practical applications, and while 3D printing is whiz-bang cool and a lot of people had ideas for applications, we just were not seeing much materialize yet. But you never know where a project can lead you, so David started his project by working with the Double Pegasus, an object from Coney Island which greets visitors in our sculpture garden.

Double Pegasus 3D Print

David Huerta with his 3D print of the Double Pegasus.

When he showed up with his 3D print, we were pretty excited, and that little physical simulacrum got me thinking about practical applications and how something like this might help our educators with their own goals of serving visitors who are blind or partially sighted. When I spoke with Rachel Ropeik in our Education department, she immediately saw the possibilities and wanted to experiment; David and Rachel are now working on a cross-departmental project to bring 3D printed objects into our series of Sensory Tours.

We consider this a fast, iterative project that aims to get the output right into our visitors' hands as we report back on our findings. We've had plenty of bumps in the road; just finding an object that was appropriate for the tour and that we were able to capture was challenging. In the coming week or two, David will blog a lot more on the technical ins and outs of the project, and Rachel will report on education goals and visitor reaction. The Double Pegasus is just the start.
