Cheap DIY 3D Scanning… or not

3D scanning is frustrating! This article did not turn out the way I wanted. I intended for this to be a how-to guide for cheap 3D scanning, filled with pictures and screenshots. It’s an incredibly useful concept, turning a physical object into a digital one, and there are many applications for such an ability, from customizing an object to repairing or duplicating it. Unfortunately, cheap 3D scanning is currently, for me at least, rather impractical. Let me be clear: I don’t want to discourage anyone from doing their own DIY 3D scanning; quite the opposite, I hope that you do, so that the software develops more quickly. As I learned, software is one of the greatest obstacles in scanning.

For this project I did not spend any money on software, as there are several free programs that help with the task of 3D scanning, some of them open source, so trying them fit the “cheap” theme. As for hardware, I bought a line laser ($30) and a power adapter ($10) that lets the Xbox 360 Kinect work on a computer, and I already owned a digital camera and an HD webcam. The power of the computer running the programs is also a big factor; mine has a quad-core processor, 8 GB of RAM, and a mid-level graphics card, which generally worked well enough in my tests, though a newer graphics card might have helped some.

Since the Kinect is a fairly advanced sensor compared to a webcam, I figured it would yield the best results, but I had to order the adapter online. While waiting for that to arrive, we set about giving the still-photo stitching applications a go. I say “we” because I was being aided by my brother (and occasionally my roommate); as previously mentioned, I’m in a wheelchair, so I could do little more than operate the software and say things like “No, that didn’t work either.”

First was Autodesk’s 123D Catch, which takes between 20 and 50 photos of an object from various angles and compiles them in the cloud into a single 3D digital file. That sounds simple, but getting the distances, angles, and lighting right can be rather difficult; flashes cast strange shadows on certain geometries, and photos that differ too much from the rest get booted out (a rough way to check for that before uploading is sketched below). Essentially, for this program and most of the others, a makeshift photography studio is necessary to get good results. With several lights placed precariously on boxes, various settings tried on two cameras, and shots taken from so many angles that some may not even have been from this dimension, we never got good results. We did at least get results, though nothing I’d share here without fear of insulting the good people of Autodesk. The app works, and it works quickly; most of our uploads were processed within 20 minutes, which is a big plus coming from their (likely) dedicated servers. The problem is that the digital objects it creates are not crisp: straight edges came out wavy, and there were holes in areas that were surely photographed. If the light was right to capture the shape, it was too bright to capture the texture well. Still, this was one of the least frustrating apps to use, and it has lots of useful features, like showing us the position of every shot we took, and a manual stitching tool that didn’t help much but looks promising nonetheless. There are apparently people who get this to work, so I attribute much of our failure here to user error.
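Since rejected photos were our biggest headache, here’s a minimal sketch of a pre-upload sanity check. This is entirely my own idea, not anything 123D Catch exposes: it uses OpenCV feature matching to flag consecutive photos that share too little overlap to stitch. The folder name and both thresholds are placeholders you’d tune for your own camera and object.

```python
# Rough pre-upload overlap check between consecutive photos (my own idea,
# not part of 123D Catch). Pairs with few shared features are the shots
# most likely to get booted out by a photo stitcher.
import cv2
import glob

MIN_MATCHES = 60  # arbitrary threshold; tune for your camera and object

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

paths = sorted(glob.glob("photos/*.jpg"))  # placeholder folder
prev_des = None

for path in paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    kp, des = orb.detectAndCompute(img, None)
    if prev_des is not None and des is not None:
        matches = matcher.match(prev_des, des)
        good = [m for m in matches if m.distance < 40]  # arbitrary cutoff
        status = "ok" if len(good) >= MIN_MATCHES else "WEAK OVERLAP"
        print(f"{path}: {len(good)} matches with previous shot -> {status}")
    prev_des = des
```

If a pair comes back as weak overlap, the fix is usually to reshoot with a smaller change in angle between the two positions.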

The other cloud-based photograph stitcher we tried was My3DScanner. It works basically the same way as 123D Catch, but it is more particular about you being a professional photographer, and there are more limitations on what can be processed. We uploaded the same sets of photos to both sites, and 123D Catch consistently output better results; we also tried following the My3DScanner guidelines, to little avail. The bigger issue with this service is how slow it is: every upload took several hours to process into a point cloud, hinting at inefficient algorithms, minimal server power, or probably both.

We then moved on to the video-based DAVID 3D Scanner, which is enabled through free software, though the site also sells hardware kits at reasonable prices. For this method an HD webcam must be calibrated through the software using a physical dot-pattern background; again, a mini studio must be fashioned. Calibrating was pretty simple, as the software, while complex, is fairly easy to navigate and use. After the webcam was calibrated, we placed various objects in front of the patterned background and began scanning. This is where the line laser comes into play: holding the laser at a low angle so that the laser line is horizontal, you sweep the line slowly up and down the object. (For us, hitting the object with the thickest, brightest middle part of the line worked best, but the instructions say a thin, bright line is ideal, leading me to believe that brightness is more important than thickness, which is similar to how I feel about women.) The sweep gives the webcam a clear depth field, and the software actively displays what has been captured so you can see where the laser still needs to hit; a rough illustration of that line-detection step follows below. Operating the laser is similar to painting, if your canvas and brush were both mostly invisible.

After each scan, the captured image is saved, the object is rotated slightly, and the process is repeated until the object has been turned through almost a complete 360 degrees. Those 3D images are then stitched together along the overlap created by the slight rotations (each captured image covers around 90 degrees, so turning less than that creates the necessary overlap). The stitching can be done manually or with help from the program, and it is the most tedious part of the process, as well as the real barrier between you and your model. While the program is clearly powerful and rather intuitive (after reading the wiki), I’m not great at 3D modeling; I’m OK at it, but not adept enough to easily trim and align multiple 3D images into one clean model. I got it to work, but just barely, so again, not something I’d proudly display here. It’s easy to see that the program really can output high-quality 3D objects, but it will take someone better than I am to pull it off.
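To make the laser step concrete, here is a minimal sketch of how a horizontal laser line can be picked out of webcam frames. This is not DAVID’s actual code, just an illustration of the principle; it assumes a red laser, OpenCV, and that the webcam is device 0, and the brightness cutoff is an arbitrary placeholder.

```python
# Rough sketch of laser-line detection (not DAVID's code): for each column of
# the frame, find the row where the red laser is brightest. Sweeping the laser
# over the object traces out its profile column by column.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # assumes the HD webcam is device 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # The laser is red, so weight the red channel against green and blue
    # to suppress ambient room light.
    b, g, r = cv2.split(frame.astype(np.float32))
    laser = r - 0.5 * (g + b)

    rows = np.argmax(laser, axis=0)                    # brightest row in each column
    strength = laser[rows, np.arange(laser.shape[1])]  # how bright that pixel is

    overlay = frame.copy()
    for col, row in enumerate(rows):
        if strength[col] > 40:               # arbitrary brightness cutoff
            overlay[row, col] = (0, 255, 0)  # mark the detected laser line

    cv2.imshow("laser line", overlay)
    if cv2.waitKey(1) & 0xFF == 27:          # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Turning those detected pixels into actual depth is where the calibration comes in: once the software knows where the camera and the patterned background planes sit, it can triangulate each lit pixel into a 3D point.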

The power adapter for the Xbox Kinect finally arrived, so our attention shifted to playing with that. Getting the Kinect to even be recognized by the computer and to interface with the software is a trick in and of itself. After trying to install drivers and SDKs separately, in the end the Zigfu package did it most simply. Once the Kinect was properly recognized, I downloaded and installed SCENECT, an application that pulls 3D data from the Microsoft Kinect or the ASUS Xtion Pro Live sensor. Supposedly this software can scan both objects and scenes, but we never got it to see anything but a scene, which is not as useful. As far as scene grabbing goes, it performed all right, but I didn’t see anything worth writing home about; scale wasn’t maintained, and getting it to see areas near lights was nearly impossible. Initially I thought SCENECT was really neat, until I couldn’t get it to do more than scan macro scenes. Frankly, I don’t like this one, but I probably wasn’t doing it right.
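For what it’s worth, the data the Kinect hands these programs is just a depth image plus an RGB image streamed over USB. Here’s a minimal sketch of grabbing one of each with the open-source libfreenect Python bindings, an alternative driver stack to the Zigfu package I actually used; treat the details as assumptions rather than gospel.

```python
# Minimal sketch (not part of SCENECT or Zigfu): grab one depth frame and one
# RGB frame from an Xbox 360 Kinect using the libfreenect Python bindings.
import freenect

depth, _ = freenect.sync_get_depth()   # 480x640 array of raw 11-bit depth values
video, _ = freenect.sync_get_video()   # 480x640x3 RGB frame

# A raw value of 2047 means "no reading" (too close, too far, or hidden from
# the infrared projector), which is why bright or shiny areas come back empty.
valid = depth[depth < 2047]
print("depth image shape:", depth.shape)
print("pixels with a valid depth reading:", valid.size)
print("raw depth range:", int(valid.min()), "to", int(valid.max()))

freenect.sync_stop()                   # release the device
```

That invalid-pixel behavior lines up with what we saw: strong lighting washes out the Kinect’s infrared pattern, so areas near lights simply never get depth data, no matter which application sits on top.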

Tired and broken, we moved on to ReconstructMe, which uses the same two sensors as SCENECT. The free version of this software is limited, but it still works better than pretty much everything else we tried. Using it is as simple as waving the Kinect around an object while a clean 3D image builds up in real time. A usable 3D scanned file was ours! At least it would have been, had the Kinect power adapter not mysteriously stopped working after the second day… By then we’d had enough. We accomplished little and learned much.

We learned that cheap 3D scanning is hard. And frustrating. And ugh. It’s coming along, and it needs people like us playing with it and complaining about it in order to develop further. I’m doing my part, and I haven’t given up either. I’ll repeat all of this in a few months, and maybe I’ll manage the how-to then. Until then, please try all of this yourself and make me look dumb with your glorious results.

  • http://www.directdimensions.com Michael Raphael

Nice job with the article, Cameron. I’ve been 3D scanning for over 20 years and it’s still almost as hard as ever!

    • Cameron Naramore

      It’s good to hear that a veteran still finds it difficult.

    • Cameron Naramore

      For my own ego, that is. 🙂

  • http://www.directdimensions.com Michael Raphael

    Yes Cameron, and it keeps me humble too…

    Seriously – good job on the testing and the article. We should share some ideas one day.

    • Cameron Naramore

Sure, you can contact me through my linked Google+.

      Thank you for the kind words.

  • http://www.virtumake.com virtumake

Hi, I did some scanning with Kinect and DAVID scanners:

    https://www.youtube.com/user/virtumake/videos?flow=grid&view=0

“Episode 3” is about scanning people with Kinect scanners. “Backup of a physical object” shows the DAVID scanner in action.

    Best,
    Bernhard

    • Cameron Naramore

      Well done! I like your laptop sling. We were working with a desktop, so free movement was quite limited due to wires.

  • http://reconstructme.net Martin Ankerl

Thanks for the comparison! Sorry to hear that your power cable is broken. We prefer the Asus Xtion over the Kinect because it is a bit smaller and does not need a power cable, just USB.

    • Cameron Naramore

Microsoft makes a Kinect for Windows with a normal USB plug, but it costs twice as much as the Xbox version, which my friend already had, so I just borrowed it. Its plug is close to USB but not quite, so the adapter has to plug into the wall, the Kinect, and the computer, which isn’t conducive to mobility. I believe you that the Asus Xtion works better. Have you tried any eye tracking with it?

  • Stan

    Just finding this article now. Have you posted the results of round 2 of your efforts? I recall an article from MAKE magazine 2 or 3 years ago that showed a 3D scanner that used a rotating table on which you place the object, and then a laser would scan the surface up and down. You seem to have gone the camera/CCD sensor route. Does the MAKE concept work any better? Thanks!

    • Cameron Naramore

      I haven’t done round two yet. I tried a laser method as well, but without a lazy susan. I’m sure that would help. The Kinect 2 is supposed to be much better at this kind of thing, so I’ve kind of been waiting on that. The Photon [https://3dprinter.net/the-photon-a-truly-affordable-3d-scanner-for-anyone] uses lasers and a lazy susan, and it looks like the best option right now.
