Camera chip captures hi-res 3D

3d penny

In that picture: a penny, captured with micrometer resolution from about 1.5 feet away, with its height variations recorded as 3D data.

Researchers at the California Institute of Technology say you might someday “pull your smartphone out of your pocket, take a snapshot with its integrated 3D imager, send it to your 3D printer, and within minutes you have reproduced a replica accurate to within microns of the original object.”

Caltech says its “new imaging technology fits on a tiny chip and, from a distance, can form a high-resolution three-dimensional image of an object on the scale of micrometers.”

The “nanophotonic coherent imager” (NCI) uses an inexpensive silicon chip less than a millimeter square. It works on the LIDAR principle: a target object is illuminated with scanning laser beams, the light that reflects off the object is analyzed against the wavelength of the laser light used, and the system gathers information about the object’s size and its distance from the laser to create an image of its surroundings. “In a regular camera, each pixel represents the intensity of the light received from a specific point in the image, which could be near or far from the camera–meaning that the pixels provide no information about the relative distance of the object from the camera,” the university adds. “In contrast, each pixel in an image created by the Caltech team’s NCI provides both the distance and intensity information. Each pixel on the chip is an independent interferometer (an instrument that uses the interference of light waves to make precise measurements) which detects the phase and frequency of the signal in addition to the intensity.”
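Because each NCI pixel measures phase coherently, depth falls out of a simple relation. Here's a minimal illustrative sketch (not Caltech's actual pipeline, and the 1550 nm laser wavelength is our hypothetical choice) of how a round-trip phase shift maps to a sub-wavelength distance:

```python
# Illustrative only: how a coherent detector's measured phase shift
# implies a distance. For round-trip travel, phase = 4*pi*d / wavelength,
# so the recoverable distance is ambiguous modulo half a wavelength.

import math

WAVELENGTH_M = 1.55e-6  # hypothetical infrared laser wavelength (1550 nm)

def distance_from_phase(phase_rad: float) -> float:
    """Sub-wavelength distance implied by a round-trip phase shift."""
    return (phase_rad % (2 * math.pi)) * WAVELENGTH_M / (4 * math.pi)

# A phase shift of pi corresponds to a quarter wavelength of depth:
print(distance_from_phase(math.pi))  # 3.875e-07 m, i.e. ~0.39 micrometers
```

The half-wavelength ambiguity is why phase alone gives relative, micrometer-scale depth; combining phase with frequency (as the interferometric pixels do) resolves absolute distance.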

The full article is here.

Drones photograph buildings for making 3D-printed models



3D printing is big. 3D scanning — taking multiple photos to produce 3D geometry — is big. And hey, of course aerial drone photography is big.

So how big is a new combination of these three types of tech?

SkyeCam Productions says it uses aerial photos to create a 3D model of your home. They capture several hundred high-resolution aerial shots, which are stitched and rendered together into “a full 3D representation that can then be sent through our 3D printer.” The dual-extruder printer produces two colors at once, and model houses can be built up to 9 x 6 x 6 inches — or “even bigger by printing parts and gluing them together.”

The result is “a custom and innovative way to showcase your home… A 3D printed representation is an accurate portrayal that you can hold and cherish for years to come.”

The Baltimore-based company says its drones have independently stabilized cameras, and “are more efficient and significantly less expensive than helicopter-based aerial photographers, and we can vary our altitude as low as four feet to as high as 1000+ feet. Most of the time, when taking aerial photos of a house, we only need to fly at 100 to 200 feet, an altitude where helicopters can’t even fly safely.” The company says it’s used drones for “photographing properties from angles that are physically impossible to achieve with ground-based photography, capturing entire landscapes in one aerial photograph, and even surveying the roofs of houses.”

There’s more information here.

Samsung: 16 simultaneous sensors capture a gigapixel of 3D

cnet samsung beyond cam

At a developer conference, Samsung debuted a virtual reality-capturing camera.

“Project Beyond” will take 3D footage for use with the company’s Gear VR headset. The puck-sized gadget has 16 high-definition cameras, and captures a gigapixel per second. Samsung says it uses “stereoscopic interleaved capture and 3D-aware stitching technology to capture the scene just like the human eye, but in a form factor that is extremely compact.”
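The gigapixel-per-second figure checks out with back-of-envelope arithmetic, if we assume (our assumption; Samsung doesn't state the per-camera specs) that each of the 16 cameras records 1080p video at 30 frames per second:

```python
# Sanity check of Samsung's "gigapixel per second" claim, assuming
# each camera shoots 1080p at 30 fps (assumed, not confirmed specs).

cameras = 16
pixels_per_frame = 1920 * 1080   # one 1080p frame
fps = 30

pixels_per_second = cameras * pixels_per_frame * fps
print(pixels_per_second)         # 995328000
print(pixels_per_second / 1e9)   # ~0.995 gigapixels per second
```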

Samsung adds that the system is not yet a product, but they are showing “the first operational version of the device, and just a taste of what the final system we are working on will be capable of.”

There’s a demonstration video here.

CNET and TechCrunch have more.



HP unveils 3D printing and immersive computing


While 3D viewing (stereoscopic imaging that emulates our eyesight) and 3D printing (an inkjet-like manufacturing process) really mean very different things, HP rolled out new products in each arena simultaneously as part of its new “Blended Reality ecosystem.”

The HP Multi Jet Fusion hardware “delivers on the potential of 3D printing,” and the Sprout “immersive computing platform” redefines the PC user experience “and creates a foundation for future immersive technologies,” the company says. “We are on the cusp of a transformative era in computing and printing…  enabling us to express ourselves at the speed of thought — without filters, without limitations.”

hp multijet 3d printer

The Multi Jet Fusion provides better quality, increased productivity, and breakthrough economics compared to existing solutions, HP claims, with a “synchronous architecture that significantly improves the commercial viability of 3D printing and has the potential to change the way we think about manufacturing.” It’s ten times faster, and its proprietary multi-agent printing process uses HP Thermal Inkjet arrays to simultaneously apply multiple liquid agents, producing best-in-class quality that combines greater accuracy, resiliency, and uniform part strength along all three axes, HP adds.

Of course, HP “has been an industry leader in 2D printing for 30 years,” the company notes, and “Now, we are bringing our expertise to bear in 3D printing, leveraging all of our investments and intellectual property to develop tools that can enable the next industrial revolution.”

Sprout “Reimagines computing”


Yes, Sprout is a funky name for a desktop all-in-one Windows 8 PC. But HP says it “combines the power of an advanced desktop computer with an immersive, natural user interface to create a new computing experience.” It has a scanner, depth sensor, and a projector in a single device, to let you “take physical items and seamlessly merge them into a digital workspace,” as “people have always created with their hands.” The “Illuminator” projection system scans and captures real-world objects in 3D, allowing the user to immediately interact and create: there’s a 23-inch LCD primary display up top, and 20-inch capacitive pad on the bottom, under the camera and projector.

There’s a demo of the system in use here.

It sells for $1,889 here.


Objects in photos transform into movable 3D

photo to 3d CU

A new imaging technique will let you select an item in a photo — from a small chair to a large taxi — convert it into 3D, and reposition it in the original image as desired.

Developed at Carnegie Mellon and the University of California, the technique taps into libraries of stock 3D models. You simply alter the model to better fit the image, and then voilà: more photo manipulation than you thought possible.

“We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph,” the researchers say. Also, “as 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object.”

You can read more here, or watch a video demonstration and explanation here.

photo to 3d

Pelican demos array camera’s 3D captures

pelican array sensor

A new sensor design with computational imaging captures the complete depth information of the scene, claims Pelican Imaging, “allowing users to refocus after the fact and perform an unprecedented range of edits.”

Like the lightfield camera from Lytro, the new camera will let you “focus on any subject, change focus (even on multiple subjects) after you take the photo, capture linear measurements, scale and segment your images, change backgrounds, and apply filters,” the company says.

The difference: “all from any device.” Pelican’s camera system will work in compacts and even phones. The super-thin mobile array camera is less than 3mm thick, “about 50 percent thinner than best-in-class current smartphone cameras,” the company says. “It is the first mobile plenoptic camera to capture video, 30 fps at 1080p resolution, and still images at approximately 8 megapixels, with excellent image quality.”

Also, with no autofocus mechanism or other moving parts, “every scene is captured in complete focus.”

There’s more information here, along with a “Life in 3D” video that features Pelican Imaging CEO Chris Pickett explaining “Depth-Based Photography.”


A camera and two lasers: MakerBot makes it easier to scan objects

MakerBot Digitizer

It’s not quite an exact copier like something out of Star Trek, but the MakerBot Digitizer takes a real-life object, scans it using a camera (a simple 1-megapixel CMOS sensor) and two lasers, and creates a 3D digital file – without any need for design or 3D software experience.

With the 3D data, MakerBot’s Replicator or other “3D printers” can output solid simulations to “create artworks (sculptures and figurines) as well as memorializing keepsakes and archiving,” the company says.

The $1,400 desktop model is limited in what it can accommodate: The Digitizer handles physical objects up to 8 inches in diameter and 8 inches tall, weighing up to 6.6 pounds.

More information is here.

Photo manipulation steps into the third dimension



Don’t know about you, but I was plenty amazed by the Content-Aware Fill technique as first offered by Adobe: easily select an object in a photo, such as a horse in a field, and seamlessly move it about the scene as if you were picking up and moving the horse before exposing the shot — and new background imagery is magically generated to fill in the space where the horse had been.

Impressive, yes — and now new research makes it look old hat, as scientists at the Interdisciplinary Center and Tel Aviv University in Israel and Tsinghua University in Beijing have moved cut-and-paste into the third dimension.

“3-Sweep” lets you select an object in a photo, and turn it into a 3D model that you can pivot, rotate, and move about the scene.

(It’s key to note that this works from a single shot, as various techniques for making 3D from multiple shots taken from different angles have been around for ages.)

In the demonstration video, the photo-realistic new object models are even stretched, enlarged, or otherwise altered; for example, new arms on a candelabra are pasted into place.

3-sweep 2

The scientists call it “an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph.” The extraction requires understanding the components of the shape, their projections, and relations, they add. “These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem” — meaning as you select and draw over the object, the algorithms are better able to recognize what you indicate and snap a selection into place. “In our interface, three strokes are used to generate a 3D component that snaps to the shape’s outline in the photograph, where each stroke defines one dimension of the component… We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple.”


disney 3d

The 3D models the system creates are admittedly rudimentary, with basic geometry given an appearance of detail by the photo texture.

A more detailed model comes from an algorithm created by scientists at Disney Research in Zurich.

Announced earlier this summer, the Disney “lightfields” technique requires hundreds of images that capture the scene from a variety of vantage points to “build 3D computer models of complex, real-life scenes that meet the increasing demands of today’s movie, TV and game producers for high-resolution imagery. Three-dimensional models have become increasingly important for digitizing, visualizing and archiving the real world. In movie production, for instance, creating accurate 3D models of movie sets is often necessary for post-production tasks such as integrating real-world imagery with computer-generated effects.”

disney research

Disney scientists add that applications for the method extend beyond 3D, and it “could be used for applications such as automatic image segmentation, which would simplify background removal in detailed scenes. It also would be useful for image-based rendering, in which new 2D images are created by combining real images.”

Also, Disney says, many 3D models now are obtained using laser scanning. “In complex, cluttered environments, however, a single laser scan misses a lot of detail because objects in the foreground can block the laser’s view. Photography makes it easier to capture the scene from multiple viewpoints, revealing details that otherwise would be blocked from a single point of view.”

As Disney says, “building 3D models from multiple 2D images captured from a variety of viewing positions is nothing new, but doing so for highly detailed or cluttered environments at high resolution has proved difficult because of the large amounts of data involved.” Disney’s new algorithm efficiently processes these amounts of data. The method uses “the ample variation of the scene’s appearance to calculate depth estimates for individual pixels, rather than patches of pixels. The depth calculations work best at the edges of objects, producing precise silhouettes.”
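Per-pixel depth estimation from multiple views ultimately rests on the standard stereo relation. Here's a generic sketch (not Disney's algorithm; the numbers are hypothetical) of how a pixel shift between two views becomes a depth:

```python
# Standard stereo/multi-view relation (generic illustration, not Disney's
# code): a point that shifts by `disparity_px` pixels between two views
# whose cameras sit `baseline_m` meters apart lies at
# depth = focal_length * baseline / disparity.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in meters from pixel disparity between two calibrated views."""
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, cameras 10 cm apart, 25 px disparity:
print(depth_from_disparity(1000, 0.10, 25))  # 4.0 meters
```

Estimating that disparity reliably for every individual pixel, rather than for patches, is the hard part Disney's algorithm addresses — which is why it works best at object edges, where the scene's appearance varies most.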

More information is here.


Camera captures 3D from a kilometer away

OpEx_ 3d

This is going a long way past a telephoto lens:
To get 3D information such as the distance to a far-away object, scientists today bounce a laser beam off the object and measure how long it takes the light to travel back to a detector. The technique, called time-of-flight, is used in machine vision, navigation systems for autonomous vehicles, and other applications — but most have a relatively short range and struggle to image objects that do not reflect laser light well.

That’s according to a team of Scotland-based physicists — who say they’ve tackled these limitations with a system that can gather high-resolution 3D information about objects that are typically very difficult to image, from up to a kilometer away.

The new system, developed at Heriot-Watt University in Edinburgh, Scotland, works by sweeping a low-power infrared laser beam rapidly over an object. It then records, pixel by pixel, the round-trip flight time of the photons in the beam as they bounce off the object and arrive back at the source.

The system can resolve depth on the millimeter scale over long distances using a detector that counts individual photons.
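The time-of-flight arithmetic itself is simple (this sketch is illustrative, not the Heriot-Watt software): distance is half the round-trip travel time multiplied by the speed of light, which is why millimeter depth resolution demands timing photon arrivals to a few picoseconds.

```python
# Time-of-flight in a nutshell: target distance is half the round-trip
# photon flight time times the speed of light.

C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s: float) -> float:
    """Target distance from a round-trip photon flight time."""
    return C * round_trip_s / 2

# A target 1 km away gives a round trip of about 6.67 microseconds;
# a 1 mm depth step changes the round trip by only ~6.7 picoseconds.
print(distance_m(6.671e-6))  # ~1000 meters
```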

Other approaches have better depth resolution, but the new system’s ability to image objects, such as items of clothing, that do not easily reflect laser pulses makes it useful in a wider variety of field situations, say the researchers. “Our approach gives a low-power route to the depth imaging of ordinary, small targets at very long range. This single-photon counting approach gives a unique trade-off between depth resolution, range, data-acquisition time, and laser-power levels.”

The primary use of the system is likely to be scanning static, man-made targets, such as vehicles. With some modifications to the image-processing software, it could also determine their speed and direction.

The scanner is particularly good at identifying objects hidden behind clutter, such as foliage. However, it cannot render human faces, instead drawing them as dark, featureless areas. This is because at the long wavelength used by the system, human skin does not reflect back a large enough number of photons to obtain a depth measurement.

The system is not maxed out: it could someday scan and image objects located as far as 10 kilometers away, and be miniaturized and ruggedized. “A lightweight, fully portable scanning depth imager is possible and could be a product in less than five years.”

More information is here.


Matterport captures everything in 3D




What we’ve all been calling 3D cameras for the last few years are really “stereoscopic” — not 3D. That is, they capture two side-by-side images, just like our two eyes, and yield an image with more apparent depth than a typical flat 2D photo.

But while you can see a little bit more of one object or person in a scene by pivoting the viewpoint a little, it’s not like you can turn the whole thing around and see it from the other side. No, for that you need a full three-dimensional capture.

Real 3D photography has meant one of two things: either a simple camera used to take dozens of shots around an object from all angles, with the multiple shots combined on a computer into a manipulatable onscreen 3D object — or large and expensive cameras or laser scanners that capture an entire environment with full depth.

Now, new startups are promising handheld cameras that will provide the best of both worlds. Last month we reported on Austin, Texas-based Lynx, which is developing a tablet-like 3D camera system. Now comes news of Mountain View, California-based Matterport capturing full interior spaces with its device. And while Lynx is raising money on Kickstarter, Matterport has received $5.6 million in venture financing.

matterport 2

Matterport says it will help consumers and businesses create accurate, photo-realistic 3D models quickly, easily, and automatically with its 3D camera. Its interactive viewing platform will let you see the models and indoor spaces from a web browser. Matterport says the captured images are converted to 3D models that viewers can walk through. “You control what you want to see and where you want to go. You have the option of looking at the space in an aerial mode we call ‘dollhouse’ and a layout mode we call ‘floorplan.’” Also, the 3D capture system measures rooms and objects, and creates a video of the 3D model, “giving you the same flythrough effect you enjoy in video games and movies.”

While Lynx highlights its speed, Matterport says capturing a comprehensive 3D model of a furnished 1,500-square-foot space takes one to two hours; an empty space of that size takes 45 minutes.

Matterport’s camera is designed for use indoors, and does not capture small objects or moving people.

The final price isn’t set, but “it will be in the range of what you would invest in a digital SLR camera,” the company says. “The camera unit will be about the size of a lunchbox and weighs about five pounds.”

More information is here.