Samsung: 16 simultaneous sensors capture a gigapixel of 3D

cnet samsung beyond cam

At a developer conference, Samsung debuted a virtual reality-capturing camera.

“Project Beyond” will take 3D footage for use with the company’s Gear VR headset. The puck-sized gadget has 16 high-definition cameras, and captures a gigapixel per second. Samsung says it uses “stereoscopic interleaved capture and 3D-aware stitching technology to capture the scene just like the human eye, but in a form factor that is extremely compact.”
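Samsung hasn't published per-camera specs, but the headline number pencils out if each of the 16 cameras is a 1080p sensor running at a typical 30 frames per second (both assumptions on our part, not confirmed figures). A quick sanity check:

```python
# Back-of-the-envelope check of the "gigapixel per second" claim.
# Assumed (not confirmed by Samsung): 1080p sensors running at 30 fps.
cameras = 16
width, height = 1920, 1080   # pixels per HD frame
fps = 30                     # assumed frame rate

pixels_per_second = cameras * width * height * fps
print(f"{pixels_per_second / 1e9:.2f} gigapixels per second")  # ~1.00
```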

Samsung adds that the system is not yet a product, but that it is showing “the first operational version of the device, and just a taste of what the final system we are working on will be capable of.”

There’s a demonstration video here.

CNET and TechCrunch have more.


HP unveils 3D printing and immersive computing

sprout

While 3D viewing (stereoscopic imaging that emulates our eyesight) and 3D printing (an inkjet-like manufacturing process) really mean very different things, HP rolled out new products in each arena simultaneously as part of its new “Blended Reality ecosystem.”

The HP Multi Jet Fusion hardware “delivers on the potential of 3D printing,” and the Sprout “immersive computing platform” redefines the PC user experience “and creates a foundation for future immersive technologies,” the company says. “We are on the cusp of a transformative era in computing and printing…  enabling us to express ourselves at the speed of thought — without filters, without limitations.”

hp multijet 3d printer

The Multi Jet Fusion provides better quality, increased productivity, and breakthrough economics compared to existing solutions, HP claims, with a “synchronous architecture that significantly improves the commercial viability of 3D printing and has the potential to change the way we think about manufacturing.” It’s 10 times faster, HP adds, and its proprietary multi-agent printing process uses HP Thermal Inkjet arrays to apply multiple liquid agents simultaneously, producing best-in-class quality that combines greater accuracy, resiliency, and uniform part strength in all three axis directions.

Of course, HP “has been an industry leader in 2D printing for 30 years,” the company notes, and “Now, we are bringing our expertise to bear in 3D printing, leveraging all of our investments and intellectual property to develop tools that can enable the next industrial revolution.”

Sprout “reimagines computing”

sprout

Yes, Sprout is a funky name for a desktop all-in-one Windows 8 PC. But HP says it “combines the power of an advanced desktop computer with an immersive, natural user interface to create a new computing experience.” It has a scanner, a depth sensor, and a projector in a single device, letting you “take physical items and seamlessly merge them into a digital workspace,” because “people have always created with their hands.” The “Illuminator” projection system scans and captures real-world objects in 3D, allowing the user to immediately interact and create: there’s a 23-inch LCD primary display up top, and a 20-inch capacitive pad on the bottom, under the camera and projector.

There’s a demo of the system in use here.

It sells for $1,889 here.

 

Objects in photos transform into movable 3D

photo to 3d CU

A new imaging technique will let you select an item in a photo — from a small chair to a large taxi — convert it into 3D, and reposition it in the original image as desired.

Developed at Carnegie Mellon and the University of California, the technique taps into libraries of stock 3D models. You simply alter the model to better fit the image, and then voilà: more photo manipulation than you otherwise thought possible.

“We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph,” the researchers say. Also, “as 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object.”
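The paper itself covers the hard parts (aligning the stock model to the photo, then completing the hidden geometry and texture), but the manipulation step reduces to standard 3D math: apply a transform to the aligned model’s vertices and re-render into the scene. A minimal sketch of that step; the matrix and toy vertex here are hypothetical:

```python
import numpy as np

# Apply a 4x4 transform (rotation here; scaling and translation work the
# same way) to a model's vertices in homogeneous coordinates.
def transform(vertices, matrix):
    """vertices: (N, 3) array; matrix: 4x4 transform."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ matrix.T)[:, :3]

# Rotate a toy vertex 90 degrees about the vertical (y) axis
theta = np.pi / 2
rot_y = np.array([[ np.cos(theta), 0, np.sin(theta), 0],
                  [ 0,             1, 0,             0],
                  [-np.sin(theta), 0, np.cos(theta), 0],
                  [ 0,             0, 0,             1]])
chair_corner = np.array([[0.5, 0.0, 0.2]])
print(transform(chair_corner, rot_y))  # ~[0.2, 0.0, -0.5]
```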

You can read more here, or watch a video demonstration and explanation here.

photo to 3d

Pelican demos array camera’s 3D captures

pelican array sensor

A new sensor design with computational imaging captures the complete depth information of the scene, claims Pelican Imaging, “allowing users to refocus after the fact and perform an unprecedented range of edits.”

Like the lightfield camera from Lytro, the new camera will let you “focus on any subject, change focus (even on multiple subjects) after you take the photo, capture linear measurements, scale and segment your images, change backgrounds, and apply filters,” the company says.

The difference: “all from any device.” Pelican’s camera system will work in compacts and even phones. The super-thin mobile array camera is less than 3mm thick, “about 50 percent thinner than best-in-class current smartphone cameras,” the company says. “It is the first mobile plenoptic camera to capture video, 30 fps at 1080p resolution, and still images at approximately 8 megapixels, with excellent image quality.”

Also, with no autofocus mechanism or other moving parts, “every scene is captured in complete focus.”
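Pelican hasn’t published its algorithms, but the geometry behind getting depth from a camera array is classic triangulation: the same point shifts slightly between sub-cameras, and that disparity encodes distance. A minimal sketch, with hypothetical focal-length and baseline values:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic stereo relation: depth = f * B / d.
    focal_px: focal length in pixels; baseline_m: spacing between two
    sub-cameras in meters; disparity_px: pixel shift between the views."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers for a millimeter-scale array-camera baseline:
print(depth_from_disparity(focal_px=1400, baseline_m=0.003,
                           disparity_px=4.2))  # ~1.0 m
```

The tiny baseline is part of why such a module can stay thin: depth comes from comparing fixed views rather than from moving a lens.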

There’s more information here, along with a “Life in 3D” video that features Pelican Imaging CEO Chris Pickett explaining “Depth-Based Photography.”

 

A camera and two lasers: MakerBot makes it easier to scan objects

MakerBot Digitizer

It’s not quite an exact copier like something out of Star Trek, but the MakerBot Digitizer takes a real-life object, scans it using a camera (a simple 1-megapixel CMOS sensor) and two lasers, and creates a 3D digital file – without any need for design or 3D software experience.
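MakerBot hasn’t detailed the math, but camera-plus-laser scanners of this type generally work by triangulation: the laser stripe’s sideways position in the camera image encodes how far away the surface is. A hedged sketch with a hypothetical rig geometry:

```python
import math

# Laser-line triangulation (a sketch, not MakerBot's actual code).
# The camera looks along +z; the laser sits a known baseline b to the
# side and shines back toward the camera axis at angle theta.
def triangulate_depth(u_px: float, focal_px: float,
                      baseline_m: float, laser_angle_rad: float) -> float:
    """Depth where the camera ray through pixel offset u_px meets the
    laser ray: z = b * f / (u + f * tan(theta))."""
    return baseline_m * focal_px / (u_px + focal_px * math.tan(laser_angle_rad))

# Hypothetical rig: 5 cm baseline, 800 px focal length, 20-degree laser
print(triangulate_depth(u_px=120.0, focal_px=800.0,
                        baseline_m=0.05,
                        laser_angle_rad=math.radians(20)))  # ~0.097 m
```

Repeat that measurement as the object rotates past the stripe, and the depth samples accumulate into a full 3D model.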

With the 3D data, MakerBot’s Replicator or other “3D printers” can output solid simulations to “create artworks (sculptures and figurines) as well as memorializing keepsakes and archiving,” the company says.

The $1,400 desktop model is limited in what it can accommodate: The Digitizer handles physical objects up to 8 inches in diameter and 8 inches tall, weighing up to 6.6 pounds.

More information is here.

Photo manipulation steps into the third dimension

3-sweep1

Don’t know about you, but I was plenty amazed by the content-aware fill technique as first offered by Adobe: easily select an object in a photo, such as a horse in a field, and seamlessly move it about the scene as if you were picking up and moving the horse before exposing the shot — and new background imagery is magically generated to fill in the space where the horse had been.

Impressive, yes — and now new research makes it look old hat, as scientists at the Interdisciplinary Center and Tel Aviv University in Israel and Tsinghua University in Beijing have moved the cut-and-paste into the third dimension.

“3-Sweep” lets you select an object in a photo, and turn it into a 3D model that you can pivot, rotate, and move about the scene.

(It’s key to note that this works from a single shot, as various techniques for making 3D from multiple shots taken from different angles have been around for ages.)

In the demonstration video, the photo-realistic new object models are even stretched, enlarged, or otherwise altered; new arms on a candelabra, for example, are pasted into place.

3-sweep 2

The scientists call it “an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph.” The extraction requires understanding the components of the shape, their projections, and relations, they add. “These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem” — meaning as you select and draw over the object, the algorithms are better able to recognize what you indicate and snap a selection into place. “In our interface, three strokes are used to generate a 3D component that snaps to the shape’s outline in the photograph, where each stroke defines one dimension of the component… We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple.”
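The primitive those three strokes produce is a generalized cylinder: a cross-section profile swept along an axis. A toy version of the sweep step (nothing like the authors’ actual snapping code, which fits the result to the photo’s outlines):

```python
import math

# Sweep a circle of varying radius along an axis polyline to produce the
# vertex rings of a generalized cylinder (toy version; vertical axis only).
def sweep_cylinder(axis_points, radii, segments=16):
    """axis_points: (x, y, z) tuples along the sweep stroke.
    radii: one radius per axis point, from the cross-section strokes."""
    rings = []
    for (x, y, z), r in zip(axis_points, radii):
        ring = [(x + r * math.cos(2 * math.pi * k / segments),
                 y + r * math.sin(2 * math.pi * k / segments),
                 z)
                for k in range(segments)]
        rings.append(ring)
    return rings

# A vase-like shape: straight axis, radius swelling then tapering
axis = [(0.0, 0.0, float(z)) for z in range(5)]
radii = [1.0, 1.4, 1.6, 1.2, 0.8]
rings = sweep_cylinder(axis, radii)
print(len(rings), "rings of", len(rings[0]), "vertices")  # 5 rings of 16
```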

———————————————

disney 3d

The 3D models the system creates are admittedly rudimentary, with basic geometry given an appearance of detail by the photo texture.

A more detailed model can be built with an algorithm created by scientists at Disney Research in Zurich.

Announced earlier this summer, the Disney “lightfields” technique requires hundreds of images that capture the scene from a variety of vantage points to “build 3D computer models of complex, real-life scenes that meet the increasing demands of today’s movie, TV and game producers for high-resolution imagery. Three-dimensional models have become increasingly important for digitizing, visualizing and archiving the real world. In movie production, for instance, creating accurate 3D models of movie sets is often necessary for post-production tasks such as integrating real-world imagery with computer-generated effects.”

disney research

Disney scientists add that applications for the method extend beyond 3D and “could be used for applications such as automatic image segmentation, which would simplify background removal in detailed scenes. It also would be useful for image-based rendering, in which new 2D images are created by combining real images.”

Also, Disney says, many 3D models now are obtained using laser scanning. “In complex, cluttered environments, however, a single laser scan misses a lot of detail because objects in the foreground can block the laser’s view. Photography makes it easier to capture the scene from multiple viewpoints, revealing details that otherwise would be blocked from a single point of view.”

As Disney says, “building 3D models from multiple 2D images captured from a variety of viewing positions is nothing new, but doing so for highly detailed or cluttered environments at high resolution has proved difficult because of the large amounts of data involved.” Disney’s new algorithm efficiently processes these amounts of data. The method uses “the ample variation of the scene’s appearance to calculate depth estimates for individual pixels, rather than patches of pixels. The depth calculations work best at the edges of objects, producing precise silhouettes.”
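The paper’s pipeline is far more sophisticated, but the core idea of per-pixel depth from many views can be sketched simply: for each candidate depth (expressed here as a per-camera pixel shift), gather the corresponding pixel from every view and keep the candidate where the colors agree best. A minimal, assumption-laden sketch for a linear camera array:

```python
import numpy as np

# Photo-consistency depth search for one pixel (toy version of per-pixel
# depth estimation; assumes equally spaced cameras along a line).
def best_disparity(images, x, y, candidates):
    """images: list of 2D grayscale arrays, one per camera."""
    scores = []
    for d in candidates:
        # the same scene point appears shifted by d pixels per camera step
        samples = [img[y, x + d * i] for i, img in enumerate(images)]
        scores.append(np.var(samples))  # low variance = views agree
    return candidates[int(np.argmin(scores))]

# Synthetic check: a bright dot that shifts 2 px per camera
views = []
for i in range(5):
    img = np.zeros((10, 40))
    img[5, 20 + 2 * i] = 1.0
    views.append(img)
print(best_disparity(views, x=20, y=5, candidates=[0, 1, 2, 3]))  # -> 2
```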

More information is here.

 

Camera captures 3D from a kilometer away

OpEx_ 3d

This is going a long way past a telephoto lens:
To get 3D information such as the distance to a far-away object, scientists today bounce a laser beam off the object and measure how long it takes the light to travel back to a detector. The technique, called time-of-flight, is used in machine vision, navigation systems for autonomous vehicles, and other applications — but most have a relatively short range and struggle to image objects that do not reflect laser light well.

That’s according to a team of Scotland-based physicists, who say they’ve tackled these limitations with a system that can gather high-resolution 3D information about objects that are typically very difficult to image, from up to a kilometer away.

Developed at Heriot-Watt University in Edinburgh, Scotland, the new system works by sweeping a low-power infrared laser beam rapidly over an object. It then records, pixel by pixel, the round-trip flight time of the photons in the beam as they bounce off the object and arrive back at the source.

The system can resolve depth on the millimeter scale over long distances, using a detector that counts individual photons.
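The arithmetic behind time-of-flight shows why photon-level timing matters: light covers the path out and back, so distance is c times the round trip divided by two, and millimeter depth resolution means resolving picoseconds:

```python
# Time-of-flight basics: distance = c * t / 2 (out and back).
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2

# A target ~1 km away returns photons after roughly 6.7 microseconds...
print(distance_m(6.67e-6))   # ~1000 m
# ...and 1 mm of depth resolution corresponds to ~6.7 picoseconds of timing
print(2 * 0.001 / C)         # ~6.7e-12 s
```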

Other approaches have better depth resolution, but the new system’s ability to image objects that do not easily reflect laser pulses, such as items of clothing, makes it useful in a wider variety of field situations, say the researchers. “Our approach gives a low-power route to the depth imaging of ordinary, small targets at very long range. This single-photon counting approach gives a unique trade-off between depth resolution, range, data-acquisition time, and laser-power levels.”

The primary use of the system is likely to be scanning static, man-made targets, such as vehicles. With some modifications to the image-processing software, it could also determine their speed and direction.

The scanner is particularly good at identifying objects hidden behind clutter, such as foliage. However, it cannot render human faces, instead drawing them as dark, featureless areas. This is because at the long wavelength used by the system, human skin does not reflect back a large enough number of photons to obtain a depth measurement.

The system is not maxed out: it could someday scan and image objects located as far as 10 kilometers away, and be miniaturized and ruggedized. “A lightweight, fully portable scanning depth imager is possible and could be a product in less than five years.”

More information is here.

 

Matterport captures everything in 3D

matterport

What we’ve all been calling 3D cameras for the last few years are really “stereoscopic” — not 3D. That is, they capture two side-by-side images, just like our two eyes, and yield an image with more apparent depth than a typical flat 2D photo.

But while you can see a little bit more of one object or person in a scene by pivoting the viewpoint a little, it’s not like you can turn the whole thing around and see it from the other side. No, for that you need a full three-dimensional capture.

Real 3D photography has meant one of two things: a simple camera used to take dozens of shots around an object from all angles, which are then combined on a computer into a manipulatable onscreen 3D object; or large and expensive cameras and laser scanners that capture an entire environment with full depth.

Now, new startups are promising handheld cameras that will provide the best of both worlds. Last month we reported on Austin, Texas-based Lynx developing a tablet-like 3D camera system. Now comes news of Mountain View, CA-based Matterport capturing full interior spaces with its device. And while Lynx is raising money on Kickstarter, Matterport has received $5.6 million in venture financing.

matterport 2

Matterport says it will help consumers and businesses create accurate, photo-realistic 3D models quickly, easily, and automatically with its 3D camera. Also, its interactive viewing platform will let you see the models and indoor spaces from a web browser. Matterport says the captured images are converted to 3D models that viewers can walk through. “You control what you want to see and where you want to go. You have the option of looking at the space in an aerial mode we call ‘dollhouse’ and a layout mode we call ‘floorplan.’” Also, the 3D capture system measures rooms and objects, and creates a video of the 3D model, “giving you the same flythrough effect you enjoy in video games and movies.”

While Lynx highlights its speed, Matterport says capturing a comprehensive 3D model of a furnished 1,500-square-foot space takes one to two hours; an empty space of that size takes 45 minutes.

Matterport’s camera is designed for use indoors, and does not capture small objects or moving people.

The final price isn’t set, but “it will be in the range of what you would invest in a digital SLR camera,” the company says. “The camera unit will be about the size of a lunchbox and weighs about five pounds.”

More information is here.

 

3D scanning made simple

makerbot scan

“3D printing” is a confusing term for the increasingly popular technique of using inkjet-like devices to create solid objects from a variety of materials by laying down a thin layer at a time until the full form emerges.

But before you could “print” something solid, you needed a 3D model for the device to work from — a computer file showing the underlying geometry and, in some cases, the surface texture. [See here.] Now 3D printer pioneer MakerBot is expanding from the output side to also address the input part of the equation with a prototype of its Digitizer Desktop 3D Scanner.

Using lasers and cameras, “the MakerBot Digitizer is an innovative new way to take a physical object, scan it, and create a digital file,” the company says, letting anyone “without any design, CAD software, or 3D modeling experience at all” make a 3D model — “and then print the item again and again on a MakerBot Replicator.”

3D scanners aren’t new, of course — but they have been pricey industrial devices. While pricing wasn’t announced, MakerBot says its hardware will be aimed at consumers.

“The Digitizer is a great tool for archiving, prototyping, replicating, and digitizing prototypes, models, parts, artifacts, artwork, sculptures, clay figures, jewelry, etc.” MakerBot says. “If something gets broken, you can print it again.”

Brooklyn-based MakerBot announced its first 3D printer four years ago.


On the DIMAcast — Photogrammetry: 3D from photographs

New DIMAcast 2.0 logo

rustclad 3d world 2

Most of us use photography to capture the real world. Now artists are using it to create a fantasy world based on pictures of real objects.

Photogrammetry is used in engineering, map-making, architecture and other fields to determine geometry from a photo. Now game developers can take multiple shots of an object at differing angles, and then use the free 123Sketch software to produce a textured 3D object. When the object is placed in a game, the player can virtually walk around and see it from all sides.

In this episode of the DIMAcast, Skull Theater developer Jeff Isselee explains why he chose the technique, and what camera and studio considerations affect the art.

A video preview of the 3D art is here.

Click here to download the show, or use the player below.