Silicon that sees and learns

Synopsys

Chip developer Synopsys “showed off a new image-processor core tailored for deep learning,” reports MIT Technology Review, one that may “add a degree of visual intelligence to many kinds of devices, from phones to cheap security cameras.”

The tech may be added to chips for smartphones, cameras, and cars, where it could recognize speed-limit signs, for example. Thanks to deep learning, it can be trained to recognize faces for video surveillance while using significantly less power than conventional chips.

Here is the full article.

Nikon snaps shutter based on dog’s heartbeat

nikon dog

“See what happens when emotions are turned into photographs,” Nikon says. “With Heartography, anyone with a heartbeat can be a photographer.”

“Anyone” in this case being “a dog.”

At least, that’s the first demo of a system that reads heart activity to detect excitement or other intense emotions, and uses that information to snap a photo.
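
Nikon hasn’t published the trigger logic, but the idea is easy to sketch: watch for a heart-rate spike above a rolling resting baseline and fire the shutter. Here’s a minimal Python version, with hypothetical read_heart_rate() and snap_photo() hooks standing in for the sensor and camera hardware:

```python
import time

BASELINE_WINDOW = 30    # recent samples used to estimate the resting rate
SPIKE_FACTOR = 1.25     # trigger when the rate exceeds 125% of baseline
COOLDOWN_SECONDS = 10   # avoid rapid-fire shots from one excited moment

def run_heartography(read_heart_rate, snap_photo):
    """Fire the camera when heart rate spikes above a rolling baseline.

    read_heart_rate() -> beats per minute, snap_photo() -> None; both
    are hypothetical hooks, not any published Nikon API.
    """
    samples = []
    last_shot = 0.0
    while True:
        bpm = read_heart_rate()
        if len(samples) >= BASELINE_WINDOW:
            baseline = sum(samples) / len(samples)
            now = time.time()
            if bpm > baseline * SPIKE_FACTOR and now - last_shot > COOLDOWN_SECONDS:
                snap_photo()
                last_shot = now
        samples.append(bpm)
        samples[:] = samples[-BASELINE_WINDOW:]  # keep only the recent window
        time.sleep(1.0)                          # sample about once per second
```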

There’s more information here.

And here’s a demo of what the dog captured.

nikon heartography

Time-lapse tech mines the Web

time lapse tech

For years, millions of us have taken photos all around the world and posted those pics online. Now researchers at the University of Washington are combining those shots into time-lapse videos that show huge changes over time.

The “approach for synthesizing time-lapse videos of popular landmarks from large community photo collections” is completely automated and “leverages the vast quantity of photos available online,” they write. They clustered 86 million photos into landmarks and popular viewpoints, sorted the photos by date, and warped each photo onto a common viewpoint.
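
That last stage is easy to picture in code. Here’s a rough Python outline using OpenCV, assuming a reference photo for each viewpoint; homography-based warping from feature matches is my assumption for illustration, since the paper uses its own, more sophisticated stabilization:

```python
import cv2
import numpy as np

def warp_to_reference(photo, reference):
    """Warp one community photo onto the reference viewpoint.

    Matches ORB features between the two images, fits a homography with
    RANSAC, and resamples the photo into the reference frame.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(photo, None)
    kp2, des2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(photo, H, (w, h))

def make_timelapse(photos_by_date, reference):
    """Sort (date, image) pairs and warp each onto the common viewpoint."""
    return [warp_to_reference(photo, reference)
            for _, photo in sorted(photos_by_date, key=lambda p: p[0])]
```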

“Our resulting time-lapses show diverse changes in the world’s most popular sites, like glaciers shrinking, skyscrapers being constructed, and waterfalls changing course.”

The full story is here.

Here’s a video demo.

Smartphone scans your iris

iris scan cu

In Japan, smartphone users can unlock their device by looking at it — and make a mobile payment as well.

The latest phone from Fujitsu scans irises via an infrared camera and infrared LED. It’s the first (shipping) phone to implement eye-based biometrics.

Everyone’s iris has a unique pattern, and reading the iris is reportedly more secure than the fingerprint scans used, for example, in the latest iPhones.
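
Typical iris-recognition systems (Daugman’s classic approach; Fujitsu hasn’t said exactly what it uses) encode the iris texture as a binary “iris code” and accept a match when the Hamming distance between codes is small. A quick sketch of that matching step; the threshold value is an assumption:

```python
import numpy as np

MATCH_THRESHOLD = 0.32  # Daugman-style cutoff; the exact value is an assumption

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary iris codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

def unlock(enrolled_code: np.ndarray, live_code: np.ndarray) -> bool:
    """Accept if the live scan is close enough to the enrolled template."""
    return hamming_distance(enrolled_code, live_code) < MATCH_THRESHOLD

# Example with random 2048-bit codes: a genuine match is a noisy copy,
# while an impostor's code disagrees on about half the bits.
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048)
noise = rng.random(2048) < 0.05                # 5% bit flips in the live scan
live = np.where(noise, 1 - enrolled, enrolled)
print(unlock(enrolled, live))                  # True  (genuine)
print(unlock(enrolled, rng.integers(0, 2, 2048)))  # False (impostor)
```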

The phone also has a 20-megapixel camera and a 5-inch display.

There’s a fun story-telling video demo here.

iris scan purchase

 

Wolfram identifies images

imageidentify vacuum

One technology we need if we’re ever to organize our thousands of images more easily is smarter software that can tell just what’s what in our shots. Now famed innovator Stephen Wolfram (creator of Mathematica) is taking on the problem, developing a method for more accurately identifying the subjects in a photo.

Introducing the web service, he writes: “‘What is this a picture of?’ Humans can usually answer such questions instantly, but in the past it’s always seemed out of reach for computers to do this. For nearly 40 years I’ve been sure computers would eventually get there—but I’ve wondered when. I’ve built systems that give computers all sorts of intelligence, much of it far beyond the human level. Now … there’s finally a function called ImageIdentify built into the Wolfram Language that lets you ask, ‘What is this a picture of?’—and get an answer.”

The Image Identification Project lets you take any picture and see what ImageIdentify thinks it is. It’s only a work in progress now, of course. When I tried a photo of the robot vacuum Samsung introduced today, it thought it was a stapler. But Wolfram says that while “it won’t always get it right, most of the time I think it does remarkably well. And to me what’s particularly fascinating is that when it does get something wrong, the mistakes it makes mostly seem remarkably human. It’s a nice practical example of artificial intelligence.”

Wolfram adds that “if one had lots of photographs, one could immediately write a Wolfram Language program that, for example, gave statistics on the different kinds of animals, or planes, or devices, or whatever, that appear in the photographs.”
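
ImageIdentify itself is a Wolfram Language function, but the kind of batch tally Wolfram describes is easy to picture in Python; the classify() function below is a hypothetical stand-in for ImageIdentify, not a real API:

```python
from collections import Counter
from pathlib import Path

def tally_subjects(photo_dir: str, classify) -> Counter:
    """Count how often each label appears across a folder of photos.

    classify(path) -> str is a hypothetical stand-in for Wolfram's
    ImageIdentify, returning its best guess at what the picture shows.
    """
    counts = Counter()
    for path in Path(photo_dir).glob("*.jpg"):
        counts[classify(path)] += 1
    return counts

# e.g. tally_subjects("vacation_photos", classify).most_common(3)
# might yield something like [("dog", 212), ("beach", 97), ("airplane", 41)]
```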

You can try it here.

Samsung seeing-eye vacuum

samsung robot vacuum

samsung robot vacuum cam

Samsung is one of the leading image sensor manufacturers, and now it’s added imaging to a common household device: the vacuum cleaner.

Of course, this one’s a robot.

The Powerbot has an onboard camera with a fisheye lens, and ten individual “smart sensors,” the company says, “that help it determine the optimal cleaning path by creating a complete map of your home, including walls, furniture and stairways. So you don’t need to worry about furniture or objects on the floor. Simply turn it on, and let it do the vacuuming for you.”
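
Samsung doesn’t detail its path planner, but once a robot has a map, a common coverage strategy is a boustrophedon (back-and-forth) sweep over the free cells. A toy Python sketch, with my own grid representation standing in for the Powerbot’s map:

```python
def coverage_path(grid):
    """Return a back-and-forth (boustrophedon) visit order over free cells.

    grid is a 2D list where 0 = free floor and 1 = obstacle. A real planner
    would also handle room connectivity and docking; this sketch does not.
    """
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else range(len(row) - 1, -1, -1)
        path.extend((r, c) for c in cols if row[c] == 0)
    return path

room = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # a couch in the middle of the room
    [0, 0, 0, 0],
]
print(coverage_path(room))
```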

Samsung claims it offers “60 times more suction than previous models” thanks to its cyclonic vacuum that “generates strong centrifugal forces.”

It’s $999.

Here’s more information.

Gizmodo has a review here.

Coloring 3D-printed objects: computational imaging added to hydrographic printing

multi-immersion printing

Frankly, I’d never heard of hydrographic printing. Now university researchers have made it better with computational imaging.

The problem: the water-transfer printing process applies inkjet-printed markings to a solid object, but the results are wildly distorted by the object’s curved, non-flat shape.

The solution: measure that distortion, then alter the printed image so that, when applied, it covers the solid object perfectly.
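
In code terms, the correction is an inverse warp: given a measured map of where each film pixel lands on the object, resample the intended design through that map so the distortion cancels out. A minimal sketch with OpenCV’s remap; producing the map by simulating the dipping process is the researchers’ actual contribution, and here it’s simply assumed given:

```python
import cv2
import numpy as np

def compensate(design: np.ndarray, map_x: np.ndarray, map_y: np.ndarray) -> np.ndarray:
    """Pre-distort the design so the film transfer lands it correctly.

    map_x/map_y are float32 arrays, one entry per film pixel, giving which
    source pixel of the intended design should be printed there. Printing
    the returned image on the film undoes the measured distortion.
    """
    return cv2.remap(design, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```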

Wired has an overview article here.

Columbia University provides a PDF on the research here.

Better yet: just watch the video.

multi-immersion printing

Flying camera is your personal paparazzi

Lily 1

It’s billed as the first “throw-and-shoot camera,” and it’s a drone that flies itself.

Thanks to computer vision algorithms and GPS, the Lily Camera “intelligently tracks its owner, following every move,” the company says. “With autonomous flight, Lily expands creative shooting opportunities well beyond handheld and action cameras with a single point-of-view.”

“Initiated with a simple throw in the air, the Lily Camera automatically follows its owner, capturing stunning footage and high definition images while hovering in place or flying at speeds up to 25 mph,” its developers claim. “The Lily Camera’s core technology is driven by proprietary computer vision algorithms. Lily constantly communicates with the owner’s tracking device which relays position, distance, and speed back to the built-in camera. Lily recognizes the owner and improves tracking accuracy over time. This technology enables Lily to fly completely autonomously, always keeping its owner in the shot and delivering smooth footage. Lily is also programmable and can receive directions via the tracking device or the mobile app. The camera is able to follow, loop, zoom, fly out, hover, and more.”
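
Stripped of the vision system, the follow behavior is a control loop: compare the tracker’s position with the drone’s and steer toward a point that keeps the owner in frame. A simplified Python sketch; the gains and geometry are my own illustration, not Lily’s proprietary controller:

```python
FOLLOW_DISTANCE = 8.0   # meters to trail behind the owner
GAIN = 0.5              # proportional gain toward the target point

def follow_step(drone_pos, owner_pos, owner_velocity):
    """One tick of a proportional follow controller.

    Positions are (x, y) in meters; returns a velocity command for the
    drone. A real controller would fuse vision with the tracker data and
    control altitude too; this is only the GPS-following skeleton.
    """
    dx = drone_pos[0] - owner_pos[0]
    dy = drone_pos[1] - owner_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1e-6   # avoid divide-by-zero
    # Target point: FOLLOW_DISTANCE behind the owner along the current bearing.
    target = (owner_pos[0] + dx / dist * FOLLOW_DISTANCE,
              owner_pos[1] + dy / dist * FOLLOW_DISTANCE)
    # Move toward the target while matching the owner's speed so we don't lag.
    return (GAIN * (target[0] - drone_pos[0]) + owner_velocity[0],
            GAIN * (target[1] - drone_pos[1]) + owner_velocity[1])
```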

The drone is waterproof, weighs 2.8 pounds, and captures 12-megapixel photos and 1080p video. It has a 20-minute flight time.

Sound too good to be true? Well, it’s not shipping for almost a year… The company is taking pre-orders at $500; the camera ships in February 2016, when the price rises to $1,000.

There’s a review here.

Here’s a video demonstration.

lily Evolution

Flat lenses focus sharply

flat lenses

To bend light effectively, most lenses are curved, which means they can take up a lot of space in small consumer electronics. But now a team at Caltech is developing thin, flat lenses that they claim can focus light as sharply as curved ones.

“Over the last few years, scientists have started crafting tiny flat lenses that are ideal for such close quarters. To date, however, thin microlenses have failed to transmit and focus light as efficiently as their bigger, curved counterparts,” the school reports. “Caltech engineers have created flat microlenses with performance on a par with conventional, curved lenses. These lenses can be manufactured using industry-standard techniques for making computer chips, setting the stage for their incorporation into electronics such as cameras and microscopes, as well as in novel devices.”

The report adds that the lens is made of silicon and is just a millionth of a meter thick, “or about a hundredth of the diameter of a human hair, and it is studded with silicon ‘posts’ of varying sizes.”

Here’s more information.

Microsoft will guess your age

how old do I look 37

It’s no carnival trick: Upload a photo, and Microsoft’s facial recognition technology will guess your age.

Engineers in the company’s Machine Learning labs, working with imaging tools and APIs, came up with a webpage that “predicts the age and gender of any faces recognized in that picture.”

The engineers say they’d guessed that “most folks would not want to upload their own pictures but would prefer to select from pre-canned images such as what they found online. But what we found out was that over half the pictures analyzed were of people who had uploaded their own images. We used this insight to improve the user experience and did some additional testing around image uploads from mobile devices.”
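
The demo was built on Microsoft’s face-detection API. Here’s a sketch of the kind of request involved; the endpoint, key, and attribute names are placeholders you’d need to check against Microsoft’s current Face API documentation:

```python
import requests

# Placeholders: substitute your own resource endpoint and subscription key.
ENDPOINT = "https://YOUR_RESOURCE.cognitiveservices.azure.com/face/v1.0/detect"
KEY = "YOUR_SUBSCRIPTION_KEY"

def guess_age(image_path: str):
    """Send a photo to the Face API and return (age, gender) per detected face."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            params={"returnFaceAttributes": "age,gender"},
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    resp.raise_for_status()
    return [(face["faceAttributes"]["age"], face["faceAttributes"]["gender"])
            for face in resp.json()]
```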

Try it yourself here.