Nikon snaps shutter based on dog’s heartbeat


“See what happens when emotions are turned into photographs,” Nikon says. “With Heartography, anyone with a heartbeat can be a photographer.”

“Anyone” in this case being “a dog.”

At least, that’s the first demo of a system that reads heart activity to detect excitement or other intense emotions and uses that information to snap a photo.
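
Nikon hasn’t published how the trigger actually works, but the basic idea is easy to sketch. Here’s a minimal Python illustration, assuming hypothetical read_bpm and snap_photo callbacks standing in for the heart-rate strap and the camera release (these names are placeholders, not Nikon’s code):

```python
import time
from collections import deque
from typing import Callable

def heartography_loop(read_bpm: Callable[[], float],
                      snap_photo: Callable[[], None],
                      spike_ratio: float = 1.25,
                      window: int = 30,
                      poll_s: float = 1.0) -> None:
    """Fire the shutter whenever heart rate jumps well above its recent average."""
    history: deque = deque(maxlen=window)      # rolling baseline of recent readings
    while True:
        bpm = read_bpm()                       # hypothetical heart-rate strap reading, in BPM
        if len(history) == window and bpm > spike_ratio * (sum(history) / window):
            snap_photo()                       # excitement detected: take the shot
            history.clear()                    # avoid re-triggering on the same spike
        history.append(bpm)
        time.sleep(poll_s)
```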

There’s more information here.

And here’s a demo of what the dog captured.

 

 


Time-lapse tech mines the Web


For years, millions of us have taken photos all around the world, and posted those pics online. Now researchers at the University of Washington are combining those shots into time-lapse videos that show huge changes over time.

The “approach for synthesizing time-lapse videos of popular landmarks from large community photo collections” is completely automated and “leverages the vast quantity of photos available online,” they write. They clustered 86 million photos into landmarks and popular viewpoints, sorted the photos by date, and warped each photo onto a common viewpoint.

“Our resulting time-lapses show diverse changes in the world’s most popular sites, like glaciers shrinking, skyscrapers being constructed, and waterfalls changing course.”
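
The pipeline the researchers describe (cluster, sort by date, warp to a common viewpoint, render) is easy to outline. Here’s a rough Python sketch with the hard parts hidden behind placeholder callables; cluster_by_viewpoint, warp_to_reference, and write_video are my assumptions, not the authors’ code:

```python
from datetime import datetime
from typing import Any, Callable, List, Sequence

class Photo:
    """Minimal stand-in for one community photo with a timestamp."""
    def __init__(self, image: Any, taken: datetime):
        self.image, self.taken = image, taken

def synthesize_timelapses(photos: Sequence[Photo],
                          cluster_by_viewpoint: Callable[[Sequence[Photo]], List[List[Photo]]],
                          warp_to_reference: Callable[[Photo, Photo], Any],
                          write_video: Callable[[List[Any], str], None]) -> None:
    """Outline of the paper's pipeline: cluster -> sort by date -> warp -> render."""
    for i, cluster in enumerate(cluster_by_viewpoint(photos)):  # one cluster per landmark viewpoint
        cluster.sort(key=lambda p: p.taken)                     # chronological order
        reference = cluster[0]                                  # common viewpoint to warp onto
        frames = [warp_to_reference(p, reference) for p in cluster]
        write_video(frames, f"timelapse_{i}.mp4")               # one time-lapse per viewpoint
```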

The full story is here.

Here’s a video demo.

 

Smartphone scans your iris


In Japan, smartphone users can unlock their device by looking at it — and make a mobile payment as well.

The latest phone from Fujitsu scans irises via an infrared camera and infrared LED. It’s the first (shipping) phone to implement eye-based biometrics.

Everyone’s iris has a unique pattern, and reading the iris is reportedly more secure than the fingerprint scans used, for example, in the latest iPhones.

The phone also has a 20-megapixel camera and a 5-inch display.

There’s a fun story-telling video demo here.


On the PMA Podcast: Big prints are affordable at Jondo


Jenny Coulton literally grew up in the photo printing business: she worked at Jondo after class during high school. She left to go to college — and then returned as the company’s marketing director.

On this episode of the PMA podcast, Coulton talks about the primary marketing challenge facing everyone in the photo printing business: raising awareness of the high-quality, low-price products the industry now offers.

You can download the audio episode or subscribe to the podcast here.

Or you can tune in now with the player below.

Panasonic’s “4K Photo” takes 30 stills per second


With its “4K Photo” functions, the latest from Panasonic shoots 30 frames per second for as long as the shutter button is pressed in 4K Burst mode (and 4K Pre-burst records 60 images from just before and after the shutter release).

The Lumix DMC-G7 mirrorless camera captures 4K video as well, of course, and for stills at the full 16-megapixel resolution it offers a continuous shooting speed of 6 fps.

Its Low Light AF “makes it possible to set focus on the subject more precisely even without an AF assist lamp in extremely low-lit situations all the way down to -4EV,” the company says, “which is as dark as moonlight.”

The camera also has an articulated 3-inch display and WiFi connectivity. It’s $800.

Full features, lower price for Fujifilm X-T10


With its latest mirrorless camera, Fujifilm is offering the features from one of its more popular models in a more affordable package.

The new $800 X-T10 has many specs similar to those of the $1200 X-T1 flagship: a 16-megapixel APS-C sensor, a 3-inch tilting display, and a continuous shooting speed of 8 frames per second. It adds subject-tracking autofocus and an eye-detection AF mode that automatically detects and focuses on human eyes.

The camera has the X-series retro look, with “three precision-milled aluminum dials that give the X-T10 a premium feel and allow users to intuitively adjust the combination of aperture, shutter speed and shooting functions while concentrating on picture taking,” the company says.

 

Wolfram identifies images


One technology necessary for us to more easily organize our thousands of images is smarter software that can tell just what’s what in our shots. Now famed innovator Stephen Wolfram (creator of Mathematica) is looking into imaging — developing a method for more accurate identification of the subjects in a photo.

Introducing the web service, he writes: “‘What is this a picture of?’ Humans can usually answer such questions instantly, but in the past it’s always seemed out of reach for computers to do this. For nearly 40 years I’ve been sure computers would eventually get there—but I’ve wondered when. I’ve built systems that give computers all sorts of intelligence, much of it far beyond the human level. Now… there’s finally a function called ImageIdentify built into the Wolfram Language that lets you ask, ‘What is this a picture of?’—and get an answer.”

The Image Identification Project lets you take any picture and see what ImageIdentify thinks it is. It’s only a work in progress now, of course. When I tried a photo of the robot vacuum Samsung introduced today, it thought it was a stapler. But Wolfram says that while “it won’t always get it right, most of the time I think it does remarkably well. And to me what’s particularly fascinating is that when it does get something wrong, the mistakes it makes mostly seem remarkably human. It’s a nice practical example of artificial intelligence.”

Wolfram adds that “if one had lots of photographs, one could immediately write a Wolfram Language program that, for example, gave statistics on the different kinds of animals, or planes, or devices, or whatever, that appear in the photographs.”
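
Wolfram’s example would be a few lines of Wolfram Language; the tallying idea translates to any language. Here’s a rough Python analogue, assuming a hypothetical identify() function that plays the role of ImageIdentify (it is not Wolfram’s API, just an illustration of the counting step):

```python
from collections import Counter
from pathlib import Path
from typing import Callable

def tally_subjects(photo_dir: str, identify: Callable[[Path], str]) -> Counter:
    """Count how often each identified subject ('dog', 'plane', ...) appears in a folder of photos."""
    counts: Counter = Counter()
    for path in Path(photo_dir).glob("*.jpg"):
        counts[identify(path)] += 1            # identify() stands in for ImageIdentify
    return counts

# e.g. tally_subjects("vacation_photos", my_classifier).most_common(10)
```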

You can try it here.

Samsung seeing-eye vacuum


Samsung is one of the leading image sensor manufacturers — and now it’s added imaging to a common household device: the vacuum cleaner.
Of course, this one’s a robot.

The Powerbot has an onboard camera with a fisheye lens, and ten individual “smart sensors,” the company says, “that help it determine the optimal cleaning path by creating a complete map of your home, including walls, furniture and stairways. So you don’t need to worry about furniture or objects on the floor. Simply turn it on, and let it do the vacuuming for you.”

Samsung claims it offers “60 times more suction than previous models” thanks to its cyclonic vacuum that “generates strong centrifugal forces.”
It’s $999.
Here’s more information.
Gizmodo has a review here.

 

Mobile imaging merger: iON and Contour  


Two makers of wearable and mountable action cams are teaming up: iON Cameras and Contour announced the two companies will merge.

“The combined organization will offer broadest range of cutting-edge POV and wearable cameras,” iON says, “with combined global distribution in over 10,000 storefronts in 40 countries.”

Contour brings “many critical patents,” and iON adds “expertise in engineering and manufacturing… and significant North American retail distribution.”

iON CEO Giovanni Tomaselli will serve as the new company’s CEO. Contour CEO James Harrison will assume the role of president.

Both brands will continue, as will most of the complementary product lines of security cams, dash cams, wearable models, and action cameras.

Coloring 3D-printed objects: Computational imaging added to hydrographic printing


Frankly, I’d never heard of hydrographic printing. Now university researchers have made it better with computational imaging.

The problem: the water transfer printing process applies inkjet-printed markings to a solid object — but the results are wildly distorted by the object’s non-flat shape.

The solution: measure that distortion, then alter the printed image so that, when it’s applied, it covers the solid object exactly as intended.
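
As I read it, the correction amounts to an inverse warp: each film pixel gets printed with the color of the texture point it will eventually cover. Here’s a minimal sketch using OpenCV, assuming the measured pixel mapping (map_x, map_y) is already available; producing that mapping is the researchers’ actual contribution, and these names are placeholders rather than their code:

```python
import cv2
import numpy as np

def predistort_film(target_texture: np.ndarray,
                    map_x: np.ndarray,
                    map_y: np.ndarray) -> np.ndarray:
    """Build the image to inkjet-print on the film so that, after the water-transfer
    distortion described by (map_x, map_y), the object ends up covered by target_texture.

    map_x[v, u] and map_y[v, u] give the texture coordinates on the object's surface
    that film pixel (u, v) lands on (assumed to be measured in a calibration step).
    """
    # Fill each film pixel with the texture color it will eventually cover.
    return cv2.remap(target_texture,
                     map_x.astype(np.float32),
                     map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)
```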

Wired has an overview article here.

Columbia University provides a PDF on the research here.

Best yet: just watch the video.
