Why store photos in the Cloud?

Researcher Hans Hartman at Suite 48 Analytics is offering a new white paper based on a study for which 1,212 North Americans between the ages of 25 and 44 were surveyed.

Sixteen percent of those surveyed report they store all their photos in the cloud. Half the respondents store at least some photos in the cloud… which of course means that half store none there.

“For many smartphone photographers, the cloud is becoming the most important photo storage location,” the report says. “General cloud storage or syncing services are increasingly adding features and interfaces targeting photo enthusiasts because their freemium business model – free starter packages plus tiered pricing based on storage volume – benefits from (typically large) photo and video file sizes. Since photos sell storage subscriptions, many cloud services have begun adding features like timeline, photo discovery based on metadata, visual browsing, and unified photo viewing independent of file or folder structure.”

Hartman adds that many cloud photo services now “leverage various photo metadata through user-friendly interfaces, e.g. by letting users rediscover photos that correspond with today’s date in previous years. Going forward, we expect them to also start leveraging image recognition technologies, which have made tremendous progress in the last two years as deep-learning technologies are developed and marshaled to solve the image recognition needs of deep-pocket advertising and e-commerce vendors.”
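The “today’s date in previous years” feature Hartman mentions is easy to picture in code. Here is a minimal sketch in Python, using hypothetical photo records rather than any real service’s API:

```python
from datetime import date

# Hypothetical photo records: (filename, capture date) pairs, such as a
# cloud service might extract from EXIF metadata.
photos = [
    ("beach.jpg", date(2012, 11, 30)),
    ("party.jpg", date(2013, 11, 30)),
    ("desk.jpg", date(2014, 6, 2)),
]

def on_this_day(photos, today=None):
    """Return photos taken on today's month/day in any previous year."""
    today = today or date.today()
    return [
        (name, taken) for name, taken in photos
        if (taken.month, taken.day) == (today.month, today.day)
        and taken.year < today.year
    ]

print(on_this_day(photos, today=date(2014, 11, 30)))
# [('beach.jpg', datetime.date(2012, 11, 30)), ('party.jpg', datetime.date(2013, 11, 30))]
```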

In addition, Hartman believes that the number one factor that could drive further adoption of photo cloud storage services is for these services to more transparently address mobile photographers’ most pressing photo storage need: secure backup. “Our respondents were clear: backup is the most important reason why they use these services. Many are confused as to whether their photo cloud services offer secure backup, as well as whether they would provide full recovery of their original photo collections in the event their devices break down or are stolen. Some services need to better deliver the desired backup and restore features, others need to better explain how their features work.”

The Photos and the Cloud white paper is based on a study Suite 48 Analytics conducted for PhotoGurus and addresses the following questions:

• Do mobile photographers store any of their photos in the cloud?
• Why do they store photos in the cloud?
• If they do store photos in the cloud, why not all their photos?
• Which cloud service do they use most?
• Why do some not store any photos in the cloud?
• Are they concerned about not backing up all their photos in the cloud?

The free white paper can be downloaded at http://www.suite48a.com/cloud.

Photo app “captures Space”

“Why squeeze a complex world into a tiny square frame?” ask the developers at Fyusion. Instead, they say, their app “lets you capture dynamic panoramas, immersive selfies, and full 360 views of the things that matter to you,” in order to “share interactive representations of the world.”

The Fyuse app is “the first consumer Space capture experience,” they add. “With Fyuse we are moving away from the traditional ‘point and shoot’ model to ‘tap and wave.’”

The new version 2.0 of Fyuse yields “sharper, higher-resolution” images which can be viewed “seamlessly from all angles (including full 360!),” the company says. It also supports 3D tags, so a recognized object displays its label from whatever angle it’s viewed. Also new are “Selfie Panoramas” that “unroll …into a wide-angle, dynamic panorama.”

It’s available for iOS or Android.

There’s more information here.

Moving cameras talk to each other

To track and identify pedestrians, a proposed new surveillance system will connect its cameras.

Technology developed at the University of Washington “distinguishes among people by giving each person a unique color and number, then tracks them as they walk,” the school reports. The algorithm “trains the networked cameras to learn one another’s differences. The cameras first identify a person in a video frame, then follow that same person across multiple camera views.”

The problem with tracking a human across cameras with non-overlapping fields of view is that a person’s appearance can vary dramatically in each video because of different perspectives, angles and color hues produced by different cameras, the report notes. “The researchers overcame this by building a link between the cameras. Cameras first record for a couple of minutes to gather training data, systematically calculating the differences in color, texture and angle between a pair of cameras for a number of people who walk into the frames in a fully unsupervised manner without human intervention. After this calibration period, an algorithm automatically applies those differences between cameras and can pick out the same people across multiple frames, effectively tracking them without needing to see their faces.”
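The article doesn’t spell out the researchers’ algorithm, but the general idea (learn the systematic appearance differences between a camera pair during a calibration window, then match descriptors across cameras) can be sketched. The Python toy below is an assumption-laden illustration, not the UW method: it fits an affine color mapping from matched calibration observations and then re-identifies people by nearest neighbor:

```python
import numpy as np

def fit_color_mapping(desc_a, desc_b):
    """Least-squares affine map taking camera-A descriptors to camera-B space.
    desc_a, desc_b: (n_people, d) arrays of matched appearance descriptors
    (e.g., mean colors) gathered during the calibration period."""
    A = np.hstack([desc_a, np.ones((len(desc_a), 1))])  # add affine term
    M, *_ = np.linalg.lstsq(A, desc_b, rcond=None)
    return M

def reidentify(query_a, gallery_b, M):
    """Map a camera-A descriptor into camera-B space; return best match index."""
    q = np.append(query_a, 1.0) @ M
    return int(np.argmin(np.linalg.norm(gallery_b - q, axis=1)))

# Toy calibration: camera B sees everything 20% brighter with a blue cast.
rng = np.random.default_rng(0)
people_a = rng.random((8, 3))                      # 8 people, RGB mean color
people_b = people_a * 1.2 + np.array([0, 0, 0.1])  # same people seen by camera B
M = fit_color_mapping(people_a, people_b)
print(reidentify(people_a[3], people_b, M))        # -> 3: same person found
```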

Here is the full story.

There’s a demonstration video here.

Cam-copter on a leash

A new pocket-sized drone is tethered with a leash — which sounds like a step backward when you want a free-flying copter to give you new perspectives from on high…

However, developer CyPhy Works says its PocketFly can help first responders navigate inside buildings or in dangerous situations, Fast Company reports.

The PocketFly weighs less than three ounces and is attached to a thin microfilament that powers it for two hours and provides uninterrupted communications. It captures “continuous, unbroken, 720p, 30fps, HD video. Not just high definition resolution, it’s actually high-quality,” the company claims.

One of the co-founders worked on the original Roomba robot vacuum.

There’s more information here.

Computational imaging boosts mobile camera quality

Can computational imaging deliver improved picture quality without any hardware changes? That’s the claim imaging developer Almalence makes for its “SuperSensor.”

Improving mobile camera quality through hardware is blocked by size restrictions and cost, the company says. Its “computational component,” on the other hand, will “improve mobile camera features without adding a micron to its size and at just a small fraction of the cost of typical hardware improvement.” Also, the technology can be retrofitted to existing mobiles “via a system upgrade.”

The claims are aggressive: the technology captures frames comparable to those from a larger, higher-resolution sensor, as well as “extended dynamic range (2-3Ev) that makes traditional HDR unneeded in most of cases; radical noise reduction in low light (PSNR increase of 5-10 dB); recovery of details in shadows; backlight scenes recovery; and highlights clipping reduction.”
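Almalence doesn’t disclose how SuperSensor works, but the scale of the PSNR claim can be sanity-checked against the textbook effect of multi-frame stacking: averaging N frames with independent noise cuts the noise power by a factor of N, a gain of 10·log10(N) dB, so 5-10 dB corresponds to stacking roughly 3 to 10 frames. A minimal Python check on synthetic data, purely illustrative:

```python
import numpy as np

def psnr(clean, noisy, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((clean - noisy) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.random((256, 256))  # stand-in for the true scene
frames = [clean + rng.normal(0, 0.1, clean.shape) for _ in range(10)]

print(f"single frame: {psnr(clean, frames[0]):.1f} dB")               # ~20 dB
print(f"10-frame avg: {psnr(clean, np.mean(frames, axis=0)):.1f} dB")  # ~30 dB
# The ~10 dB improvement matches 10*log10(10).
```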

Almalence says the first devices to use its SuperSensor may be out next year, but a demo “in a form of Android app for Nexus 5 is publicly available on Google Play.”

There’s more information here.

Capture at 100 billion frames per second “may enable new scientific discoveries”

A new camera developed by biomedical engineers at Washington University in St. Louis can capture up to 100 billion frames per second.

That’s four orders of magnitude (10,000 times) faster than current imaging techniques, reports Phys.org News, “which are limited by on-chip storage and electronic readout speed to operations of about 10 million frames per second.”

It’s not a regular camera, of course: the receive-only 2-D camera uses “compressed ultrafast photography” to “see light pulses on the fly… for the first time” via computational imaging.

The research appears in the Dec. 4, 2014, issue of Nature.

50,000 body-worn cameras — White House proposes funding

The use of wearable cameras has been slowly accelerating in recent years, especially among police forces. You can now expect these tools to take off rapidly as the US government reacts to recent increased violence.

“Recent events in Ferguson, Missouri and around the country have highlighted the importance of strong, collaborative relationships between local police and the communities they protect,” the White House press office posted.

Amidst the proposals and task forces, the post says the President “proposes a three-year $263 million investment package that will increase use of body-worn cameras, expand training for law enforcement agencies (LEAs), add more resources for police department reform, and multiply the number of cities where DOJ facilitates community and local LEA engagement. As part of this initiative, a new Body Worn Camera Partnership Program would provide a 50 percent match to States/localities who purchase body worn cameras and requisite storage. Overall, the proposed $75 million investment over three years could help purchase 50,000 body-worn cameras.”
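The per-camera arithmetic implied by those figures is worth a quick check; the back-of-the-envelope below is our inference, not a figure from the White House post:

```python
# $75M federal share across 50,000 cameras, at a 50 percent match,
# implies roughly $3,000 per camera-plus-storage package.
federal_share = 75_000_000                    # proposed over three years
cameras = 50_000
per_camera_federal = federal_share / cameras  # $1,500 federal share
per_camera_total = per_camera_federal / 0.5   # local match doubles it
print(per_camera_federal, per_camera_total)   # 1500.0 3000.0
```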

(Pictured here: Taser’s Axon system first marketed in 2012.)

Technology News Digest #2

There’s too much tech news to keep up with yourself — so let us do it for you!

The new Top Ten Today-Tech digest provides a brief look at only the most interesting or important items of the week.

This week’s headlines include:

• Robots on Patrol
• Intel, Samsung back faster wireless connectivity
• No hands required: type your thoughts
• Police test networked guns
• Motorola will find your phone
• Pilot-free helicopter fights wildfires
• Wearable dialysis device for kidney patients

You can read the full free briefing here.

PhotoTime automatically tags and groups photos

The developers at Orbeus claim their free PhotoTime app will give you a photographic memory — sort of.

“The human brain is an amazing thing,” they say. “But computers have definitely got us beat in memory and indexing, two crucial skills for managing your photos in the age of digital photography, in which an estimated 880 billion photos are taken every year. A few hundred (or thousand) of those will probably be yours. That’s a lot for one human brain to process.”

Orbeus says the app detects race, emotion, age, and gender, and automatically groups the same faces to label friends and family. Its scene recognition determines the context and setting of images and automatically generates searchable keyword tags.

PhotoTime then automatically organizes, sorts and tags all your photos. It “integrates with your iPhone Camera Roll, social networks and cloud services to automatically organize, sort and tag your photos. Instead of struggling to remember (or guess-and-checking) where you’ve stored a specific photo, then scrolling through every image in that album until you find it… you can simply type the name of the person, place, location or concept you’re looking for, and voilà!”
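Orbeus hasn’t published PhotoTime’s internals, but the search experience it describes (type a name or concept, get the matching photos) maps naturally onto an inverted index from auto-generated tags to photo IDs. A minimal Python sketch, with hypothetical photo IDs and tags:

```python
from collections import defaultdict

index = defaultdict(set)  # tag -> set of photo IDs

def add_photo(photo_id, tags):
    """Register auto-generated tags (faces, scenes, places) for a photo."""
    for tag in tags:
        index[tag.lower()].add(photo_id)

def search(*terms):
    """Return the photos matching all search terms."""
    matches = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*matches) if matches else set()

add_photo("IMG_0001", ["Alice", "beach", "sunset"])
add_photo("IMG_0002", ["Alice", "birthday", "indoor"])
print(search("alice", "beach"))  # {'IMG_0001'}
```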

There’s a demonstration video here.

And there’s more information here.

Sony stabilizes full-frame mirrorless camera

Sony updated its A7 mirrorless camera, saying the new model is the first full-frame interchangeable-lens camera (ILC) with a 5-axis sensor-shift image stabilization system.

The 5-axis system compensates for yaw, pitch, roll, and vertical and horizontal motion, and Sony says it will yield 4.5 stops of stabilization correction. As the Imaging Resource notes, Olympus debuted the first 5-axis system, and with Sony’s investment in Olympus, it could be using similar technology.
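Each stop of stabilization halves the required shutter speed, so a 4.5-stop rating corresponds to handholding shutter speeds about 2^4.5 ≈ 23 times longer. A quick illustration (the 1/focal-length handholding rule of thumb is our assumption, not Sony’s spec):

```python
# Each stop doubles the usable shutter duration: 2**4.5 ~= 22.6x.
stops = 4.5
factor = 2 ** stops
print(f"{factor:.1f}x longer shutter")   # 22.6x longer shutter
# With the 1/focal-length rule, a 50mm shot moves from 1/50 s to ~0.45 s.
print(f"1/50 s -> {factor / 50:.2f} s")  # 1/50 s -> 0.45 s
```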

The A7 II has the same 24-megapixel sensor as its predecessor, but with a 30 percent faster focus speed and 1.5x better tracking. It is currently official only in Japan.

Update: On 11/26, Sony widely announced the camera, and will ship it in the US in mid-December for $1,700.