Samsung and Oculus collaborate on mobile VR headset

Is this how you want to look at your photos and other media? Samsung says its new Gear VR “creates an immersive mobile virtual reality experience that the industry has never seen before… enabling users to fully immerse themselves in a cinematic virtual reality environment.”

Made in partnership with VR innovator Oculus, the Gear VR works with the new Galaxy Note 4 phone’s 5.7-inch display, and also uses the phone’s GPU and CPU. The headset is wireless, “so users can be fully engaged in virtual worlds without being tethered to a computer.” Its sensors include an accelerometer, gyrometer, magnetometer, and proximity sensor.

You can “sit in the best seat of a theater,” Samsung says, “be on the stage of a performance with full 360-degree 3D video, or enjoy gaming like it’s never been seen before, inside stunning worlds where imagination becomes reality.”

Oculus, now owned by Facebook, developed a virtual reality headset “that allows players to step inside the game. It provides an immersive, stereoscopic 3D experience with an ultra-wide field of view and super low latency head tracking.” Oculus says it has collaborated with Samsung for 12 months.

“It’s still early days for mobile VR,” Oculus adds. “Some of the key challenges include a lack of 6DOF positional tracking, limited CPU/GPU bandwidth with today’s hardware, thermal management, power consumption, and overall ergonomics, but we’re making progress quickly and the Innovator Edition is only just the beginning. Still, the experience on the device today is pretty astounding. The magic of a completely portable and wireless VR headset is easy to underestimate until you have experienced it. We don’t have the raw horsepower of a high end gaming PC (yet), but there are valuable compensations that make it a very interesting trade off, and many developers will thrive on the platform, especially as it improves at the rapid pace of the mobile ecosystem.”

The Gear VR will be available this year in a beta “Innovator” version.

Toshiba develops faster 20-megapixel sensor for phones

“The markets for smartphones and tablets increasingly require smaller cameras but with much higher resolution,” Toshiba says. “With a 1.12-micrometer pixel size, Toshiba’s new sensor achieves 20-megapixel images …in a 6mm z-height camera modules for smartphones and tablets.”

The T4KA7 is a 1/2.4-inch BSI CMOS image sensor. It can capture 22 frames per second, 1.8 times faster than Toshiba’s previous 20-megapixel sensor.

There’s more information here.

Instagram offers free Hyperlapse app

It was only a week or two ago that we were all marveling at a Microsoft Research project that superbly smoothed out videos that had otherwise suffered from lots of motion.

And now Instagram is providing a free app that will let anyone do it on an iPhone.

Instagram said it used its own in-house stabilization technology to let you “shoot polished time-lapse videos that were previously impossible without bulky tripods and expensive equipment.” You can shoot handheld even “while you’re walking, running, jumping or falling.” The result “will be instantly stabilized to smooth out the bumps from the road and give it a cinematic feeling.”

Examples of its use, the company says, include: “Capture an entire sunrise in 10 seconds—even from the back of a moving motorcycle; Walk through the crowds at an all-day music festival, then distill it into a 30 second spot; Capture your bumpy trail run and share your 5k in 5 seconds.”

The app is here.

Wired has a very interesting profile of the developers here. “What was once only possible with a Steadicam or a $15,000 tracking rig is now possible on your iPhone, for free,” the article says.

The gist of it: rather than use a smartphone’s limited computational power to replicate intensive video post-processing, Instagram’s tech uses the phone’s built-in gyroscopes during capture to measure the camera’s movement directly.
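
Instagram hasn’t published its code, but the capture-time idea can be sketched roughly: log the camera’s orientation from the gyroscope for every frame, low-pass filter that orientation track, and then warp each frame by the small difference between its recorded and smoothed orientation. The snippet below is a simplified, hypothetical illustration of that idea, not Instagram’s implementation.

```python
import numpy as np

def smooth_orientations(angles, window=15):
    """Low-pass filter per-frame camera angles with a moving average.
    `angles` is an (N, 3) array of yaw/pitch/roll (radians) logged from the
    gyroscope at capture time -- a simplification; a real pipeline would
    work with quaternions and handle rolling shutter."""
    kernel = np.ones(window) / window
    return np.stack(
        [np.convolve(angles[:, i], kernel, mode="same") for i in range(3)],
        axis=1,
    )

def stabilizing_corrections(angles, window=15):
    """Per-frame corrective rotation (smoothed minus raw) that a renderer
    would apply as an image warp to cancel hand shake."""
    return smooth_orientations(angles, window) - angles

# Usage: corrections = stabilizing_corrections(logged_gyro_angles)
# then warp frame i by the small rotation in corrections[i].
```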


Vine now imports and edits video

Vine makes fun little video clips that get posted to Twitter and other social media. But the app has long had a serious limitation: you could only use video captured within the app itself. There was no using clips from your phone’s camera roll, no importing edited and enhanced video from other tools.

No longer: the newest version “gives you the freedom to create a Vine in any way you want,” the company says. The new app “offers powerful ways to edit your videos as well as the ability to import existing videos on your phone and turn them into Vines. Simply put, this release gives you total creative freedom.” The app also sports improved editing tools.

Vine boasts: “Every day, millions of people open Vine to share memories in the moment. Every month more than 100 million people watch Vines across the web, and there are more than 1 billion loops every day.”

MediaFire automates photo and video syncing on mobile

MediaFire has updated its Android cloud storage app to automatically store, access and share mobile photos and videos; it launched an iOS version earlier this summer.

“In just one week since the launch of our automatic photo and video syncing update for iOS, MediaFire users have used our app to back up and share over 5 million photos and videos online,” the company says.

The service “can be a convenient tool for photographers,” the company adds, “because it supports uploading file hierarchies. Creatives with lots of subfolders can automatically sync all that organized content to the cloud without having to upload individual files.”

The service also supports one-click sharing for a large number of social media services, and includes embed links for major blogging platforms. It also supports “watching” folders, so you can share, follow and track access to specific files.

MediaFire is available for Windows, OS X, iPhone, Android and the Web, and provides 15GB of free cloud storage, or 1TB for $25 per year.

The “online storage and collaboration company” says it now has 37 million active registered users. It was founded in 2006 in The Woodlands, Texas.

Pics.io manages Google-stored photos

A new photo Web app is built on top of Google Web services: Pics.io will let you access, manage, edit and share photos from any device via your Google+ account, with online storage in Google Drive.

The service, pronounced Pixio, will feature asset management functions and rating tools such as stars, flags and color labels, as well as the company’s Raw uploading and editing capabilities, the Next Web reports.

The app, from software development company TopTechPhoto, is now in a beta release, and will “store your entire life’s photo library in secure and redundant storage,” the company adds.

There’s more information here.

Vemory “transforms pictures into video memories”

A new app for the iPhone and iPad is billed as a transformational tool that “automagically” finds your favorite pictures and “creates videos that incorporate comments and likes from social media.”

Vemory claims it provides the easiest way for you to “enjoy pictures you love and also the favorite moments you may have forgotten all about.”

The company says its technology intelligently discovers the top pictures from your camera roll, iCloud and social platforms (Facebook, Twitter, Instagram, Tumblr and Flickr) based on time, location and popularity on social media. Vemory creates videos around “your best pictures of all time and also from recent trips, events, moments and hashtags from you and your friends’ social networks. It’s not just building photo albums, it’s intelligently building your favorite memories from your favorite social apps and camera roll.”
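
Vemory doesn’t spell out how that ranking works. Purely as an illustration of the kind of heuristic the description suggests, here is a toy scorer that ranks photos by social popularity weighted against age; every field name here is invented, not Vemory’s actual data model:

```python
from datetime import datetime, timezone

def score_photo(photo, now=None):
    """Toy ranking heuristic: social popularity discounted by age.
    `photo` is a dict with invented keys ('taken_at', 'likes', 'comments')."""
    now = now or datetime.now(timezone.utc)
    age_days = max((now - photo["taken_at"]).days, 1)
    popularity = 1 + photo.get("likes", 0) + 2 * photo.get("comments", 0)
    return popularity / age_days ** 0.5   # older photos need more engagement

def top_pictures(photos, k=10):
    """Pick the k highest-scoring photos across all sources."""
    return sorted(photos, key=score_photo, reverse=True)[:k]
```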

The Austin, Texas company was founded in 2012.

Google acquires image-analyzing startup Jetpac

San Francisco-based startup Jetpac is the latest Google acquisition, bringing new imaging techniques to the search leader. The purchase amount was not disclosed.

For its Jetpac City Guides, the company algorithmically scanned images to determine how much people liked a location — by recognizing, say, how many photos with smiling faces were taken there. The result could provide unique contextual information.
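
Jetpac’s own pipeline isn’t public, but counting smiling faces is the kind of signal you can approximate with off-the-shelf tools. Here is a minimal sketch using OpenCV’s stock Haar cascades, offered as a stand-in for, not a reconstruction of, Jetpac’s method:

```python
import cv2

# OpenCV ships these pretrained Haar cascade files with its Python package.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def count_smiling_faces(image_path):
    """Return how many detected faces in the photo appear to be smiling."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    smiling = 0
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        if len(smile_cascade.detectMultiScale(face, 1.7, 20)) > 0:
            smiling += 1
    return smiling

# A venue's score could then be, say, the fraction of its photos that
# contain at least one smiling face.
```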

In addition to the photo-analyzing application, Jetpac has tools for real-time local object recognition, and its technology is reportedly based on the work of a Google researcher.

There’s more information here, here, and here.

Microsoft’s amazingly smooth video interpolation

We’ve all seen — and likely been put off by — first-person videos captured by a camera mounted on a bike or a helmet, or simply held in hand, while the shooter goes about some fun activity. The results are jarring, shaky, and sometimes unwatchable.

Or just boring.

Now researchers at Microsoft have developed a method for converting first-person videos “into hyperlapse videos: time-lapse videos with a smoothly moving camera.”

The results are pretty amazing. Go watch the video demo here.

The algorithm reconstructs “a 3D input camera path as well as dense, per-frame proxy geometries,” the researchers say. “We then optimize a novel camera path for the output video that is smooth and passes near the input cameras while ensuring that the virtual camera looks in directions that can be rendered well from the input. Next, we compute geometric proxies for each input frame. These allow us to render the frames from the novel viewpoints on the optimized path. Finally, we generate the novel smoothed, time-lapse video by rendering, stitching, and blending appropriately selected source frames for each output frame.”

Phew! The results, at least, are easy to appreciate. Again, watch the video demo here. Especially the last two minutes.
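
For a concrete feel of just one piece of that pipeline, the camera-path optimization can be caricatured as a smoothing problem: find a new path that is very smooth but stays close to the recovered input camera positions. The sketch below is a simplified, hypothetical analogue of that step (the paper’s real objective also constrains view direction and what can be rendered), not the researchers’ code:

```python
import numpy as np

def optimize_path(input_positions, smoothness=100.0):
    """Minimize ||p - input||^2 + smoothness * ||D2 p||^2 per axis,
    where D2 takes second differences along the path. This trades staying
    near the recovered cameras against a very smooth output trajectory."""
    n = len(input_positions)
    D2 = np.zeros((n - 2, n))               # second-difference operator
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + smoothness * D2.T @ D2  # normal equations
    return np.stack(
        [np.linalg.solve(A, input_positions[:, axis]) for axis in range(3)],
        axis=1,
    )

# Usage: smoothed = optimize_path(np.asarray(recovered_camera_centers))
```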

The Hyperlapse algorithm may soon be available as a Windows app.

Video: Disney combines multiple angles into coherent clip

These days, when one person is capturing an activity on video, odds are a few other folks are as well. But those clips hardly sync up — let alone cut together easily into anything resembling a well-made movie.

Well, now a computer can do the job. That is, a computer running new tools developed at Disney.

Researchers there say their new approach “takes multiple videos captured by social cameras that are carried or worn by members of the group involved in an activity—and produces a coherent “cut” video of the activity. Footage from social cameras contains an intimate, personalized view that reflects the part of an event that was of importance to the camera operator (or wearer). We leverage the insight that social cameras share the focus of attention of the people carrying them. We use this insight to determine where the important “content” in a scene is taking place, and use it in conjunction with cinematographic guidelines to select which cameras to cut to and to determine the timing of those cuts. A trellis graph formulation is used to optimize an objective function that maximizes coverage of the important content in the scene, while respecting cinematographic guidelines such as the 180-degree rule and avoiding jump cuts. We demonstrate cuts of the videos in various styles and lengths for a number of scenarios, including sports games, street performance, family activities, and social get-togethers. We evaluate our results through an in-depth analysis of the cuts in the resulting videos and through comparison with videos produced by a professional editor and existing commercial solutions.”

Okay, in the time it took to read that you could have watched the simple and interesting video here. Sorry about that.
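
If you want the flavor of the trellis-graph step without reading the paper, it boils down to a dynamic program over which camera to show at each moment: maximize coverage of the important content while paying a penalty for every cut. The following is a toy, hypothetical version of that selection step, with invented quality scores and none of the cinematographic rules:

```python
import numpy as np

def select_cameras(quality, switch_cost=1.0):
    """Choose which camera to show at each time step.

    quality[t, c] is an invented score for how well camera c covers the
    important content at time t. The dynamic program maximizes total
    quality minus a fixed penalty per cut -- a toy analogue of the
    trellis-graph objective described above."""
    T, C = quality.shape
    score = quality[0].copy()            # best total ending on each camera
    back = np.zeros((T, C), dtype=int)   # best previous camera per state
    for t in range(1, T):
        new_score = np.empty(C)
        for c in range(C):
            options = score - switch_cost   # cut in from another camera...
            options[c] = score[c]           # ...or stay put, with no penalty
            back[t, c] = int(np.argmax(options))
            new_score[c] = quality[t, c] + options[back[t, c]]
        score = new_score
    path = [int(np.argmax(score))]          # trace the best path backwards
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```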

(Via PopPhoto)