360 Video in 2020

Insta360 One R

We’ve been experimenting in the 360 video / VR headset space for a couple of years now, and it’s been fascinating to follow the trend in real time. In particular, we’ve been working with the Insta360 Pro and the Oculus Go headset to explore academic use cases for these immersive video projects. As we start a new year, recent announcements from both Insta360 and Oculus point towards a diminishing interest in this use case, and in 360 video in general.

As mentioned in a recent blog post, Insta360’s new camera is the One R. It encourages you to “adapt to the action” with two ways to shoot: as a 360 cam or as a 4K 60fps wide-angle lens. It features an AI-driven tracking algorithm to automatically follow moving subjects in your shots. The Auto Frame algorithm automatically detects which portions of a 360 shot would work best within a 16:9 frame. In almost every marketed feature, there’s a subtext of using the 360 camera as a powerful tool for outputting 16:9 video. Coming from one of the leaders in the 360 camera space, this focus isn’t particularly encouraging for the long-term consumption of 360 video.

The viewing of 360 video was always at its most immersive in a headset, which has proved to be one of the biggest barriers to wider adoption, since most viewers are unlikely to even have a headset, let alone find it and put it on just to watch a video. As such, the standalone $200 Oculus Go seemed a natural solution for businesses, who could produce their own 360 content and simply hand over an Oculus Go headset to their client. Recently, however, Oculus dropped the Go from its Oculus for Business platform, suggesting their Oculus Quest is the best solution for most business VR needs. This development sees Oculus leaning more towards support for full Virtual Reality, and less towards immersive 360 video playback.

While certainly not gone from the conversation, excitement around and application of 360 video seem to be waning compared to a couple of years ago. We’ll continue to search for use cases and projects that show the potential of this technology, so please reach out to the DDMC if you find any exciting possibilities.

New Insta360 ONE R

Insta360 just launched their latest 360 camera, the ONE R. It’s actually a modular system and not a single, self-contained camera. Only time will tell, but it seems like the ONE R could be an innovative approach to solving the problem of how to pack the burgeoning features we are seeing in the action and 360 camera spaces into a workable form factor. Certainly Insta360 seems to have doubled down on using 360 as coverage for standard 16:9 action shots.

The ONE R starts with a battery base and a touch screen that sits on top (it can be installed forwards or backwards depending on the use case), next to an empty space that accepts one of the following mods:

  • A 5.7K 360 camera
  • A 4K action camera that records at 60fps for 4K and 200fps for 1080p
  • A 5.3K wide-angle (14.4mm equivalent) mod with a 1-inch sensor, co-developed with camera company Leica (30fps for 5.3K, 60fps for 4K, and 120fps for 1080p)

Key features include:

  • Insta360’s FlowState stabilization is a key part of all three modules.
  • Waterproof to 16 feet, despite the modular design
  • Aerial mod that makes it possible to hide your drone from footage
  • External mic support
  • Various remote control options, including Apple Watch, voice, and a GPS enabled smart remote
  • Selfie stick
  • Motion tracking to lock in on subjects
  • Tons of software/post-production options like bullet time, time lapse, slo-mo, etc.

We’re not seeing a ton of immediate academic use cases for these features, but we’ll certainly keep the ONE R in mind if the right project arises.

Using Thinglink to Create an Interactive 360 Video Experience

As long as I’ve been working with 360 video, one element has always been out of reach: interactivity. Particularly when viewed through a headset, the immersive nature of 360 video lends itself well to exploration and curiosity. The challenge has always been how to add that interactivity. Neither working with an external vendor nor developing an in-house solution seemed worthwhile for our needs. However, the tool Thinglink now offers an intuitive way not only to augment media with interactive annotations, but to link the various media to each other.

Thinglink, as described previously, is a web platform that allows the user to add interactive pop-up graphics onto photos and videos, in 2D or in 360. Duke is piloting the technology, so I took the opportunity to test both the creation and publishing of 360 video through Thinglink.

The creation part couldn’t have been simpler (and in its pursuit of simplicity also feels a bit light on features). I was able to upload a custom 360 video without trouble, and immediately start adding annotated tags. You can see my test video here. There are four primary forms of tags:

  • Labels add a simple text box and are best used for… labeling things. This would be useful in a language learning context where you might want to add, say, the Spanish word for “tree” near a tree visible in the video.
  • Text/Media are fancier versions of labels which include room for a title, description, photo, or external link. This is for when you might want to add a little more context for what you are tagging.
  • Embeds allow you to insert embed codes. This would typically be a video (from either YouTube or Duke’s own Warpwire) but could include surveys or any other platform that provides you an HTML embed code to add to your website.
  • Tour Links allow you to connect individual tagged videos/photos together. If I wanted to provide a tour of the first floor of the Technology Engagement Center, for example, I could start with a video from the main lobby. For the various rooms and hallways visible from the lobby, I could then add an icon that when clicked on moves the viewer to a new video from the perspective of the icon that they clicked.

Adding all of these is as simple as clicking within the video, selecting what kind of tag you want, and then filling in the blanks. My only real gripe here is a lack of customization. You can’t change the size of the icons, though you can design and upload your own if you like. The overall design is also extremely limited. You can’t change text fonts, sizes, etc. There is a global color scheme, which just comes down to a background color, text color, button background, and button text color. In the “Advanced” settings, you can reset the initial POV direction that the 360 video starts in, and you can also toggle “Architectural mode,” which eliminates the fish-eye POV at the expense of less overall visibility.

All in all, it’s incredibly easy to set up and use. Sharing is also pretty straightforward, provided you don’t intend to view the video in an actual VR headset. You can generate a shareable link that is public, unlisted, or only visible to your organization. You can even generate an embed code to place the Thinglink viewer within a website. What I was most curious about, however, was if I could properly view a Thinglink 360 video with our Oculus Go headset. In this regard, there’s a lot of room for improvement.

In principle, this use case is perfectly functional. I was able to access one of Thinglink’s demo 360 videos from within the Oculus Go headset and view and interact with the video with no trouble. The headset recognized the Thinglink video was a 360 video and automatically switched to that mode. A reticle in the center of my field of vision worked as a mouse, in that if I hovered directly over a tag icon, it would “click” and activate the icon, negating the need for an external controller. The only issue was that the window activated when I “clicked” on an icon would sometimes be behind me, and I had no idea anything had happened.

When I tried to view and access my own video, however, I had a lot of trouble. From a simple logistics standpoint, the shareable Thinglink URLs are fairly long and are tedious to input when in a VR headset (I made mine into a TinyURL which slightly helped). When I was finally able to access the video, it worked fine in 2D mode but when I clicked on the goggles icon to put the video into VR Headset mode I was met with a simple black screen. The same went for trying to view the video in this mode on my phone or on desktop. I found that after several minutes of waiting, an image from the video would eventually come up. Even when I was able to see something other than darkness, I discovered that the embedded videos were not functional at all in VR mode.

While the functionality is potentially there to create an interactive 360 video tour in Thinglink and view it within a VR Headset, it’s simply not practical at this point. It’s a niche use case, sure, but one that seems within grasp. If the developers can work out the kinks, this platform would really be a gamechanger. For now, interactive 360 video will have to stay on the flat screen for me.

VR/360 Video at Streaming Media West

While attending the Streaming Media West conference this year, I had the opportunity to check out a panel on the state of 360 video and VR. The panel featured representatives from different parts of the video production industry: journalism, education, marketing, etc. What stood out to me most was the diversity of use cases they shared, and how those applications worked around some of the common challenges native to the platform.

Raj Moorjani, Product Manager at Disney-ABC, discussed how they’ve been using 360 video in their news department as a way to bring viewers deeper into a story. While it’s not fit for all the content they produce, Moorjani found that sometimes it was most effective to share the almost raw, unedited video, simply to give viewers the sense of really being where the story was happening. The quick turnaround helped them keep up with the fast pace of the news.

For more highly produced content, it can be difficult to justify the effort and cost while VR headsets are still not widely adopted. Scott Squires, Creative Director and Co-Founder of production studio Pixvana, pointed out that there was a growing market for enterprise training, where you have more control over the end user having the hardware. Having produced training videos in 360 for waiters on a cruise ship, Squires found that the retention rate for the material was much better than with traditional video. He noted Wal-Mart is even deploying 17,000 headsets to its stores for employee training.

In the consumer space, there’s been a slow adoption of the technology, but the panelists see that speeding up with recent improvements to the hardware. The Oculus Go, a standalone VR headset released this year, received praise for its accessibility and value. The previously arduous stitching and editing workflows have largely been smoothed out as well. However, even with technical advancements, there is still a lack of compelling content for most consumers. Squires predicts that as the tools become even easier to use, that amateur production and home movies could be a huge selling point.

Having only experimented with 360 video over the past year here at Duke, I found it validating that even those who are producing it professionally were grappling with the same challenges. Though we’re still far from widespread adoption, I’ve found there’s a growing enthusiasm for its potential as we learn more about how best to work with this technology. For more, check out the full panel here.

October 2018 Adobe Creative Cloud Update Part 1: Adobe Premiere Pro

It’s fall, pumpkin spice is in the air, the holidays are near, Christmas decorations are going up, and software giant Adobe has just released updates to their entire Creative Cloud suite of applications.  Because the updates are so extensive, I’ve decided to do a multi-part series of DDMC entries that focuses on the new changes in detail for Premiere Pro, After Effects, Photoshop/Lightroom, and a new app, Premiere Rush.  I just downloaded Rush today to my phone to put it through its paces, so I’m saving that application for last, but my first rundown of Premiere Pro’s new features is ready to go!

END TO END VR 180

Premiere Pro now supports full native video editing for 180 VR content, with the addition of a virtual screening room for collaboration.  Specific focal points can be tagged and identified in the same way you would in your boring 2D content.  Previously you had to remove your headset to do any tagging, but now you can keep your HMD (Head Mounted Display) on and keep cutting.  I’m just getting my feet wet with VR, but I can see how this could revolutionize things for production houses integrating VR into their production workflow.  Combined with the robust networking features in Premiere Pro and the symbiotic nature of the Adobe suite of applications, this seems like a nice way to work on VR projects with a larger collaborative scope.

DISPLAY COLOR MANAGEMENT

Adobe has integrated a smart new feature that takes some of the guesswork out of setting your editing station color space.  Premiere Pro can now establish the color space of your particular monitor and adjust itself accordingly to compensate for color irregularities across the suite.  Red stays red no matter if it’s displayed in Premiere Pro, After Effects, or Photoshop!

INTELLIGENT AUDIO CLEANUP

Premiere Pro can now scan your audio and clean it up using two new sliders in the Essential Sound panel.  DeNoise and DeReverb allow you to remove background noise and reverb from your sound, respectively.  Is it a replacement for quality sound capture on site?  No.  But it does add an extra level of simplicity that I’ve only experienced in Final Cut Pro, so I’m happy about this feature.

PERFORMANCE IMPROVEMENTS

Premiere Pro is faster all around, but if you’re cutting on a Mac you should experience a notable boost due to the new hardware-based encoding and decoding for the H.264 and HEVC codecs.  Less rendering time is better rendering time.

SELECTIVE COLOR GRADING

The Lumetri Color tools and grades are becoming more fine-tuned.  This is a welcome addition, as Adobe discontinued SpeedGrade and folded it into Premiere Pro a while ago.  All your favorite Lumetri looks remain, and video can now be adjusted to fit the color space of any still photo or swatch you like.  Colors can also be isolated and targeted for adjustment, which is cool if you want to change a jacket, eye, or sky color.

EXPANDED FORMAT SUPPORT

Adobe Premiere now supports ARRI Alexa LF, Sony Venice V2, and the HEIF (HEIC) capture format used by iPhone 8 and iPhone X.

DATA DRIVEN INFOGRAPHICS

Because of the nature of my work as a videographer for an institution of higher education, this feature actually has me the most excited.  Instructional designers are constantly looking for ways to “jazz up” their boring tables into something visually engaging.  Now there is a whole slew of visual options with data-driven infographics.  All you have to provide is the data in spreadsheet form; then you can drag and drop it onto one of the many elegant templates to build lower thirds, animated pie charts, and more.  It’s a really cool feature I plan to put through its paces on a few projects in place of floating prefabricated pie charts.
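As a quick sketch of what “data in spreadsheet form” can look like, a tiny hypothetical CSV like the one below could feed an animated pie chart template (the column names here are my own invention, not a schema Adobe requires):

```csv
Segment,Percent
Video,45
Audio,30
Graphics,25
```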

All these new additions make Adobe Premiere Pro a solid one stop editing platform but combined with the rest of the Adobe suite, one can easily see the endless pool of creative options that make it an industry standard!

Stay tuned for Part II:  Adobe Rush!

Oculus Announces New Educational Pilot Program and VR Experiences

In August 2018, Oculus announced a new education program that would distribute some of its Rift and Go headsets to a select group of educational institutes in Taiwan, Seattle, and Japan. In addition to access to the technology, the program is also focused on training both students and teachers on how to develop for the platform and use it in the classroom.

Most interestingly, the Japan program is focusing on using VR for distance learning and increasing student access to coursework and other educational materials. While there seems to be huge potential for innovation in this space, it’s not clear from the announcement exactly how the headsets would affect access to the coursework as described. The Oculus Go doesn’t seem equipped for navigating a learning management system, and the Oculus Rift already requires a PC that would supposedly be sufficient on its own. While the benefit of a headset here seems nebulous, I’m eager to see practical applications of this program.

In addition to these programs, Oculus also published a few new educational apps for its headsets. TitanicVR and Hoover Dam: Industrial VR are both immersive experiences that allow you to tour the respective structures and learn about their history and operation.

On the Oculus Go, I was able to try out Breaking Boundaries in Science, a new app that explores the scientific contributions of Jane Goodall, Marie Curie and Grace Hopper. Available for free, the app places you in cartoon recreations of their workspaces, be it a camp site in Gombe or Curie’s lab in Paris. Using a teleportation system to move around, you can examine different objects in the room and listen to audio clips about their significance. While the use of VR is a bit superfluous to the educational impact, the novelty and production value of the app seems like a great way to get kids interested in the history of these women.

Insta360 ONE X

Insta360 steps into the world of action cameras with a big upgrade of their flagship Insta360 ONE.

Here are some highlights of the features that together are generating a lot of buzz for this device:

  • Retains $399 price
  • Insta360’s trademark FlowState stabilization seems exceptional based on the sample videos shown on the company’s website
  • Optional 5-meter and 30-meter clear housing for diving/ watersports
  • Optional disappearing selfie stick
  • 5.7K 30fps, 4K 50fps, 3K 100fps
  • Optional airplane-shaped “drifter” you can insert the device in and toss for dynamic action shots

Full details: https://www.insta360.com/product/insta360-onex/

Wireless Streaming from the Oculus Go

We’ve recently been exploring the potential of 360 video production and how it can best be utilized for our future projects. To view the 360 video, we’ve been using an Oculus Go, which is a wireless VR headset – no computer or phone required. Ideally, we could just hand over the Go to a viewer and they could immediately watch one of our videos. One challenge we found is that the Go does not currently offer a way for those outside the headset to see what the viewer sees (though apparently this feature is in development). With a bit of googling and trial and error, we successfully mirrored the display on a computer.

A quick proof of concept can be viewed here: https://warpwire.duke.edu/w/lD8CAA/

I mostly worked from this guide from Pixvana, but to quickly summarize:

  1. I downloaded the Android Debug Bridge (adb) and saved the folder in my user folder on my Mac Pro.
  2. I made sure my copy of VLC Media Player was up to date.
  3. I put the Oculus Go in Developer mode (which requires setting up an organization account with Oculus).
  4. I made sure the Go and my computer were on the same WiFi network.
  5. With the Go plugged into my computer via USB, I obtained the Go’s IP address by typing "adb shell ip route" into the terminal.
  6. I entered the command "adb tcpip 5555".
  7. I unplugged the Oculus Go.
  8. I entered the command "adb connect IPADDRESS", with IPADDRESS being the same as the one found in step 5.
  9. I entered the command ./adb exec-out "while true; do screenrecord --bit-rate=2m --output-format=h264 --time-limit 180 -; done" | "/Applications/VLC.app/Contents/MacOS/VLC" --demux h264 --h264-fps=60 --clock-jitter=0 -

From there, VLC displayed the streaming video output from the Oculus Go. There was noticeable lag (3 seconds or more), but otherwise it worked pretty seamlessly. The only trouble is it’s tough to view the mirrored stream on the desktop if you still have the headset on!
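Collected together, the steps above look roughly like the script below. This is only a sketch with some assumptions of my own: the `mirror` and `build_record_cmd` names are mine, pulling the IP address with `awk '{print $9}'` is a guess that matches typical `ip route` output, and the VLC path assumes a standard macOS install.

```shell
#!/usr/bin/env bash
# Sketch of the Oculus Go wireless-mirroring steps above.
# Assumes adb is on your PATH, VLC is installed in /Applications,
# and the Go is already in Developer mode and authorized over USB.

VLC="/Applications/VLC.app/Contents/MacOS/VLC"
PORT=5555  # adb's default TCP port

# The screenrecord loop from step 9, which streams raw H.264 to stdout.
build_record_cmd() {
  echo 'while true; do screenrecord --bit-rate=2m --output-format=h264 --time-limit 180 -; done'
}

mirror() {
  # Step 5: read the headset's WiFi address while it is plugged in via USB
  # (the src address is typically field 9 of the `ip route` output).
  local ip_addr
  ip_addr=$(adb shell ip route | awk '{print $9}' | head -n1)

  adb tcpip "$PORT"                   # step 6: switch adb to TCP mode
  echo "Unplug the headset, then press Enter"; read -r   # step 7
  adb connect "${ip_addr}:${PORT}"    # step 8: reconnect over WiFi

  # Step 9: pipe the headset's screen into VLC.
  adb exec-out "$(build_record_cmd)" |
    "$VLC" --demux h264 --h264-fps=60 --clock-jitter=0 -
}

# Guarded so nothing runs unless you ask for it (a headset is required).
if [ "${RUN_MIRROR:-0}" = "1" ]; then mirror; fi
```

With the headset plugged in over USB, running the file with `RUN_MIRROR=1` should end with VLC showing the stream; the environment-variable guard is just a convenience so sourcing the file doesn’t fire adb commands by accident.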

I also tested an app called Vysor. Vysor largely eliminates the terminal commands and is easier to use, but plays an ad every 30 minutes. However, I did notice the lag is significantly less noticeable. A paid upgrade will also allow for higher-quality mirroring and a shareable link for people to view the stream remotely.

GoPro Fusion 360 Camera

One of our goals on the DDMC forum over the past couple years has been to keep pace with innovations in 360 camera technology and their application at Duke. We’ve covered milestones from the still awesome Insta360 Nano introduced in early 2017 all the way to the new 6-lens Insta360 Pro, which opens the door to 8K in the prosumer sphere. Since GoPro is such an important player in the world of portable action cameras, we wanted to note their foray into the 360 camera space with their new GoPro Fusion (~$700). While GoPro’s claim that the Fusion is “arguably the most versatile creative tool ever made” is, well, arguable, it is an interesting camera and worth considering if you’re planning on purchasing a 360 camera.

As you would expect from a company built around sports footage, one of the benefits of the Fusion is its durable design and the thought that has gone into how it can function as an ergonomic accessory for someone engaged in physical activity. For example, it has a solid hand-held design and can be extended using a disappearing selfie stick that most reviewers seem to appreciate. In addition, it can be voice activated, making it easy to operate the unit from a distance.

Benefits

  • Durable housing
  • Selfie stick attachment disappears when aligned with camera body
  • Built-in image stabilization (non-gimbal)
  • High res (5.2K)
  • “OverCapture” is a well-conceived framework for accessing and exporting segments of video in post production
  • Voice control

Drawbacks

  • One of the significant differences between the Fusion and other 360 cameras is the requirement for two SD cards: each of the two lenses writes its footage separately to its own card. This means that in order to obtain a full 360 video you’ll need to use the editing software, which adds time and difficulty to the process of creating a video. Exporting footage also takes a long time: between 20 and 45 minutes per minute of footage in GoPro Fusion Studio.
  • Large file sizes for exported videos: ~4.5 GB/minute at 5.2K in ProRes format (~1 GB/minute at 4K H.264)
  • The unit can get hot when operating
  • Mobile software hasn’t been getting great reviews

Here is a fairly thorough review of the Fusion that delves into many of its nuances and could be helpful if you’re considering making a purchase: