Using Thinglink to Create an Interactive 360 Video Experience

As long as I’ve been working with 360 video, one element has always been out of reach: interactivity. Particularly when viewed through a headset, the immersive nature of 360 video lends itself well to exploration and curiosity. The challenge has always been how to add that interactivity. Neither working with an external vendor nor developing an in-house solution seemed worthwhile for our needs. However, the tool Thinglink now offers an intuitive way not only to augment media with interactive annotations, but also to link various media to each other.

Thinglink, as described previously, is a web platform that allows the user to add interactive pop-up graphics onto photos and videos, in 2D or in 360. Duke is piloting the technology, so I took the opportunity to test both the creation and publishing of 360 video through Thinglink.

The creation part couldn’t have been simpler (and in its pursuit of simplicity also feels a bit light on features). I was able to upload a custom 360 video without trouble, and immediately start adding annotated tags. You can see my test video here. There are four primary forms of tags:

  • Labels add a simple text box that is best used for… labeling things. This would be useful in a language-learning context where you might want to add, say, the Spanish word for “tree” near a tree visible in the video.
  • Text/Media tags are fancier versions of labels that include room for a title, description, photo, or external link. These are for cases where you might want to add a little more context to what you are tagging.
  • Embeds allow you to insert embed codes. This would typically be a video (from either YouTube or Duke’s own Warpwire) but could include surveys or any other platform that provides an HTML embed code to add to your website.
  • Tour Links allow you to connect individual tagged videos/photos together. If I wanted to provide a tour of the first floor of the Technology Engagement Center, for example, I could start with a video from the main lobby. For the various rooms and hallways visible from the lobby, I could then add an icon that when clicked on moves the viewer to a new video from the perspective of the icon that they clicked.

Adding all of these is as simple as clicking within the video, selecting what kind of tag you want, and then filling in the blanks. My only real gripe here is a lack of customization. You can’t change the size of the icons, though you can design and upload your own if you like. The overall design options are also extremely limited. You can’t change text fonts, sizes, etc. There is a global color scheme, which comes down to just a background color, text color, button background color, and button text color. In the “Advanced” settings, you can set the initial POV direction that the 360 video starts in, and you can also toggle “Architectural mode,” which eliminates the fish-eye POV at the expense of less overall visibility.

All in all, it’s incredibly easy to set up and use. Sharing is also pretty straightforward, provided you don’t intend to view the video in an actual VR headset. You can generate a shareable link that is public, unlisted, or only visible to your organization. You can even generate an embed code to place the Thinglink viewer within a website. What I was most curious about, however, was if I could properly view a Thinglink 360 video with our Oculus Go headset. In this regard, there’s a lot of room for improvement.

In principle, this use case is perfectly functional. I was able to access one of Thinglink’s demo 360 videos from within the Oculus Go headset and view and interact with the video with no trouble. The headset recognized the Thinglink video was a 360 video and automatically switched to that mode. A reticle in the center of my field of vision worked as a mouse: if I hovered directly over a tag icon, it would “click” and activate the icon, negating the need for an external controller. The only issue was that the window activated when I “clicked” on an icon would sometimes be behind me, and I had no idea anything had happened.

When I tried to view and access my own video, however, I had a lot of trouble. From a simple logistics standpoint, the shareable Thinglink URLs are fairly long and tedious to input when in a VR headset (I made mine into a TinyURL, which helped slightly). When I was finally able to access the video, it worked fine in 2D mode, but when I clicked the goggles icon to put the video into VR headset mode I was met with a simple black screen. The same went for trying to view the video in this mode on my phone or on desktop. I found that after several minutes of waiting, an image from the video would eventually come up. Even when I could see something other than darkness, I discovered that the embedded videos were not functional at all in VR mode.

While the functionality is potentially there to create an interactive 360 video tour in Thinglink and view it within a VR headset, it’s simply not practical at this point. It’s a niche use case, sure, but one that seems within reach. If the developers can work out the kinks, this platform could really be a game-changer. For now, interactive 360 video will have to stay on the flat screen for me.

ThingLink Pilot at Duke Has Potential for 360 Video, Images

Duke Learning Innovation recently launched a new pilot of a tool called ThingLink. ThingLink offers the ability to annotate images and videos using other images, videos, and text to create visually compelling, interactive experiences. One core use case for ThingLink is to start with a graphic (such as a map) or a photograph as a base and place buttons in strategic places that users can click to expose more information. ThingLinks can also link to other ThingLinks to create structured learning experiences.

ThingLink Example

The screenshot above is from an example project on ThingLink’s “Featured” page by Encounter Edu. In this example, viewers can click on the “+” signs to reveal more information about each portion of the carbon cycle.

While the creation of learning objects like these could have wide value for education, one aspect of ThingLink we think DDMC-ers might find intriguing is its AR/VR authoring capabilities. A challenge for 360 video, even with professionally produced material, is that viewers sometimes feel lost, clicking around trying to figure out what to look at next. With a tool like ThingLink’s VR editor, you can curate the experience by creating guideposts, and in doing so provide your users with a potentially more rewarding experience as they engage with 360 videos and images.

The OIT Media Technologies production team will be reviewing ThingLink’s VR/AR capabilities and posting its findings to the blog.

If you or others on your team would like to test ThingLink out, you can apply to be a part of the pilot here: https://duke.qualtrics.com/jfe/form/SV_6R07iAqB2jeXYGh


Meeting Owl Review

We had an opportunity to test the Meeting Owl from Owl Labs this past week and wanted to share our thoughts on this unique conference room technology. The $799 webcam, mic, and speaker all-in-one unit is intended to sit at the center of the conference room table. What makes the Meeting Owl worth nearly $800? If I were reviewing the device simply on the speaker and mic array, I’d say this isn’t all that exciting an offering. There are plenty of sub-$200 mic/speaker combos that would perform as well or better. But the Meeting Owl’s unique 360 camera at the top makes the unit stand out from its peers.

When sharing video, the device segments the camera feed into zones. At the top, there is a side-to-side 360-degree view of the room, and below is either one, two, or three “active speaker” zones intelligently selected by the Meeting Owl. So, when two people in the room start talking, the camera segments the lower area of the feed to accommodate the conversation. Overall, we found the intelligence of the camera to be rather good. Infrequently, it would pause a bit too long on a speaker who had stopped talking, or incorrectly divide up the lower section, prioritizing the wrong person… but considering the alternative is physically moving the camera… it’s a nice feature that livens up the meeting experience.

Pros:

  • Incredibly easy to set up and configure (under 10 minutes)
  • 360 camera works as advertised
  • Good quality internal mics
  • Platform agnostic (works with Skype, WebEx, Zoom, Meetings, etc.)

Cons:

  • The image quality isn’t great (it’s a 720p sensor, so the sections are only standard definition, or worse, and it shows)
  • Split screen can be distracting when in overdrive (sometimes it moves too slowly, other times it seems to move too quickly… this may be improved with a firmware update)
  • At $799, Owl Labs is in the Logitech MeetUp zone. While the products are rather different, each has its pros and cons depending upon the expectations of the user.

Closing Thoughts:

Overall, we enjoyed the product and can see it being deployed in a range of spaces. It also signals a new era in intelligent conferencing technologies. The local group at Duke that purchased the device also has plans to deploy it in a classroom where Zoom will be used for hybrid teaching sessions (some students local, others remote). It will be interesting to see how the far side reacts to the automated pan/tilt of the camera and whether it can keep up with some of our most active faculty. My primary complaint about the device is that the image is too blurry. Also, the 360 lens tends to center faces in the lower image area. Ideally, it would crop to a few inches above the top of the head of the active speaker(s). Perhaps we’ll see an HD or 4K version in the future that addresses a few of these shortcomings.

VR/360 Video at Streaming Media West

While attending the Streaming Media West conference this year, I had the opportunity to check out a panel on the state of 360 video and VR. The panel featured representatives from different parts of the video production industry: journalism, education, marketing, etc. What stood out to me most was the diversity of applications and use cases they shared, and how those applications worked around some of the common challenges native to the platform.

Raj Moorjani, Product Manager at Disney-ABC, discussed how they’ve been using 360 video in their news department as a way to bring viewers deeper into a story. While it’s not fit for all the content they produce, Moorjani found that sometimes it was most effective to simply share almost raw, unedited video, giving viewers the sense of really being where the story was happening. The quick turnaround helped them keep up with the fast pace of the news.

For more highly produced content, it can be difficult to justify the effort and cost while VR headsets are still not widely adopted. Scott Squires, Creative Director and Co-Founder of production studio Pixvana, pointed out that there is a growing market for enterprise training, where you have more control over whether the end user has the hardware. Having produced 360 training videos for waiters on a cruise ship, Squires found that the retention rate for the material was much better than with traditional video. He noted that Wal-Mart is even deploying 17,000 headsets to its stores for employee training.

In the consumer space, adoption of the technology has been slow, but the panelists see that speeding up with recent improvements to the hardware. The Oculus Go, a standalone VR headset released this year, received praise for its accessibility and value. The previously arduous stitching and editing workflows have largely been smoothed out as well. However, even with technical advancements, there is still a lack of compelling content for most consumers. Squires predicts that as the tools become even easier to use, amateur production and home movies could be a huge selling point.

Having only experimented with 360 video over the past year here at Duke, I found it validating that even those who are producing it professionally were grappling with the same challenges. Though we’re still far from widespread adoption, I’ve found there’s a growing enthusiasm for its potential as we learn more about how to best work with this technology. For more, check out the full panel here.

October 2018 Adobe Creative Cloud Update Part 1: Adobe Premiere Pro

It’s fall, pumpkin spice is in the air, the holidays are approaching, Christmas decorations are going up, and software giant Adobe has just released updates to its entire Creative Cloud suite of applications.  Because the updates are so extensive, I’ve decided to do a multi-part series of DDMC entries that focuses on the new changes in detail for Premiere Pro, After Effects, Photoshop/Lightroom, and a new app, Premiere Rush.  I just downloaded Rush to my phone today to put it through its paces, so I’m saving that application for last, but my first rundown of Premiere Pro’s new features is ready to go!

END TO END VR 180

Premiere Pro now supports full native video editing for VR 180 content, with the addition of a virtual screening room for collaboration.  Specific focal points can be tagged and identified the same way you would in your boring 2D content.  Before, you had to remove your headset to do any tagging, but now you can keep your HMD (head-mounted display) on and keep cutting.  I’m just getting my feet wet with VR, but I can see how this could revolutionize the workflow for production houses integrating VR into their pipelines.  Combined with Premiere Pro’s robust networking features and the symbiotic nature of the Adobe suite of applications, this seems like a nice way to work on VR projects with a larger collaborative scope.

DISPLAY COLOR MANAGEMENT

Adobe has integrated a smart new feature that takes some of the guesswork out of setting up your editing station’s color space.  Premiere Pro can now detect the color space of your particular monitor and adjust itself accordingly to compensate for color irregularities across the suite.  Red stays red whether it’s displayed in Premiere Pro, After Effects, or Photoshop!

INTELLIGENT AUDIO CLEANUP

Premiere Pro can now scan your audio and clean it up using two new sliders in the Essential Sound panel.  DeNoise and DeReverb allow you to remove background noise and reverb from your audio, respectively.  Is it a replacement for quality sound capture on site?  No.  But it does add an extra level of simplicity that I’ve only experienced in Final Cut Pro, so I’m happy about this feature.

PERFORMANCE IMPROVEMENTS

Premiere Pro is faster all around, but if you’re cutting on a Mac you should experience a notable boost due to the new hardware-based encoding and decoding for the H.264 and HEVC codecs.  Less rendering time is better rendering time.

SELECTIVE COLOR GRADING

The Lumetri Color tools and grades are becoming more fine-tuned.  This is a welcome addition, as Adobe discontinued SpeedGrade and folded it into Premiere Pro a while ago.  All your favorite Lumetri looks remain, but video can now be adjusted to fit the color space of any still photo or swatch you like.  Colors can also be isolated and targeted for adjustment, which is cool if you want to change a jacket, eye, or sky color.

EXPANDED FORMAT SUPPORT

Adobe Premiere now supports ARRI Alexa LF, Sony Venice V2, and the HEIF (HEIC) capture format used by iPhone 8 and iPhone X.

DATA DRIVEN INFOGRAPHICS

Because of the nature of my work as a videographer for an institution of higher education, this feature actually has me the most excited.  Instructional designers are constantly looking for ways to “jazz up” their boring tables into something visually engaging.  Now there is a whole slew of visual options with data-driven infographics.  All you have to provide is the data in spreadsheet form; then you can drag and drop it onto one of the many elegant templates to build lower thirds, animated pie charts, and more.  It’s a really cool feature I plan to put through its paces on a few projects in place of floating prefabricated pie charts.

All these new additions make Adobe Premiere Pro a solid one-stop editing platform, and combined with the rest of the Adobe suite, one can easily see the endless pool of creative options that make it an industry standard!

Stay tuned for Part II:  Premiere Rush!

Oculus Announces New Educational Pilot Program and VR Experiences

In August 2018, Oculus announced a new education program that will distribute some of its Rift and Go headsets to a select group of educational institutions in Taiwan, Seattle, and Japan. In addition to access to the technology, the program is also focused on training both students and teachers on how to develop for the platform and use it in the classroom.

Most interestingly, the Japan program is focusing on using VR for distance learning and increasing student access to coursework and other educational materials. While there seems to be huge potential for innovation in this space, it’s not clear from the announcement exactly how the headsets would affect access to coursework as described. The Oculus Go doesn’t seem equipped for navigating a learning management system, and the Oculus Rift already requires a PC that would presumably be sufficient on its own. While the benefit of a headset here seems nebulous, I’m eager to see practical applications of this program.

In addition to these programs, Oculus also published a few new educational apps for its headsets. TitanicVR and Hoover Dam: Industrial VR are both immersive experiences that allow you to tour the respective structures and learn about their history and operation.

On the Oculus Go, I was able to try out Breaking Boundaries in Science, a new app that explores the scientific contributions of Jane Goodall, Marie Curie and Grace Hopper. Available for free, the app places you in cartoon recreations of their workspaces, be it a camp site in Gombe or Curie’s lab in Paris. Using a teleportation system to move around, you can examine different objects in the room and listen to audio clips about their significance. While the use of VR is a bit superfluous to the educational impact, the novelty and production value of the app seems like a great way to get kids interested in the history of these women.

Insta360 ONE X

Insta360 steps into the world of action cameras with a big upgrade to its flagship Insta360 ONE.

Here are some highlights of the features that together are generating a lot of buzz for this device:

  • Retains $399 price
  • Insta360’s trademark FlowState stabilization seems exceptional based on the sample videos shown on the company’s website
  • Optional 5-meter and 30-meter clear housings for diving/watersports
  • Optional disappearing selfie stick
  • 5.7K 30fps, 4K 50fps, 3K 100fps
  • Optional airplane-shaped “drifter” you can insert the device in and toss for dynamic action shots

Full details: https://www.insta360.com/product/insta360-onex/


Wireless Streaming from the Oculus Go

We’ve recently been exploring the potential of 360 video production and how it can best be utilized for our future projects. To view the 360 video, we’ve been using an Oculus Go which is a wireless VR headset – no computer or phone required. Ideally, we could just hand over the Go to a viewer and they could immediately watch one of our videos. One challenge we found was the Go does not currently allow a way for those outside the headset to see what the viewer currently sees (though apparently this feature is in development). With a bit of googling and trial and error, we successfully mirrored the display on a computer.

A quick proof of concept can be viewed here: https://warpwire.duke.edu/w/lD8CAA/

I mostly worked from this guide from Pixvana, but to quickly summarize:

  1. I downloaded the Android Debug Bridge (adb) and saved the folder in my user folder on my MacPro.
  2. I made sure my copy of VLC Media Player was up to date.
  3. I put the Oculus Go in Developer mode (which you’ll need to set up an organization account with Oculus to do).
  4. I made sure the Go and my computer were on the same WiFi network.
  5. With the Go plugged into my computer via USB, I obtained the Go’s IP address by typing into the terminal “adb shell ip route”.
  6. I entered the command “adb tcpip 5555”.
  7. I unplugged the Oculus Go.
  8. I entered the command “adb connect IPADDRESS”, with IPADDRESS being the same address found in step 5.
  9. I entered the command: ./adb exec-out "while true; do screenrecord --bit-rate=2m --output-format=h264 --time-limit 180 -; done" | "/Applications/VLC.app/Contents/MacOS/VLC" --demux h264 --h264-fps=60 --clock-jitter=0 -

From there, VLC displayed the streaming video output from the Oculus Go. There was noticeable lag (3 seconds or more), but otherwise it worked pretty seamlessly. The only trouble is it’s tough to view the mirrored stream on the desktop if you still have the headset on!
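If you plan to do this regularly, the steps above can be consolidated into a small shell script. This is just my own sketch, not an official tool: it assumes adb is on your PATH, VLC is installed at the default macOS location, and the Go starts out plugged in over USB in Developer mode. The `run` helper and the `DRY_RUN`/`GO_IP` variables are conveniences I added so you can preview the commands before running them for real.

```shell
# Path to the VLC binary inside the default macOS app bundle
VLC="/Applications/VLC.app/Contents/MacOS/VLC"

# Echo each command; with DRY_RUN=1, print it without executing
run() {
  echo "+ $*"
  [ "${DRY_RUN:-0}" = "1" ] || eval "$*"
}

mirror_go() {
  # Step 5: find the Go's IP address while it is plugged in over USB
  # (export GO_IP yourself to skip the lookup)
  ip="${GO_IP:-$(adb shell ip route | awk '{print $9; exit}')}"

  run "adb tcpip 5555"          # Step 6: switch adb to its default TCP/IP port
  # Step 7: unplug the Oculus Go here
  run "adb connect $ip:5555"    # Step 8: reconnect wirelessly
  # Step 9: pipe the headset's screen recording into VLC
  run "adb exec-out 'while true; do screenrecord --bit-rate=2m --output-format=h264 --time-limit 180 -; done' | \"$VLC\" --demux h264 --h264-fps=60 --clock-jitter=0 -"
}
```

Running `DRY_RUN=1 mirror_go` first is a handy way to sanity-check the commands (and the detected IP address) before committing to the full streaming pipeline.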

I also tested an app called Vysor. Vysor largely eliminates the terminal commands and is easier to use, but it plays an ad every 30 minutes. On the plus side, I did notice the lag is significantly less noticeable. A paid upgrade will also allow for higher-quality mirroring and a shareable link for people to view the stream remotely.

GoPro Fusion 360 Camera

One of our goals on the DDMC forum over the past couple of years has been to keep pace with innovations in 360 camera technology and their application at Duke. We’ve covered milestones from the still-awesome Insta360 Nano, introduced in early 2017, all the way to the new six-lens Insta360 Pro, which opens the door to 8K in the prosumer sphere. Since GoPro is such an important player in the world of portable action cameras, we wanted to note its foray into the 360 camera space with the new GoPro Fusion (~$700). While GoPro’s claim that the Fusion is “arguably the most versatile creative tool ever made” is, well, arguable, it is an interesting camera and worth considering if you’re planning to purchase a 360 camera.

As you would expect from a company built around sports footage, one of the benefits of the Fusion is its durable design and the thought that has gone into how it can function as an ergonomic accessory for someone engaged in physical activity. For example, it has a solid hand-held design and can be extended using a disappearing selfie stick that most reviewers seem to appreciate. In addition, it can be voice-activated, making it easy to operate the unit from a distance.

Benefits

  • Durable housing
  • Selfie stick attachment disappears when aligned with camera body
  • Built-in image stabilization (non-gimbal)
  • High res (5.2K)
  • “OverCapture” is a well-conceived framework for accessing and exporting segments of video in post production
  • Voice control

Drawbacks

  • One of the significant differences between the Fusion and other 360 cameras is the requirement for two SD cards; each of the two lenses writes its footage separately to its own card. This means that in order to obtain a full 360 video you’ll need to use the editing software, which adds time and difficulty to the process of creating a video. Exporting footage also takes a long time: between 20 and 45 minutes per minute of footage in GoPro Fusion Studio.
  • Large file sizes for exported videos: ~4.5 GB/minute at 5.2K in ProRes format (~1 GB/minute at 4K H.264)
  • The unit can get hot when operating
  • Mobile software hasn’t been getting great reviews

Here is a fairly thorough review of the Fusion that delves into many of its nuances and could be helpful if you’re considering making a purchase: