Using Thinglink to Create an Interactive 360 Video Experience

As long as I’ve been working with 360 video, one element has always been out of reach: interactivity. Particularly when viewed through a headset, the immersive nature of 360 video lends itself well to exploration and curiosity. The challenge has always been how to add that interactivity. Neither working with an external vendor nor developing an in-house solution seemed worthwhile for our needs. However, the tool Thinglink now offers an intuitive way not only to augment media with interactive annotations, but to link various pieces of media to each other.

Thinglink, as described previously, is a web platform that allows the user to add interactive pop-up graphics onto photos and videos, in 2D or in 360. Duke is piloting the technology, so I took the opportunity to test both the creation and publishing of 360 video through Thinglink.

The creation part couldn’t have been simpler (and in its pursuit of simplicity also feels a bit light on features). I was able to upload a custom 360 video without trouble, and immediately start adding annotated tags. You can see my test video here. There are four primary forms of tags:

  • Labels add a simple text box and are best used for… labeling things. This would be useful in a language-learning context where you might want to add, say, the Spanish word for “tree” near a tree visible in the video.
  • Text/Media tags are fancier versions of labels that include room for a title, description, photo, or external link. This is for when you might want to add a little more context to what you are tagging.
  • Embeds allow you to insert embed codes. This would typically be a video (from either YouTube or Duke’s own Warpwire) but could include surveys or any other platform that provides you an HTML embed code to add to your website.
  • Tour Links allow you to connect individual tagged videos/photos together. If I wanted to provide a tour of the first floor of the Technology Engagement Center, for example, I could start with a video from the main lobby. For the various rooms and hallways visible from the lobby, I could then add an icon that, when clicked, moves the viewer to a new video from the perspective of the icon they clicked.

Adding all of these is as simple as clicking within the video, selecting what kind of tag you want, and then filling in the blanks. My only real gripe here is a lack of customization. You can’t change the size of the icons, though you can design and upload your own if you like. The overall design is also extremely limited. You can’t change text fonts, sizes, etc. There is a global color scheme, which just comes down to a background color, text color, button background color, and button text color. In the “Advanced” settings, you can reset the initial POV direction that the 360 video starts in, and you can also toggle “Architectural mode,” which eliminates the fish-eye POV at the expense of less overall visibility.

All in all, it’s incredibly easy to set up and use. Sharing is also pretty straightforward, provided you don’t intend to view the video in an actual VR headset. You can generate a shareable link that is public, unlisted, or only visible to your organization. You can even generate an embed code to place the Thinglink viewer within a website. What I was most curious about, however, was if I could properly view a Thinglink 360 video with our Oculus Go headset. In this regard, there’s a lot of room for improvement.

In principle, this use case is perfectly functional. I was able to access one of Thinglink’s demo 360 videos from within the Oculus Go headset and view and interact with the video with no trouble. The headset recognized the Thinglink video was a 360 video and automatically switched to that mode. A reticle in the center of my field of vision worked as a mouse: if I hovered directly over a tag icon, it would “click” and activate the icon, negating the need for an external controller. The only issue was that the window activated when I “clicked” on an icon would sometimes be behind me, and I had no idea anything had happened.

When I tried to view and access my own video, however, I had a lot of trouble. From a simple logistics standpoint, the shareable Thinglink URLs are fairly long and are tedious to input when in a VR headset (I made mine into a TinyURL which slightly helped). When I was finally able to access the video, it worked fine in 2D mode but when I clicked on the goggles icon to put the video into VR Headset mode I was met with a simple black screen. The same went for trying to view the video in this mode on my phone or on desktop. I found that after several minutes of waiting, an image from the video would eventually come up. Even when I was able to see something other than darkness, I discovered that the embedded videos were not functional at all in VR mode.

While the functionality is potentially there to create an interactive 360 video tour in Thinglink and view it within a VR headset, it’s simply not practical at this point. It’s a niche use case, sure, but one that seems within reach. If the developers can work out the kinks, this platform would really be a game-changer. For now, interactive 360 video will have to stay on the flat screen for me.

VR/360 Video at Streaming Media West

While attending the Streaming Media West conference this year, I had the opportunity to check out a panel on the state of 360 video and VR. The panel featured representatives from different parts of the video production industry: journalism, education, marketing, etc. What stood out to me most was the diversity of use cases they shared, and how those applications worked around some of the common challenges native to the medium.

Raj Moorjani, Product Manager at Disney-ABC, discussed how they’ve been using 360 video in their news department as a way to bring viewers deeper into a story. While it’s not fit for all the content they produce, Moorjani found that sometimes it was most effective to simply share the almost raw or unedited video; to simply give viewers the sense of really being where the story was happening. The quick turnaround helped them keep up with the fast pace of the news.

For more highly produced content, it can be difficult to justify the effort and cost while VR headsets are still not widely adopted. Scott Squires, Creative Director and Co-Founder of production studio Pixvana, pointed out that there was a growing market for enterprise training, where you have more control over the end user having the hardware. Having produced training videos in 360 for waiters on a cruise ship, Squires found that the retention rate for the material was much better than with traditional video. He noted Wal-Mart is even deploying 17,000 headsets to its stores for employee training.

In the consumer space, there’s been a slow adoption of the technology, but the panelists see that speeding up with recent improvements to the hardware. The Oculus Go, a standalone VR headset released this year, received praise for its accessibility and value. The previously arduous stitching and editing workflows have largely been smoothed out as well. However, even with technical advancements, there is still a lack of compelling content for most consumers. Squires predicts that as the tools become even easier to use, that amateur production and home movies could be a huge selling point.

Having only experimented with 360 video over the past year here at Duke, I found it validating that even those who are producing it professionally were grappling with the same challenges. Though we’re still far from widespread adoption, I’ve found there’s a growing enthusiasm for its potential as we learn more about how to best work with this technology. For more, check out the full panel here.

October 2018 Adobe Creative Cloud Update Part 1: Adobe Premiere Pro

It’s fall, pumpkin spice is in the air, Christmas decorations are going up for the holidays, and software giant Adobe has just released updates to its entire Creative Cloud suite of applications. Because the updates are so extensive, I’ve decided to do a multi-part series of DDMC entries that covers the new changes in detail for Premiere Pro, After Effects, Photoshop/Lightroom, and a new app, Premiere Rush. I just downloaded Rush to my phone today to put it through its paces, so I’m saving that application for last, but my first rundown of Premiere Pro’s new features is ready to go!


Premiere Pro now supports full native video editing for 180 VR content, with the addition of a virtual screening room for collaboration. Specific focal points can be tagged and identified in the same way you would in your boring 2D content. Before, you had to remove your headset to do any tagging, but now you can keep your HMD (Head Mounted Display) on and keep cutting. I’m just getting my feet wet with VR, but I can see how this could revolutionize the workflow for production houses integrating VR into their pipelines. Combined with the robust networking features in Premiere Pro and the symbiotic nature of the Adobe suite of applications, this seems like a nice way to work on VR projects with a larger collaborative scope.


Adobe has integrated a smart new feature that takes some of the guesswork out of setting your editing station color space.  Premiere Pro can now establish the color space of your particular monitor and adjust itself accordingly to compensate for color irregularities across the suite.  Red stays red no matter if it’s displayed in Premiere Pro, After Effects, or Photoshop!


Premiere Pro can now scan your audio and clean it up using two new sliders in the Essential Sound panel. DeNoise and DeReverb allow you to remove background noise and reverb from your sound, respectively. Is it a replacement for quality sound capture on site? No. But it does add an extra level of simplicity that I’ve only experienced in Final Cut Pro, so I’m happy about this feature.


Premiere Pro is faster all around, but if you’re cutting on a Mac you should experience a notable boost due to the new hardware-based encoding and decoding for the H.264 and HEVC codecs. Less rendering time is better rendering time.


Lumetri Color tools and grades are becoming more fine-tuned. This is a welcome addition, as Adobe discontinued SpeedGrade and folded it into Premiere Pro a while ago. All your favorite Lumetri looks still remain, but video can now be adjusted to fit the color space of any still photo or swatch you like. Colors can also be isolated and targeted for adjustment, which is cool if you want to change a jacket, eye, or sky color.


Adobe Premiere now supports ARRI Alexa LF, Sony Venice V2, and the HEIF (HEIC) capture format used by iPhone 8 and iPhone X.


Because of the nature of my work as a videographer for an institution of higher education, this feature actually has me the most excited. Instructional designers are constantly looking for ways to “jazz up” their boring tables into something visually engaging. Now there is a whole slew of visual options with data-driven infographics. All you have to provide is the data in spreadsheet form; then you can drag and drop it onto one of the many elegant templates to build lower thirds, animated pie charts, and more. It’s a really cool feature I plan to put through its paces on a few projects in place of floating prefabricated pie charts.

All these new additions make Adobe Premiere Pro a solid one stop editing platform but combined with the rest of the Adobe suite, one can easily see the endless pool of creative options that make it an industry standard!

Stay tuned for Part II:  Adobe Rush!

Oculus Announces New Educational Pilot Program and VR Experiences

In August 2018, Oculus announced a new education program that would distribute some of its Rift and Go headsets to a select group of educational institutes in Taiwan, Seattle, and Japan. In addition to access to the technology, the program is also focused on training both students and teachers on how to develop for the platform and use it in the classroom.

Most interestingly, the Japan program is focusing on using VR for distance learning and increasing student access to coursework and other educational materials. While there seems to be huge potential for innovation in this space, it’s not clear from the announcement exactly how the headsets would affect access to the coursework as described. The Oculus Go doesn’t seem equipped for navigating a learning management system, and the Oculus Rift already requires a PC that would supposedly be sufficient on its own. While the benefit of a headset here seems nebulous, I’m eager to see practical applications of this program.

In addition to these programs, Oculus also published a few new educational apps for its headsets. TitanicVR and Hoover Dam: Industrial VR are both immersive experiences that allow you to tour the respective structures and learn about their history and operation.

On the Oculus Go, I was able to try out Breaking Boundaries in Science, a new app that explores the scientific contributions of Jane Goodall, Marie Curie and Grace Hopper. Available for free, the app places you in cartoon recreations of their workspaces, be it a camp site in Gombe or Curie’s lab in Paris. Using a teleportation system to move around, you can examine different objects in the room and listen to audio clips about their significance. While the use of VR is a bit superfluous to the educational impact, the novelty and production value of the app seems like a great way to get kids interested in the history of these women.

Insta360 ONE X

Insta360 steps into the world of action cameras with a big upgrade of their flagship Insta360 ONE.

Here are some highlights of the features that together are generating a lot of buzz for this device:

  • Retains $399 price
  • Insta360’s trademark FlowState stabilization seems exceptional based on the sample videos shown on the company’s website
  • Optional 5-meter and 30-meter clear housings for diving/watersports
  • Optional disappearing selfie stick
  • 5.7K 30fps, 4K 50fps, 3K 100fps
  • Optional airplane-shaped “drifter” you can insert the device in and toss for dynamic action shots

Full details:



Wireless Streaming from the Oculus Go

We’ve recently been exploring the potential of 360 video production and how it can best be utilized for our future projects. To view the 360 video, we’ve been using an Oculus Go which is a wireless VR headset – no computer or phone required. Ideally, we could just hand over the Go to a viewer and they could immediately watch one of our videos. One challenge we found was the Go does not currently allow a way for those outside the headset to see what the viewer currently sees (though apparently this feature is in development). With a bit of googling and trial and error, we successfully mirrored the display on a computer.

A quick proof of concept can be viewed here:

I mostly worked from this guide on pixvana, but to quickly summarize:

  1. I downloaded the Android Debug Bridge (adb) and saved the folder in my user folder on my Mac Pro.
  2. I made sure my copy of VLC Media Player was up to date.
  3. I put the Oculus Go in Developer mode (which you’ll need to set up an organization account with Oculus to do).
  4. I made sure the Go and my computer were on the same WiFi network.
  5. With the Go plugged into my computer via USB, I obtained the Go’s IP address by typing into the terminal “adb shell ip route”.
  6. I entered the command “adb tcpip 5555”.
  7. I unplugged the Oculus Go.
  8. I entered the command “adb connect ‘IPADDRESS'” with IPADDRESS being the same as the one found in step 5.
  8. I entered the command “adb connect IPADDRESS”, with IPADDRESS being the same as the one found in step 5.

From there, VLC displayed the streaming video output from the Oculus Go. There was noticeable lag (3 seconds or more), but otherwise it worked pretty seamlessly. The only trouble is it’s tough to view the mirrored stream on the desktop if you still have the headset on!
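One fiddly part of the steps above is pulling the headset’s IP address out of the “adb shell ip route” output in step 5. As a small sketch (the sample route line below is hypothetical — in practice you would feed in the real output of “adb shell ip route”, assuming the headset’s Wi-Fi shows up as a wlan0 route), this bit of shell grabs the address that follows the “src” field:

```shell
# Hypothetical sample of an `ip route` line from the headset; the real one
# comes from `adb shell ip route` while the Go is plugged in over USB.
route_line="192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.37"

# The address after the "src" field is the headset's IP on the local network.
go_ip=$(echo "$route_line" | awk '{ for (i = 1; i < NF; i++) if ($i == "src") print $(i+1) }')

echo "$go_ip"
```

With the address in hand, you can enable TCP/IP mode and then run “adb connect” against "$go_ip" instead of retyping the address by hand.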

I also tested an app called Vysor. Vysor largely eliminates the terminal commands and is easier to use, but it plays an ad every 30 minutes. However, I did notice the lag is significantly less noticeable. A paid upgrade will also allow for higher-quality mirroring and a shareable link for people to view the stream remotely.

GoPro Fusion 360 Camera

One of our goals on the DDMC forum over the past couple of years has been to keep pace with innovations in 360 camera technology and their application at Duke. We’ve covered milestones from the still-awesome Insta360 Nano introduced in early 2017 all the way to the new 6-lens Insta360 Pro, which opens the door to 8K in the prosumer sphere. Since GoPro is such an important player in the world of portable action cameras, we wanted to note their foray into the 360 camera space with the new GoPro Fusion (~$700). While GoPro’s claim that the Fusion is “arguably the most versatile creative tool ever made” is, well, arguable, it is an interesting camera and worth considering if you’re planning on purchasing a 360 camera.

As you would expect from a company built around sports footage, one of the benefits of the GoPro is its durable design and the thinking that has gone into how the Fusion can function as an ergonomic accessory for someone engaged in physical activity. For example, it has a solid hand-held design and can be extended using a disappearing selfie stick that most reviewers seem to appreciate. In addition, it can be voice-activated, making it easy to operate the unit from a distance.


Pros:

  • Durable housing
  • Selfie stick attachment disappears when aligned with camera body
  • Built-in image stabilization (non-gimbal)
  • High res (5.2K)
  • “OverCapture” is a well-conceived framework for accessing and exporting segments of video in post production
  • Voice control


Cons:

  • One of the significant differences between the Fusion and other 360 cameras is that it requires two SD cards, with each of the two lenses writing its footage separately to its own card. This means that in order to obtain a full 360 video you’ll need to use the editing software, which adds time and difficulty to the process of creating a video. Exporting footage also takes a long time: between 20 and 45 minutes per minute of footage in GoPro Fusion Studio.
  • Large file sizes for exported videos: ~4.5 GB/minute at 5.2K in ProRes format (~1 GB/minute at 4K H.264)
  • The unit can get hot when operating
  • Mobile software hasn’t been getting great reviews
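To put those export numbers in perspective, here’s a quick back-of-the-envelope calculation (using the ~4.5 GB/minute ProRes rate and the 20–45-minutes-per-minute export time listed above) for a hypothetical ten minutes of footage:

```shell
minutes=10

# Storage: ~4.5 GB per minute at 5.2K ProRes, ~1 GB per minute at 4K H.264
awk -v m="$minutes" 'BEGIN { printf "5.2K ProRes: %.0f GB\n", m * 4.5 }'
awk -v m="$minutes" 'BEGIN { printf "4K H.264:   %.0f GB\n", m * 1.0 }'

# Export time: 20-45 minutes of processing per minute of footage
awk -v m="$minutes" 'BEGIN { printf "Export: %d-%d minutes\n", m * 20, m * 45 }'
```

In other words, ten minutes of 5.2K ProRes footage lands around 45 GB and could take roughly 3–7.5 hours to export, which is worth factoring into any production schedule.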

Here is a fairly thorough review that delves into many of the Fusion’s nuances and could be helpful if you’re considering making a purchase:

Insta360 Pro: 360 Video in 8K

When viewing 360 video with a VR headset, a high resolution can make the difference between an immersive experience and a blurry novelty. We’ve recently been working with the Insta360 Pro which is capable of filming in 8K and it’s produced some of the sharpest 360 video I’ve seen yet.

Like other 360 cameras we’ve tested recently (such as the Garmin VIRB), operating the Insta360 Pro is a relatively simple procedure of point and shoot (or, in the case of filming in 360, just shoot). After a minute-long boot-up sequence, you just navigate to the video icon in the camera’s menu screen and hit enter. An Android/iOS app will also allow you to remotely control the recording. In addition to recording in 8K, you can also configure the camera to record in 3D/stereoscopic 360, or at up to 120 frames per second, though not all at once. Prioritizing a high FPS or 3D means reducing the resolution to 6K or 4K.

Once turned on, the camera’s cooling fan will start running, which is quite noisy. This could be an issue for video where you’ll want to use the spatial audio. However, upgrading the camera’s firmware will allow you to turn the fan off while recording for fifteen minutes at a time. You have the option of immediately recording again, though I’d be wary of the camera overheating.

At lower resolutions, each of the videos from the six lenses will be stitched in real time in-camera. But for 8K, you’ll need to bring the videos into Insta360’s proprietary stitching software, which requires the camera’s serial number and a user registration to download and operate. Though it’s a bit of a hassle to get set up, I found the actual stitching process straightforward while still allowing for a lot of customization. It allows for batch exports, compression to lower resolutions, and offers a low-res preview of the final video.

For the project we’re working on, we wanted viewing the 360 videos to be both immersive and accessible. The solution we found was to load the videos onto an Oculus Go, a wireless VR headset. At $200, it seemed like the best compromise to get a full 360 experience. While the 4K and 3D videos have looked great, we haven’t been able to play back 8K video on the device. This remains one of the biggest challenges to working with 8K video at this point, let alone 8K 360 video: there are simply not many places to actually view it. For now, I’m already impressed with the quality of the Insta360 Pro’s 4K output, even if it’s not the full capability of the camera.

VR, 360° Video, and Cinematic Language

With the introduction of more accessible technology and the pace of innovation, the interest around VR and 360° video is as palpable as ever.  The practical and creative possibilities of these new technologies are undeniably exciting.  As a lifelong video producer and editor, I’m especially curious about how immersive experiences interact with our current understanding of cinematic language and visual storytelling.

There are plenty of resources about how to shoot and edit 360° video but far less about how to think about shooting and editing 360° video. As content creators, it’s important to be familiar with equipment and postproduction workflows. It’s equally important to think about how those tools can be used to best communicate the themes, stories or experiences we are trying to create.

An obvious place to look for inspiration is the community of artists and filmmakers experimenting in this emerging field. The people who are throwing it up against walls, breaking it apart, putting it back together and seeing what works and what doesn’t.

So far, the most useful guide I’ve found is the work and writing of Jessica Brillhart, former VR filmmaker at Google. Her series of essays, In The Blink of a Mind, is a thoughtful exploration of how to start thinking critically about these fresh mediums in the face of over a century of cinematic convention. You can read the essays, along with some others, on Medium and view her work on her website.

The research team eleVR produced a practical and equally helpful series of articles. They make a compelling case for the role of editing in VR and dig into ideas about how cinematography choices apply to 360° video.

Finally, as with traditional cinema, film festivals are a good place to see how people are approaching the technology in interesting ways and to look for inspiration. Sundance, Tribeca, South by Southwest, and many other festivals now include an “immersive” category. One piece I’ll highlight is Notes on Blindness, which premiered at Sundance in 2016. It’s beautiful and approaches the idea of immersion and storytelling in an unconventional way. Especially in the world of academia and pedagogy, it’s exciting to think about how these experiences could introduce students not only to new or unique physical spaces but to psychological spaces, while delivering compelling content at the same time.

As with any newly introduced medium or genre, the conventions and boundaries are still being discovered and continue to evolve. I, for one, can’t wait to see where it goes.

Please point out other good writing or cool projects in the comments. I’m always looking for new ideas to chew on.