Zhi Yun Weebil Lab Overview

In October, our media production team picked up a new tool: the Zhi Yun Weebil Lab camera stabilizer kit. In addition to some basic testing, I had the opportunity to put the stabilizer to work in producing “Posters, Actually,” a parody video to promote the 2020 Research Computing Symposium. In that time, I’ve found the Weebil Lab to be an easy-to-use, if difficult-to-master, tool for video production.

We partly chose the Weebil Lab because online user reviews indicated it was a good fit for our Sony aIII DSLR camera, and it has indeed been the perfect size for that camera. Once the camera is properly seated and balanced on the gimbal, it fits very snugly. This does mean affixing an audio interface, shotgun mic, or LED light to the camera is not feasible. Should you need to, there’s a 1/4-20 screw thread at the base of the gimbal where you can add a bracket for these things – though you would need to be careful not to partially block the motion of the gimbal.

Balancing the camera on the stabilizer, often a notoriously difficult process, is rather straightforward. Zhi Yun provides step-by-step instructions for balancing each axis, which only took a minute or two once I knew what I was doing. Once the camera was balanced properly, I rarely needed to worry about calibration during the shoot.

Operating the Weebil Lab is a bit of an art. There are six different shooting modes, but really two primary ones: L for “Locking” mode and PF for “Pan Follow” mode. PF will follow the motion of the stabilizer while keeping the other axes locked. L will not follow the motion of the stabilizer and instead keeps the camera fixed in its initial perspective. Additionally, buttons for “Following” mode and “PhoneGo” mode essentially allow you to do whip-pans at varying speeds. Knowing when and how to use these various modes, in addition to using a directional joystick to move the camera, is crucial to achieving the full potential of this device. For the most part, I was happy to just leave it set to Locking mode, using the trigger for Following mode when I needed to adjust the angle of the camera.

Once I understood those operations, I realized all the more that a camera stabilizer is not a substitute for an actual SteadiCam and a trained cameraperson. Filming an extended tracking shot, with a moving subject and turns around corners, will still take a lot of practice and coordination if you want your camera pointed in the right direction.

In addition to the stabilizer itself, we opted to get the Creator Package which came with a variety of accessories for the device. Notably, this included a Follow Focus motor and belt, a belt and monopod, and a phone holder attachment. In practice, I found these all nice to have even though I didn’t really use them in the field. I found the camera’s auto-focus good enough to keep up with what I was filming, though the focus motor would’ve allowed me more precise control. The belt and monopod are helpful for extended filming, particularly when you don’t have a place to set the camera down for a moment, but I found it a bit cumbersome to use for a short shoot in an enclosed space.

The phone holder, which screws snugly onto the gimbal’s base, is basically essential if you want to use a mobile device to control the gimbal. Not only does the app provide a live preview from the camera, but it also allows for some more sophisticated cinematography. Like keyframing in an editing software, you can set starting and ending orientations and have the gimbal fill in the path between. This works great with time-lapses, which you can also program using the app. As far as these kinds of apps go, I found the connection steady and easy to pair.
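
The app handles the motion itself (and presumably with smoother easing than this), but conceptually the gimbal is just interpolating between the two keyframed orientations over the duration of the move. A minimal sketch of that idea, using hypothetical angles and step counts:

```python
def interpolate_orientation(start, end, t):
    """Linearly interpolate pan/tilt/roll angles (degrees) at progress t in [0, 1]."""
    return {axis: start[axis] + (end[axis] - start[axis]) * t for axis in start}

# Hypothetical keyframes: start facing forward, end panned 90 degrees right and tilted up 15.
start = {"pan": 0.0, "tilt": 0.0, "roll": 0.0}
end = {"pan": 90.0, "tilt": 15.0, "roll": 0.0}

# Sample the path at 25% increments, the way the gimbal fills in the move over time.
for step in range(5):
    t = step / 4
    print(t, interpolate_orientation(start, end, t))
```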

Overall, the Weebil Lab will be an essential tool in my video projects going forward. Even without choreographed camera moves and pans, I found it liberating to not have to worry about setting up a tripod and lumbering around with it. I was able to move through the shoot much more quickly and put the camera in places I normally wouldn’t be able to.

Behind the Scenes of “Posters, Actually”

This year, I had the privilege of working with Mark Delong to bring his annual poster symposium deadline video to life. You can watch the whole video here: https://youtu.be/OGDSXK5crd8

Mark had a particularly ambitious vision for this year’s video, so I thought it would be worthwhile to discuss our creative process and how we tackled various production challenges.

We began development in October, when Mark provided a ten-page script for the project, with multiple scenes and characters. More than just a simple song parody, he envisioned what amounted to a short film – one that matched, scene for scene, the Billy Mack plotline from the 2003 movie Love Actually. While we would eventually narrow the scope of the script, it was clear early on that I would need to ensure the production value matched Mark’s cinematic vision. Among other things, this included filming for a wider aspect ratio (2.55:1 versus the typical 16:9), using our DSLR for better depth of field, and obtaining a camera stabilizer so I could add some movement to the shots.

The first two things were relatively straightforward. I’d use our Sony aIII to film in 4k and crop the video to the desired aspect ratio. We didn’t have a stabilizer, so I did a little research and our team ended up purchasing the Zhi Yun Weebil Lab package. In this review post, I go into more detail regarding our experience using it. Having never worked with a gimbal like this before, I enjoyed experimenting with the new tool.
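
For reference, the crop itself is simple arithmetic. A quick sketch, assuming a UHD 3840x2160 source (the camera’s exact recording resolution may differ):

```python
# Cropping a 16:9 UHD frame (3840x2160) down to a 2.55:1 widescreen aspect ratio.
source_width, source_height = 3840, 2160
target_aspect = 2.55

cropped_height = round(source_width / target_aspect)  # ~1506 px tall
trimmed = source_height - cropped_height              # ~654 px removed in total

print(f"Crop to {source_width}x{cropped_height}, trimming about {trimmed // 2} px from top and bottom")
```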

Our first day of filming was at the WXDU radio station at the Rubenstein Arts Center. They were kind enough to let us use their podcast recording studio, which was the perfect set for the Tina and Tom scene. I quickly realized the first challenge in recording with the stabilizer would be capturing good audio. The size of the stabilizer simply didn’t allow me to affix a shotgun mic to my camera, and I didn’t have anyone else to work a boom mic for me. Ultimately, I decided to run two cameras – a stationary 4k Sony camcorder that would capture audio and provide some basic shot coverage – while I roamed with the stabilized DSLR. Between running two cameras, directing the performers, and making sure we captured everything we needed, I was spinning a lot of plates. To combat this, we filmed the scene multiple times to ensure we had redundant takes on every line, which provided a much-needed safety net in editing.

We filmed every other shot on green screen at the Technology Engagement Center. Though at first simpler than shooting a three-person dialogue scene, it came with its own challenges. Principally, contrary to most green screen filming we do, the intention here was to make the performers look like they were on a real set. This meant anticipating the angle and lighting conditions of the background we’d place them on. Though it wouldn’t be seamless, the goofy nature of the video would hopefully allow us some leeway in terms of how realistic everything needed to look. Since I was moving the camera, the hardest part was making the background move in a natural parallax behind Mark. This was easy enough when the camera stayed at the same distance but almost impossible to get right when I moved the camera toward him. For this reason, in the poster symposium scene I faded the composited elements behind Mark to just a simple gradient, justified by the dreamy premise of this part of the video.

Perhaps the biggest challenge was not related to video at all. For the song parody, we recorded using a karaoke backing track we found on YouTube. However, the track had built-in backing vocals that were almost impossible to remove. Luckily, we had our own rock star on staff, Steve Toback, who was able to create a soundalike track from scratch using GarageBand. His version ended up being so good that when we uploaded the final video to YouTube, the track triggered an automated copyright claim.

Were I to do it all over again, there are a few things I would try to do differently. While running the stabilizer, I would try to be more conscious of the camera’s auto-focus, as it would sometimes focus on the microphones in front of the performer rather than on the performer themselves. I sometimes forgot I’d be cropping the video to a wider aspect ratio and framed the shot for a 16:9 image, so I would try to remind myself to shoot a little wider than I might normally. Overall though, I’m satisfied with how everything turned out. I’m grateful for all the support during the production, particularly to Mark and Steve, without whom none of this would have been possible.

Yamaha USB/Bluetooth Speakerphone

The Yamaha YVC-200 is a compact USB/Bluetooth speakerphone that is loaded with features and delivers great audio quality.

Here’s an audio test using it over USB:

It has a rechargeable battery that charges via USB when connected to your computer (or a USB outlet), so it can also be used with your phone via Bluetooth. It has buttons for mute, call pickup, and hang-up, and it even comes with a handy carrying case.

Video Working Group: One City Center

Yesterday, the Duke Video Working Group met downtown for a tour through One City Center. This meeting ran a bit differently from ones I’ve attended before, where we might have had a topic presented. We explored the nice new space before gathering in a meeting room for a casual discussion.

When we sat down, we took some time to review videos that people are currently working on. We’ve looked at videos as a group before, but this was more centered around having time to discuss and critique them. It was interesting to hear different perspectives about why certain things were done, or how things could be improved.

Video Working Group: January

The video working group meeting was about project management and organization. It inspired a lengthy back-and-forth discussion about best practices!

Julie showed us how her department organizes their footage and gave us a little handout with her folders listed.

In the end we talked about this for nearly the whole hour and spent the last ten minutes or so reviewing other people’s videos.

CampusVision DDMC Session

This past Thursday, Jack D’Ardenne provided the Duke Digital Media Community (DDMC) with an overview of Duke’s Internet Protocol Television (IPTV) offering, called CampusVision. The platform features approximately 135 DirecTV channels and several Duke internal channels from Duke Athletics and Duke Chapel. While IPTV is the primary purpose of CampusVision, it’s also capable of a range of signage and AV-related tasks. Specifically, the more expensive of the two CampusVision players can act as a rudimentary AV switcher, which could come in handy in locations where you may want to watch the next basketball game… yet don’t want to install an expensive or complicated AV system to manage the area. CampusVision is also capable of emergency notification, so in theory you could switch over your displays when an alert goes out. Visit the CampusVision page to request additional information on the platform.

Livestreaming from the Insta 360 Pro

While we’ve worked with the Insta 360 Pro fairly extensively in the past, we hadn’t yet tested its capability for livestreaming. In particular, I was curious about viewing the livestream from within our VR headset, the Oculus Go.

Though there are a few ways you could set up the livestream, I found the following to be the most reliable. You can follow along with this video capture of setting up the stream. After connecting the camera to my local WiFi, I updated the WiFi settings on the camera to be in Access Point (AP) mode. I then connected the camera via cable to an Ethernet port, which generated a new IP address for the camera. I plugged that IP address into the camera control app on my laptop, which was on the same local WiFi network, and got connected to the camera. I could theoretically have streamed just over WiFi without plugging into Ethernet, but I found the connection wasn’t strong enough when I later actually went to livestream. I could also have used the camera control app on an iPad or other mobile device, but using a laptop to set up the livestream was much easier since I could access both the camera application and the livestream host on the same device at the same time.
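
As a side note, before opening the control app it can save a little troubleshooting time to confirm the laptop can actually reach the camera at its new address. A minimal sketch of that kind of check (the IP address below is purely hypothetical; substitute whatever the camera reports):

```python
import platform
import subprocess

# Hypothetical address reported by the camera after plugging into Ethernet; substitute your own.
CAMERA_IP = "192.168.1.188"

# Use the OS ping command (-n on Windows, -c elsewhere) to confirm basic reachability.
count_flag = "-n" if platform.system() == "Windows" else "-c"
result = subprocess.run(["ping", count_flag, "3", CAMERA_IP], capture_output=True, text=True)

print("Camera reachable" if result.returncode == 0 else "Camera not reachable:\n" + result.stdout)
```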

With the camera control app on the laptop connected to the camera, I then went over to YouTube to set up the livestream host. YouTube makes this really easy – there’s an icon right on the homepage that allows you to “Go Live.” From here, I set up the stream. I named it and made sure the stream was unlisted so that only I knew where to access it. YouTube provided me a URL and key code to plug into my camera control app. Back in the camera control app, I made sure it was set to Custom RTMP server and plugged in the stream URL and key from YouTube. I ran the video feed at 4k, 30 FPS, and a 15 Mbps bit rate. I then hit the “Live” button to send the signal to YouTube. After a few moments, the feed came through, I toggled on the 360 video option, and I could then Go Live from YouTube to take the stream public. From real life to the live feed, I estimated about a 10-15 second lag.
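
The camera control app handled the encoding in this case, but the same custom RTMP setup works from anything that can push a stream. As a rough sketch only, here is how a pre-rendered 360 file could be sent to YouTube’s ingest from a laptop using ffmpeg, approximating the 30 fps / 15 Mbps settings above (the stream key is a placeholder, and the encoder flags are one reasonable choice rather than the camera’s exact settings):

```python
import subprocess

# Placeholder URL and key, exactly as provided on YouTube's "Go Live" page; never publish a real key.
STREAM_KEY = "xxxx-xxxx-xxxx-xxxx"
INGEST_URL = f"rtmp://a.rtmp.youtube.com/live2/{STREAM_KEY}"

# Push a pre-stitched 360 file to the custom RTMP endpoint at roughly 30 fps / 15 Mbps.
subprocess.run([
    "ffmpeg",
    "-re", "-i", "stitched_360.mp4",      # read the input at its native frame rate
    "-c:v", "libx264", "-b:v", "15M",     # H.264 video at ~15 Mbps
    "-maxrate", "15M", "-bufsize", "30M",
    "-r", "30", "-g", "60",               # 30 fps with a keyframe every two seconds
    "-c:a", "aac", "-b:a", "128k",        # AAC audio
    "-f", "flv", INGEST_URL,              # RTMP ingest expects an FLV container
], check=True)
```

The only pieces that change between hosts are that URL and key – exactly the two values the camera app asks for in its Custom RTMP settings.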

Accessing the stream from within the Oculus Go is, like most things in a VR headset, straightforward if not exactly seamless. Within the headset, I opened the YouTube app, searched for my channel, and accessed the stream from my videos there. I could alternatively have input the URL manually into the browser, but that process is a bit tedious when using the headset. Watching a 15-second-old version of myself from within a VR headset is probably the closest thing I’ve ever had to an out-of-body experience.

Using Thinglink to Create an Interactive 360 Video Experience

As long as I’ve been working with 360 video, one element has always been out of reach: interactivity. Particularly when viewed through a headset, the immersive nature of 360 video lends itself well to exploration and curiosity. The challenge has always been how to add that interactivity. Neither working with an external vendor nor developing an in-house solution seemed worthwhile for our needs. However, the tool Thinglink now offers an intuitive way not only to augment media with interactive annotations, but also to link those various media to each other.

Thinglink, as described previously, is a web platform that allows the user to add interactive pop-up graphics onto photos and videos, in 2D or in 360. Duke is piloting the technology, so I took the opportunity to test both the creation and publishing of 360 video through Thinglink.

The creation part couldn’t have been simpler (and in its pursuit of simplicity also feels a bit light on features). I was able to upload a custom 360 video without trouble, and immediately start adding annotated tags. You can see my test video here. There are four primary forms of tags:

  • Labels add a simple text box that is best used for… labeling things. This would be useful in a language-learning context where you might want to add, say, the Spanish word for “tree” near a tree visible in the video.
  • Text/Media tags are fancier versions of labels which include room for a title, description, photo, or external link. This is for something where you might want to add a little more context about what you are tagging.
  • Embeds allow you to insert embed codes. This would typically be a video (from either YouTube or Duke’s own Warpwire) but could include surveys or any other platform that provides you an HTML embed code to add to your website.
  • Tour Links allow you to connect individual tagged videos/photos together. If I wanted to provide a tour of the first floor of the Technology Engagement Center, for example, I could start with a video from the main lobby. For the various rooms and hallways visible from the lobby, I could then add an icon that, when clicked, moves the viewer to a new video from the perspective of the icon they clicked.

Adding all of these is as simple as clicking within the video, selecting what kind of tag you want, and then filling in the blanks. My only real gripe here is a lack of customization. You can’t change the size of the icons, though you can design and upload your own if you like. The overall design options are also extremely limited. You can’t change text fonts, sizes, etc. There is a global color scheme, which just comes down to a background color, text color, button background color, and button text color. In the “Advanced” settings, you can reset the initial POV direction that the 360 video starts in, and you can also toggle “Architectural mode,” which eliminates the fish-eye POV at the expense of less overall visibility.

All in all, it’s incredibly easy to set up and use. Sharing is also pretty straightforward, provided you don’t intend to view the video in an actual VR headset. You can generate a shareable link that is public, unlisted, or only visible to your organization. You can even generate an embed code to place the Thinglink viewer within a website. What I was most curious about, however, was whether I could properly view a Thinglink 360 video with our Oculus Go headset. In this regard, there’s a lot of room for improvement.

In principle, this use case is perfectly functional. I was able to access one of Thinglink’s demo 360 videos from within the Oculus Go headset and view and interact with the video with no trouble. The headset recognized the Thinglink video was a 360 video and automatically switched to that mode. A reticule in the center of my field of vision worked as a mouse, in that if I hovered directly over a tag icon, it would “click” and activate the icon, negating the need for an external controller. The only issue was that the window activated when I “clicked” on an icon would sometimes be behind me, and I had no idea anything had happened.

When I tried to view and access my own video, however, I had a lot of trouble. From a simple logistics standpoint, the shareable Thinglink URLs are fairly long and tedious to input when in a VR headset (I made mine into a TinyURL, which helped slightly). When I was finally able to access the video, it worked fine in 2D mode, but when I clicked on the goggles icon to put the video into VR headset mode I was met with a simple black screen. The same went for trying to view the video in this mode on my phone or on desktop. I found that after several minutes of waiting, an image from the video would eventually come up. Even when I was able to see something other than darkness, I discovered that the embedded videos were not functional at all in VR mode.

While the functionality is potentially there to create an interactive 360 video tour in Thinglink and view it within a VR Headset, it’s simply not practical at this point. It’s a niche use case, sure, but one that seems within grasp. If the developers can work out the kinks, this platform would really be a gamechanger. For now, interactive 360 video will have to stay on the flat screen for me.

New AI-Based Transcription Service Otter.ai

Some of Duke’s Communications staff have been experimenting lately with Otter.ai, a new transcription service that offers 600 minutes per month, and seem to be enjoying it. Otter, which was started by an ex-Google engineer in early 2019, is an interesting move forward in the captioning and ASR space. Its focus seems to be less on captioning, like Rev.com (a widely used service at Duke), and more on live recording of meetings via your browser and making searchable transcripts available in a collaborative, teams-based environment. I had some problems using Otter to produce a caption file, but it does seem like Otter could be useful for simple transcription workflows, and the idea of using something like Otter to record all your meetings poses some interesting possibilities and questions.

Otter.ai

Below is a summary of what I found in my initial testing:

  • High accuracy, comparable to other vendors we’ve tested recently utilizing the newest ASR engines
  • Interesting collaboration feature set
  • Can record your meeting right from within the browser
  • Nice free allotment—600 free mins/month (6,000/month for the pro plan; education pricing $5.00/month)
  • Includes speaker identification
  • If your goal is captions and not just transcriptions, Otter is more limited – it only seems to support exporting captions in .srt format (not .vtt, which some of our users, including the Duke Libraries, prefer)
  • The .srt I exported in my test file was grouped by paragraph, not by line, so it wouldn’t be possible to use it with one of our video publishing systems like Warpwire or Panopto without extensive editing to chunk the file up by line (see the sketch after this list)
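
That chunking could be scripted rather than done by hand. Below is a minimal sketch, assuming standard .srt formatting, that splits each paragraph-sized cue into shorter cues and apportions the original time range by character count (real speech timing would be less even, so treat it as a starting point rather than a drop-in tool):

```python
import re
from datetime import timedelta

MAX_CHARS = 80  # rough ceiling for a single caption cue
TIME_RE = re.compile(r"(\d+):(\d+):(\d+),(\d+)")

def parse_time(stamp):
    """Turn an .srt timestamp like 00:01:02,500 into a timedelta."""
    h, m, s, ms = map(int, TIME_RE.match(stamp).groups())
    return timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)

def format_time(td):
    """Turn a timedelta back into an .srt timestamp."""
    total_ms = int(td.total_seconds() * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def split_text(text):
    """Greedily break a paragraph of caption text into shorter chunks on word boundaries."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > MAX_CHARS and current:
            chunks.append(current)
            current = word
        else:
            current = candidate
    return chunks + ([current] if current else [])

def rechunk_srt(srt_text):
    """Split paragraph-sized .srt cues into shorter cues, dividing each time range by character count."""
    output, index = [], 1
    for block in srt_text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        start_str, end_str = (part.strip() for part in lines[1].split("-->"))
        start, end = parse_time(start_str), parse_time(end_str)
        chunks = split_text(" ".join(lines[2:]))
        total_chars = sum(len(chunk) for chunk in chunks) or 1
        cursor = start
        for chunk in chunks:
            cue_end = cursor + (end - start) * (len(chunk) / total_chars)
            output.append(f"{index}\n{format_time(cursor)} --> {format_time(cue_end)}\n{chunk}\n")
            cursor, index = cue_end, index + 1
    return "\n".join(output)
```

Converting the result to .vtt from there is mostly a matter of adding a WEBVTT header and swapping the comma in each timestamp for a period.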

Kontek Visits the TEC

Last month, Kontek visited the Technology Engagement Center (TEC) on Duke’s campus to provide an updated overview of their services, introduce new faces at the company, and detail the company’s updated organizational structure. It was also a bittersweet opportunity to say farewell to Billy Morris, a longtime Senior Account Manager for Kontek, on his journey to retirement. Wes Newman kicked off the conversation, discussing how the organization has changed and grown over the years and how he has empowered his team to make Kontek the best it can be. Marques Manning, Director of UX Design & Technology, then spoke about the specific changes that have taken place, including the introduction of more robust commissioning standards, improved internal and external communications, and raised standards for the customer experience.