This past week, Crestron visited Duke… virtually, to highlight some new products and provide an overview of upcoming changes to various Crestron platforms. The key takeaways: first, CH5 (Crestron + HTML5) is here, and we’ll soon be able to leverage the platform for more dynamic visuals on touch panels and mobile devices. The transition also throws off the shackles of Crestron’s dependency on Flash, a very good thing. Second, 4-Series processors are making their way to market… but before you throw all of your 3-Series processors in the trash, you may soon discover that the 4-Series is more of an evolution of the 3-Series than the major transformation that came with the shift from 2-Series to 3-Series. The first professional device released is the MC4, a follow-up to the residentially focused MC4-R. Finally, we chatted about what we’d like to see in the coming years (perhaps DMPS units with NVX built in, or an entry-level DMPS unit with dual matrixed DM/HDBT outputs?). It’s always fun to speculate, but one thing is clear: VGA is dead.
Adobe Premiere has been adding features that make collaborating on projects much easier. They explain the workings a bit in this video:
Apparently this workflow is being used in the editorial process on feature films, and was built following suggestions from the teams that worked on films like the latest Terminator.
It’ll be interesting to see if this can work smoothly.
This month’s Duke Video Working Group topic centered on visual misinformation and the work the Duke Reporters’ Lab is doing to address a media landscape where truth is harder and harder to discern. Joel Luther showcased how schemas like ClaimReview can help create a common language for fact-checking and identifying mistruths in the media. Particularly interesting was how, using machine learning, platforms are being developed that can provide real-time automated fact-checking. Since politicians repeat themselves so often, AI models can be trained to recognize a statement as it is being said and then display previously cited sources that prove, disprove, or clarify that claim for the viewer.
We also discussed the role of deepfakes and digital manipulation of video. Using some basic editing tools, a bad actor can distort an otherwise normal video of someone to make them appear drunk or unflattering. With more advanced tools involving machine learning, a bad actor can map a famous person’s face onto almost anyone. While deepfake technology has not yet reached the point of being totally seamless, many universities and institutions are pursuing not only how to create the “perfect deepfake” but how to identify deepfakes as well. In the meantime, this technology has only emboldened others to debate the veracity of any kind of video. If any video could be fake, how will we know when something is actually real?
Insta360 just launched their latest 360 camera, the ONE R. It’s actually a modular system rather than a single, self-contained camera. Only time will tell, but the ONE R could be an innovative approach to packing the burgeoning features we are seeing in the action and 360 camera spaces into a workable form factor. Certainly Insta360 seems to have doubled down on using 360 footage as coverage for standard 16:9 action shots.
The ONE R starts with a battery base and a touch screen that sits on top (it can be installed facing forward or backward depending on the use case), next to a slot that accepts one of the following modules:
- A 5.7K 360 camera
- A 4K action camera that records at 60fps for 4K and 200fps for 1080p
- A 5.3K wide-angle (14.4mm equivalent) mod with a 1-inch sensor, co-developed with camera company Leica (30fps at 5.3K, 60fps at 4K, and 120fps at 1080p)
Key features include:
- Insta360’s FlowState stabilization is a key part of all three modules.
- Waterproof to 16 feet, despite the module design
- An aerial mod that makes it possible to hide your drone from your footage
- External mic support
- Various remote control options, including Apple Watch, voice, and a GPS enabled smart remote
- Selfie stick
- Motion tracking to lock in on subjects
- Tons of software/post-production options like bullet time, time-lapse, slow motion, etc.
We’re not seeing a ton of immediate academic use cases for features like these, but we’ll certainly keep the ONE R in mind if the right project arises.
In October, our media production team picked up a new tool: the Zhiyun Weebill Lab camera stabilizer kit. In addition to some basic testing, I had the opportunity to put the stabilizer to work producing “Posters Actually,” a parody video promoting the 2020 Research Computing Symposium. In that time, I’ve found the Weebill Lab to be an easy-to-use, if difficult-to-master, tool for video production.
We chose the Weebill Lab partly because online user reviews indicated it was a good fit for our Sony aIII DSLR, and it has indeed been the perfect size for the camera. Once the camera is properly seated and balanced on the gimbal, it fits very snugly. This does mean affixing an audio interface, shotgun mic, or LED light to the camera is not feasible. Should you need to, there’s a 1/4-20 screw thread at the base of the gimbal where you can add a bracket for these accessories – though you would need to be careful not to block the motion of the gimbal.
Balancing a camera on a stabilizer is often a notoriously difficult process, but here it’s rather straightforward. Zhiyun provides step-by-step instructions for balancing each axis, and the whole process only took a minute or two once I knew what I was doing. When done properly, I rarely needed to worry about the calibration during a shoot.
Operating the Weebill Lab is a bit of an art. There are six different shooting modes, but two primary ones: L for “Locking” mode and PF for “Pan Follow” mode. PF will follow the panning motion of the stabilizer while keeping the other axes locked. L will not follow the motion of the stabilizer at all, instead keeping the camera fixed in its initial orientation. Additionally, buttons for “Following” mode and “PhoneGo” mode essentially allow you to do whip pans at varying speeds. Knowing when and how to use these various modes, in addition to using the directional joystick to move the camera, is crucial to achieving the full potential of this device. For the most part, I was happy to leave it set to Locking mode, using the trigger for Following mode when I needed to adjust the angle of the camera.
Understanding those operations, I came to realize that a camera stabilizer is not a substitute for an actual Steadicam and a trained camera operator. Filming an extended tracking shot, with a moving subject and turns around corners, will still take a lot of practice and coordination if you want your camera pointed in the right direction.
In addition to the stabilizer itself, we opted for the Creator Package, which came with a variety of accessories for the device. Notably, this included a follow-focus motor and belt, a support belt and monopod, and a phone holder attachment. In practice, these were all nice to have even though I didn’t really use them in the field. The camera’s autofocus proved good enough to keep up with what I was filming, though the focus motor would have allowed more precise control. The support belt and monopod are helpful for extended filming, particularly when you don’t have a place to set the camera down for a moment, but I found the rig a bit cumbersome for a short shoot in an enclosed space.
The phone holder, which screws snugly onto the gimbal’s base, is basically essential if you want to use a mobile device to control the gimbal. Not only does the app provide a live preview from the camera, but it also allows for some more sophisticated cinematography. Like keyframing in editing software, you can set starting and ending orientations and have the gimbal fill in the path between them. This works great with time-lapses, which you can also program using the app. As far as these kinds of apps go, I found the connection stable and easy to pair.
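Conceptually, “filling in the path” between two keyframed orientations is just interpolation over time. As a rough sketch (the angles, axes, and step count below are made-up illustration values, not the app’s actual API):

```python
# Sketch: linearly interpolating gimbal orientation between two keyframes,
# the way a programmed move or time-lapse fills in intermediate positions.
# All values here are hypothetical examples, not Zhiyun's real interface.

def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

start = {"pan": 0.0, "tilt": -10.0}   # starting orientation, degrees
end   = {"pan": 90.0, "tilt": 15.0}   # ending orientation, degrees
steps = 5                             # e.g. frames of a short time-lapse

for i in range(steps + 1):
    t = i / steps                     # normalized progress through the move
    pan = lerp(start["pan"], end["pan"], t)
    tilt = lerp(start["tilt"], end["tilt"], t)
    print(f"t={t:.2f}  pan={pan:6.1f}  tilt={tilt:6.1f}")
```

A real gimbal would ease in and out of the move rather than interpolate linearly, but the idea is the same: the app only stores the two endpoints and a duration.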
Overall, the Weebill Lab will be an essential tool in my video projects going forward. Even without choreographed camera moves and pans, I found it liberating not to have to worry about setting up a tripod and lumbering around with it. I was able to move through the shoot much more quickly and put the camera in places I normally wouldn’t be able to.
Mark had a particularly ambitious vision for this year’s video, so I thought it would be worthwhile to discuss our creative process and how we tackled various production challenges.
We began development in October, when Mark provided a ten-page script for the project, with multiple scenes and characters. More than just a simple song parody, he envisioned what amounted to a short film – one that matched, scene for scene, the Billy Mack plotline from the 2003 movie Love Actually. While we would eventually narrow the scope of the script, it was clear early on that I would need to ensure the production value matched Mark’s cinematic vision. Among other things, this included filming for a wider aspect ratio (2.55:1 versus the typical 16:9), using our DSLR for better depth of field, and obtaining a camera stabilizer so I could add some movement to the shots.
The first two things were relatively straightforward. I’d use our Sony aIII to film in 4K and crop the video to the desired aspect ratio. We didn’t have a stabilizer, so I did a little research and our team ended up purchasing the Zhiyun Weebill Lab package. In this review post, I go into more detail about our experience using it. Having never worked with a gimbal like this before, I enjoyed the opportunity to experiment with a new tool.
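The crop math is simple enough to sketch. Assuming UHD 3840×2160 source footage (the post only says 4K, so the exact frame size is an assumption), trimming 16:9 down to 2.55:1 works out like this:

```python
# Sketch: how much vertical cropping turns a 16:9 UHD frame into 2.55:1.
# The 3840x2160 frame size is an assumption; the post only says "4K".

def crop_height(width: int, target_ratio: float) -> int:
    """Frame height that yields the target aspect ratio at a given width."""
    return round(width / target_ratio)

src_w, src_h = 3840, 2160            # UHD source, 16:9
new_h = crop_height(src_w, 2.55)     # height after cropping to 2.55:1
trim = (src_h - new_h) // 2          # pixels removed from top and bottom

print(new_h, trim)                   # 1506 327
```

Because the crop only removes pixels from the top and bottom, the horizontal framing is untouched – which is exactly why shots framed for 16:9 can feel too tight once cropped, a point that comes up later in this post.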
Our first day of filming was at the WXDU radio station at the Rubenstein Arts Center. They were kind enough to let us use their podcast recording studio, which was the perfect set for the Tina and Tom scene. I quickly realized the first challenge of recording with the stabilizer would be capturing good audio. The size of the stabilizer simply didn’t allow me to affix a shotgun mic to my camera, and I didn’t have anyone else to work a boom mic for me. Ultimately, I decided to run two cameras – a stationary 4K Sony camcorder to capture audio and provide some basic shot coverage, while I roamed with the stabilized DSLR. Between running two cameras, directing the performers, and making sure we captured everything we needed, I was spinning a lot of plates. To combat this, we filmed the scene multiple times to ensure we had redundant takes on every line, which provided a much-needed safety net in editing.
We filmed every other shot on green screen at the Technology Engagement Center. Though simpler at first than shooting a three-person dialogue scene, it came with its own challenges. Principally, contrary to most green-screen filming we do, the intention here was to make the performers look like they were on a real set. This meant anticipating the angle and lighting conditions of the background we’d place them on. Though it wouldn’t be seamless, the goofy nature of the video would hopefully allow us some leeway in how realistic everything needed to look. Since I was moving the camera, the hardest part was making the background move with natural parallax behind Mark. This was easy enough when the camera stayed at the same distance but almost impossible to get right when I moved the camera toward him. For this reason, in the poster symposium scene I faded the composited elements behind Mark to a simple gradient, justified by the dreamy premise of that part of the video.
Perhaps the biggest challenge was not related to video at all. For the song parody, we recorded using a karaoke backing track we found on YouTube. However, the track had built-in backing vocals that were almost impossible to remove. Luckily, we had our own rock star on staff, Steve Toback, who was able to create a soundalike track from scratch using GarageBand. His version ended up being so good that when we uploaded the final video to YouTube, the track triggered an automated copyright claim.
Were I to do it all over again, there are a few things I would try to do differently. While running the stabilizer, I would be more conscious of the camera’s autofocus, as it would sometimes lock onto the microphones in front of a performer rather than the performer themselves. I also sometimes forgot I’d be cropping the video to a wider aspect ratio and framed shots for a 16:9 image, so I would remind myself to shoot a little wider than I normally might. Overall, though, I’m satisfied with how everything turned out. I’m grateful for all the support during the production, particularly from Mark and Steve, without whom none of this would have been possible.
The Yamaha YVC-200 is a compact USB/Bluetooth speakerphone that’s loaded with features and offers great audio quality.
Here’s an audio test using it over USB:
It has a rechargeable battery that charges over USB when connected to your computer (or a USB outlet), so it can also be used with your phone via Bluetooth. It offers mute, call pickup, and hang-up controls, and even comes with a handy carrying case.
Yesterday, the Duke Video Working Group met downtown for a tour of One City Center. This meeting ran a bit differently from ones I’ve attended before, where we might have a topic presented; instead, we explored the nice new space before gathering in a meeting room for a casual discussion.
When we sat down, we took some time to review videos that people are currently working on. We’ve looked at videos as a group before, but this was more centered around having time to discuss and critique them. It was interesting to hear different perspectives about why certain things were done, or how things could be improved.
The video working group meeting was about project management and organization, and it inspired a lengthy back-and-forth discussion about best practices!
Julie showed us how her department organizes their footage and gave us a little handout with her folders listed.
In the end, we talked about this for nearly the whole hour and spent the last ten minutes or so reviewing other people’s videos.
This past Thursday, Jack D’Ardenne provided the Duke Digital Media Community (DDMC) with an overview of Duke’s Internet Protocol Television (IPTV) offering, called CampusVision. The platform features approximately 135 DirecTV channels and several Duke internal channels from Duke Athletics and Duke Chapel. While IPTV is the primary purpose of CampusVision, it’s also capable of a range of signage and AV-related tasks. Specifically, the more expensive of the two CampusVision players can act as a rudimentary AV switcher, which could come in handy in locations where you may want to watch the next basketball game… yet don’t want to install an expensive or complicated AV system to manage the area. CampusVision is also capable of emergency notification, so in theory you could switch over your displays when an alert goes out. Visit the CampusVision page to request additional information on the platform.