In the era of Covid-19, people are scrambling to find great document camera alternatives for home and in-office use. There is a wide range of devices available, but they generally come in two flavors: cheap but problematic, or expensive, full-featured, and bulky. Today, I’ll give a quick review of the HUE HD Pro, a ~$99 USB document camera that bridges that gap.

First, the HUE HD Pro connects to a computer using a standard USB-A connector, so if you’re a modern Mac user, you’ll need a USB-A to USB-C dongle to get this working; plan accordingly. Once you plug the HUE in, you’re ready to go, no batteries needed. I was able to fire up QuickTime, Zoom, etc. and see the HUE without installing any video drivers, as the device leverages the built-in UVC driver available on all Windows and macOS devices. The image was initially very soft, but after a quick turn of the focus ring (did I mention that this is a manual focus camera?), it was tack sharp. Yes, the focus is completely manual. While that may sound like a negative, I actually prefer setting the focus myself, as the camera doesn’t constantly “hunt” (drifting out of focus and back in, as some less expensive webcams do) to obtain a good quality image. For a document camera, manual focus works great: you only need to set it from time to time, and it doesn’t re-focus when it sees your hand.

The HUE HD Pro also has a mic… which is, well, OK at best. It’s passable, but if you are going to be doing any serious, long-term, document-camera-intensive teaching, you’ll probably want to use a different mic or upgrade to a USB lav mic (ahem!). You don’t buy this device for the mic, but considering you can also use the HUE HD Pro as a webcam (yep, just point it up from your document and manually refocus… and presto! You have a webcam!), the mic makes sense. Generally speaking, I wouldn’t buy this device if my primary goal was to use it as a webcam. There are plenty of higher-quality webcams on the market with autofocus, a better image, and a lower cost. You buy the HUE HD Pro for the long flexible neck. But it would work as a webcam in a pinch, or if you are attempting to be ultra-mobile.

Image Quality: While I found the image very clear when sharing a hand-drawn diagram, I did notice that the camera picked up a good bit of flicker (flicker happens when the rapid on/off cycling of LED lights doesn’t match the frame rate of a camera). Overall, it wasn’t a major problem, but it was noticeable, and there really isn’t a way to eliminate the flicker unless you are willing to swap out the lightbulbs in your environment (i.e., not going to happen).
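For the curious, the flicker is an aliasing effect: LEDs on AC mains pulse at twice the mains frequency, and when that pulse rate isn’t an even multiple of the camera’s frame rate, each frame samples a different point in the pulse cycle. A rough back-of-the-envelope sketch in Python (a simplification that ignores rolling shutter and other factors):

```python
def flicker_beat_hz(mains_hz: float, frame_rate: float) -> float:
    """Approximate visible beat frequency between LED flicker and the
    camera frame rate. LEDs on AC mains pulse at 2x the mains frequency;
    the visible 'strobe' is the aliased difference between that pulse
    rate and the nearest multiple of the frame rate."""
    flicker = 2 * mains_hz                       # LED pulse rate
    remainder = flicker % frame_rate             # offset from a clean multiple
    return min(remainder, frame_rate - remainder)

# US mains (60 Hz) with a 30 fps camera: 120 Hz pulses line up cleanly
print(flicker_beat_hz(60, 30))   # -> 0 (no visible beat)
# 50 Hz mains with a 30 fps camera: 100 Hz pulses do not line up
print(flicker_beat_hz(50, 30))   # -> 10 (a visible ~10 Hz strobe)
```

This is why flicker comes and goes depending on the room’s lighting and the frame rate you record at, and why swapping bulbs (or, on some cameras, changing the shutter speed) is the only real fix.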

I’ve demonstrated this device about a dozen times, and the feedback is usually, “That’s exactly what I need, perfect… thanks.” It’s a simple device that performs a simple task, but as any education professional will tell you, sometimes it’s the simple solution that solves the core underlying issue. Some AV professionals say that document cameras are going the way of VGA… but I still see a wide range of applications where a simple camera, sharing a hand-drawn diagram, is the best and cheapest option to convey a concept. I’ll take a document camera over an advanced touch screen most days.


Pros

  • No additional drivers required: The HUE HD Pro uses the native UVC driver included with Windows and macOS
  • Cost: At ~$99, this is a well-priced USB document camera considering how flexible the device can be
  • Image quality: Good overall, and hand-written notes look great when shared in a Zoom session
  • Oh, and it has a small LED light, which is nice (note con below regarding the light)
  • It just works… and is simple enough for anyone to understand. Even *I* was able to use it!


Cons

  • I wish the articulating arm were longer, like 6” longer. Sometimes it is challenging to get an oversized sheet of paper in the camera frame, requiring that I place the HUE HD Pro on a book to “zoom out,” as it really doesn’t have a zoom
  • Regularly, I found myself twisting the HUE HD Pro’s neck in odd ways to get my documents in the correct orientation. It would have been nice if there was an option to flip the image sensor on the device natively with a tactile button push (this may be possible using their included software, but I generally hesitate to install such software as it’s usually not supported all that well)
  • The light is… well, OK at best. The LEDs are not very bright, and their color temperature isn’t ideal for every situation, though admittedly I’m a color-temperature-matching perfectionist
  • Flicker: if you have LED lights in your teaching space, you may see a noticeable strobing flicker. This can be problematic for users who are very sensitive to flicker. The guinea pigs, errr… remote Zoom users, didn’t mention it when I was demoing the hardware… but it’s there. I see it, and a few other AV folks would see it… but it’s by no means a deal-breaker

Help Clean Up Background Noise

As we’re all working with instructors who are recording at home, we can count on some of them not recording in the best surroundings for audio. We’ve tried several noise-reduction plugins, but I recently got AudioDenoise 2 from CrumplePop and like its super-simple interface. It produces a less “robotic” sound than other plug-ins (like the one built into Final Cut), and I feel it’s worth the $99 price tag. Here’s an example:

Motion Array

I came across a pack of Adobe After Effects assets from Motion Array. They had a free trial option, so I downloaded one of their title packs to see if it would work for a project I was working on.

The pack I downloaded as my trial pick didn’t end up working for my needs, since the text editing options weren’t really to my liking. That said, they have a large library of options, and what they offer can be used in lots of projects for a simple but pretty neat effect. For example, I just put a video from webdam in the background and layered the title on top for a nice clean look.


A paid plan is about $30/month, but it’s worth a look if you’re new to After Effects or interested in a bit of visual motivation. There are free products all over the site as well; you just have to look for them.

Crestron Visits Duke, Virtually

This past week, Crestron visited Duke… virtually, to highlight some new products and to provide an overview of upcoming changes to various Crestron platforms. The key takeaways: first, CH5 (Crestron + HTML5) is here, and we’ll soon be able to leverage the platform for more dynamic visuals on touch panels and mobile devices. The transition also throws off the shackles of Crestron’s dependency on Flash, a very good thing. Second, 4-Series processors are making their way to market… but before you throw all of your 3-Series processors in the trash, you may soon discover that the 4-Series is more of an evolution of the 3-Series than the major transformation that came with the 2-Series to 3-Series shift. The first professional device released is the MC4, a follow-up to the residential-focused MC4-R. Finally, we chatted about what we’d like to see in the coming years (perhaps DMPS units with NVX built in, or an entry-level DMPS unit with dual matrixed DM/HDBT outputs?). It’s always fun to speculate, but one thing is clear: VGA is dead.

Adobe Premiere: Productions

Adobe Premiere has been adding features that make collaborating on projects much easier. They explain the workings a bit in this video:

Apparently this is being used in the editorial process on feature films, and it was built following suggestions from the teams that worked on films like the latest Terminator.

It’ll be interesting to see if this can work smoothly.

Video Working Group: Visual Misinformation

This month’s Duke Video Working Group topic centered around visual misinformation and the work the Duke Reporter’s Lab is doing to address a media landscape where truth is harder and harder to discern. Joel Luther showcased how schema like ClaimReview can help create a common language for fact-checking and identifying mistruths in the media. Particularly interesting was how, using machine learning, platforms are being developed that can provide real-time automated fact-checking. Since politicians repeat themselves so often, we can build AI models that recognize a statement as it is being said and then display previously cited sources that prove, disprove, or clarify that claim for the viewer.
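For reference, here’s a hypothetical sketch of the markup fact-checkers publish so platforms can match claims to checks. The field names come from the schema.org ClaimReview type; the claim, URL, and organization below are invented for illustration:

```python
import json

# Hypothetical ClaimReview object (schema.org vocabulary).
# Everything below except the field names is made up.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/12345",   # where the check lives
    "claimReviewed": "Example claim as stated by the speaker.",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Politician"},
        "datePublished": "2020-02-01",
    },
    "author": {"@type": "Organization", "name": "Example Fact-Check Lab"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly False",   # the human-readable verdict
    },
}

# Published as JSON-LD in a page's <script> tag for crawlers to index
print(json.dumps(claim_review, indent=2))
```

Because the structure is machine-readable, an automated system that recognizes a repeated statement can look up existing ClaimReview records and surface the verdict in real time.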

We also discussed the role of deepfakes and the digital manipulation of video. Using some basic editing tools, a bad actor can distort an otherwise normal video of someone to make them appear drunk or unflattering. With more advanced tools involving machine learning, a bad actor can map a famous person’s face onto almost anyone. While this deepfake technology has not yet reached the point of being totally seamless, many universities and institutions are pursuing not only how to create the “perfect deepfake” but also how to identify one. In the meantime, this technology has only emboldened others to debate the veracity of any kind of video. If any video could be fake, how will we know when something is actually real?

New Insta360 ONE R

Insta360 just launched their latest 360 camera, the ONE R. It’s actually a modular system rather than a single, self-contained camera. Only time will tell, but the ONE R could be an innovative approach to solving the problem of how to pack the burgeoning features we’re seeing in the action and 360 camera spaces into a workable form factor. Certainly, Insta360 seems to have doubled down on using 360 capture as coverage for standard 16:9 action shots.

The ONE R starts with a battery base and a touch screen that sits on top (it can be installed forwards or backwards depending on the use case) next to an empty space that could include one of the following:

  • A 5.7K 360 camera
  • A 4K action camera that records at 60fps for 4K and 200fps for 1080p
  • A 5.3K wide-angle (14.4mm equivalent) mod with a 1-inch sensor, co-developed with camera company Leica (30fps at 5.3K, 60fps at 4K, and 120fps at 1080p)



Key features include:

  • Insta360’s FlowState stabilization is a key part of all three modules.
  • Waterproof to 16 feet, despite the modular design
  • Aerial mod that makes it possible to hide your drone from footage
  • External mic support
  • Various remote control options, including Apple Watch, voice, and a GPS enabled smart remote
  • Selfie stick
  • Motion tracking to lock in on subjects
  • Tons of software/ post production options like bullet time, time lapse, slo mo, etc.

We’re not seeing a ton of immediate academic use cases for features such as the above, but will certainly keep the ONE R in mind if the right project arises.


Zhi Yun Weebil Lab Overview

In October, our media production team picked up a new tool: the Zhi Yun Weebil Lab camera stabilizer kit. In addition to some basic testing, I had the opportunity to put the stabilizer to work in producing “Posters Actually,” a parody video to promote the 2020 Research Computing Symposium. In that time, I’ve found the Weebil Lab to be an easy-to-use, if difficult-to-master, tool for video production.

We partly chose the Weebil Lab because online user reviews indicated it was a good fit for our Sony aIII camera, and it has indeed been the perfect size for the device. Once the camera is properly seated and balanced on the gimbal, it fits very snugly. This does mean affixing an audio interface, shotgun mic, or LED light to the camera is not feasible. Should you need to, there’s a 1/4-20 screw thread at the base of the gimbal where you can add a bracket for these things, though you would need to be careful not to partially block the motion of the gimbal.

Balancing a camera on a stabilizer is often a notoriously difficult process, but here it’s rather straightforward. Zhi Yun provides step-by-step instructions for balancing each axis, which only took a minute or two once I knew what I was doing. When done properly, I rarely needed to worry about the calibration during the shoot.

Operating the Weebil Lab is a bit of an art. There are six different shooting modes, but really two primary ones: L for “Locking” mode and PF for “Pan Follow” mode. PF will follow the motion of the stabilizer while keeping the other axes locked. L will not follow the motion of the stabilizer and will instead keep the camera fixed in its initial perspective. Additionally, buttons for “Following” mode and “PhoneGo” mode essentially allow you to do whip-pans at varying speeds. Knowing when and how to use these various modes, in addition to using the directional joystick to move the camera, is crucial to achieving the full potential of this device. For the most part, I was happy to just leave it set to Locking mode and use the trigger for Following mode when I needed to adjust the angle of the camera.

After understanding those operations, I better appreciated that a camera stabilizer is not a substitute for an actual SteadiCam and a trained camera operator. Filming an extended tracking shot, with a moving subject and turns around corners, will still take a lot of practice and coordination if you want your camera pointed in the right direction.

In addition to the stabilizer itself, we opted to get the Creator Package which came with a variety of accessories for the device. Notably, this included a Follow Focus motor and belt, a belt and monopod, and a phone holder attachment. In practice, I found these all nice to have even though I didn’t really use them in the field. I found the camera’s auto-focus good enough to keep up with what I was filming, though the focus motor would’ve allowed me more precise control. The belt and monopod are helpful for extended filming, particularly when you don’t have a place to set the camera down for a moment, but I found it a bit cumbersome to use for a short shoot in an enclosed space.

The phone holder, which screws snugly onto the gimbal’s base, is basically essential if you want to use a mobile device to control the gimbal. Not only does the app provide a live preview from the camera, but it also allows for some more sophisticated cinematography. Like keyframing in an editing software, you can set starting and ending orientations and have the gimbal fill in the path between. This works great with time-lapses, which you can also program using the app. As far as these kinds of apps go, I found the connection steady and easy to pair.
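Under the hood, that kind of keyframed move is essentially interpolation between two orientations. A minimal sketch of the idea (the app’s actual implementation isn’t public; the function and angles below are purely illustrative):

```python
def lerp_orientation(start, end, t):
    """Linearly interpolate (pan, tilt, roll) angles in degrees between
    a starting and ending keyframe; t runs from 0.0 to 1.0."""
    return tuple(a + (b - a) * t for a, b in zip(start, end))

# Illustrative time-lapse move: pan 90 degrees while tilting down 10,
# sampled at five evenly spaced steps along the path.
start = (0.0, 0.0, 0.0)     # pan, tilt, roll at the first keyframe
end = (90.0, -10.0, 0.0)    # pan, tilt, roll at the last keyframe
frames = [lerp_orientation(start, end, i / 4) for i in range(5)]
# frames[0] is the start pose, frames[-1] the end pose,
# frames[2] the midpoint: (45.0, -5.0, 0.0)
```

The gimbal just walks its motors along that path on a timer, which is why programmed moves look so much smoother than anything you can do by hand.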

Overall, the Weebil Lab will be an essential tool in my video projects going forward. Even without choreographed camera moves and pans, I found it liberating not to have to worry about setting up a tripod and lumbering around with it. I was able to move through the shoot much more quickly and put the camera in places I normally wouldn’t be able to.

Behind the Scenes of “Posters, Actually”

This year, I had the privilege of working with Mark Delong to bring his annual poster symposium deadline video to life. You can watch the whole video here:

Mark had a particularly ambitious vision for this year’s video, so I thought it would be worthwhile to discuss our creative process and how we tackled various production challenges.

We began development in October, when Mark provided a ten-page script for the project, with multiple scenes and characters. More than just a simple song parody, he envisioned what amounted to a short film – one that matched, scene for scene, the Billy Mack plotline from the 2003 movie Love Actually. While we would eventually narrow the scope of the script, it was clear early on that I would need to ensure the production value matched Mark’s cinematic vision. Among other things, this included filming for a wider aspect ratio (2.55:1 versus the typical 16:9), using our DSLR for a shallower depth of field, and obtaining a camera stabilizer so I could add some movement to the shots.
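The aspect-ratio crop itself is simple arithmetic. A quick sketch, assuming a UHD 3840×2160 source (what most consumer “4K” cameras record):

```python
def crop_height(width: int, aspect: float) -> int:
    """Height in pixels needed to crop a frame of the given width
    to the target aspect ratio (width / height)."""
    return round(width / aspect)

# UHD 4K is 3840x2160 (16:9). Cropping to a 2.55:1 "cinemascope"
# frame keeps the full width but trims the height to ~1506 px,
# which is why you have to frame shots taller than you think.
print(crop_height(3840, 2.55))    # -> 1506
print(crop_height(3840, 16 / 9))  # -> 2160 (sanity check: no crop)
```

In practice this just means letterboxing in the editor, but knowing the numbers up front helps when framing on set.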

The first two things were relatively straightforward. I’d use our Sony aIII to film in 4k and crop the video to the desired aspect ratio. We didn’t have a stabilizer, so I did a little research and our team ended up purchasing the Zhi Yun Weebil Lab package. In this review post, I go into more detail regarding our experience using it. Having not had the opportunity to work with a gimbal like this before, I enjoyed the opportunity to experiment with the new tool.

Our first day of filming was at the WXDU radio station at the Rubenstein Arts Center. They were kind enough to let us use their podcast recording studio, which was the perfect set for the Tina and Tom scene. I quickly realized that the first challenge in recording with the stabilizer would be capturing good audio. The size of the stabilizer simply didn’t allow me to affix a shotgun mic to my camera, and I didn’t have anyone else to work a boom mic for me. Ultimately, I decided to run two cameras – a stationary 4K Sony camcorder that would capture audio and provide some basic shot coverage, while I roamed with the stabilized DSLR. Between running two cameras, directing the performers, and making sure we captured everything we needed, I was spinning a lot of plates. To combat this, we filmed the scene multiple times to ensure we had redundant takes on every line, which provided a much-needed safety net in editing.

We filmed every other shot on green screen at the Technology Engagement Center. Though at first simpler than shooting a three-person dialogue scene, it came with its own challenges. Principally, contrary to most green screen filming we do, the intention here was to make the performers look like they were on a real set. This meant anticipating the angle and lighting conditions of the background we’d place them on. Though it wouldn’t be seamless, the goofy nature of the video would hopefully allow us some leeway in terms of how realistic everything needed to look. Since I was moving the camera, the hardest part was making the background move in a natural parallax behind Mark. This was easy enough when the camera stayed at the same distance, but almost impossible to get right when I moved the camera toward him. For this reason, in the poster symposium scene I faded the composited elements behind Mark to a simple gradient, justified by the dreamy premise of this part of the video.

Perhaps the biggest challenge was not related to video at all. For the song parody, we recorded using a karaoke backing track we found on YouTube. However, the track had built-in backing vocals that were almost impossible to remove. Luckily, we had our own rock star on staff, Steve Toback, who was able to create a soundalike track from scratch using GarageBand. His version ended up being so good that when we uploaded the final video to YouTube, the track triggered an automated copyright claim.

Were I to do it all over again, there are a few things I would try to do differently. While running the stabilizer, I would try to be more conscious of the camera’s auto-focus, as it would sometimes focus on the microphones in front of the performer rather than the performer themselves. I sometimes forgot I’d be cropping the video to a wider aspect ratio and framed the shot for a 16:9 image, so I would remind myself to shoot a little wider than I might normally. Overall, though, I’m satisfied with how everything turned out. I’m grateful for all the support during the production, particularly to Mark and Steve, without whom none of this would have been possible.