Quick and Easy Color Correction Using Video Scopes

The color correction tools built into most editing software are obviously useful for fixing glaring problems with variables like exposure and white balance, but spending a few minutes applying simple correction can make even decent-looking video pop. Video scopes can be intimidating at first, but, once understood, they make color correction a breeze and eliminate second-guessing. There are plenty of introductory primers on what video scopes are and how they work. I like this one, for example.

Checking video scopes is a regular part of my post production process, and I almost always end up making at least minor tweaks. Everyone has their own approach to color correction, but I’ll share my own basic, default workflow here as an example.

I begin by adjusting luminance using the waveform monitor. I first set the white (top line) and black (bottom line) levels. I can then adjust the midtones as needed to get an even spread of points throughout the scope.

Next, I adjust the saturation level if needed to add some vibrance to the image, and, finally, I check the color using the vectorscope. To make this step easier, I zoom in on parts of the image to isolate useful colors for correction (whites, blacks, and skin tones). I can then adjust the color to sit where it belongs on the scope (center for shadows and highlights and the skin tone line for the skin tones).
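
Under the hood, the math the scopes visualize is simple. Here's a rough sketch in Python with made-up pixel values and Rec.709 coefficients – an illustration of what the scopes measure, not any particular editor's implementation:

```python
# A handful of made-up 8-bit pixel values (0-255) standing in for a frame.
pixels = [(235, 235, 235),  # highlight
          (16, 16, 16),     # shadow
          (180, 90, 90),    # reddish midtone
          (90, 180, 90)]    # greenish midtone

def luma(r, g, b):
    """Rec.709 luma: what the waveform monitor plots for each pixel."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def chroma(r, g, b):
    """Rec.709 (Cb, Cr): where a pixel lands on the vectorscope.
    Neutral pixels (r == g == b) sit at the center, which is why whites,
    blacks, and skin tones are such useful correction references."""
    y = luma(r, g, b)
    return (b - y) / 1.8556, (r - y) / 1.5748

levels = [luma(*p) for p in pixels]
white_level, black_level = max(levels), min(levels)
```

Adjusting luminance moves those luma values up or down the waveform, and a color cast shows up as nominally neutral pixels drifting away from the vectorscope's center.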

And that’s it! The process only takes a minute or two and can make a good image look even better.

Video Working Group: Production Advice from Blueline

Producers from the digital agency Blueline presented to Duke’s Video Working Group this month about their video production process and experience working with universities. There were a lot of highlights, but I’ve tried to consolidate their comments and takeaways here.

Video is hard, they stated. There are a lot of moving parts, expectations, and things that can go wrong. “We don’t make perfect videos,” Tucker, a video director at the agency, insisted. However, having good gear, plenty of time, and the right team can make video a little easier to produce. While many producers are familiar with the “one-person-band” production strategy, Blueline relies on the varied skillsets of its team of directors, editors, colorists, and other creators to achieve its vision for each project.

All of those creators, and the decisions they make, are in service to that unique vision. Blueline tries to match technical choices to the idea for each story. Gimbals, for instance, can make a shot look smooth and professional. A shaky camera shot can provide an energy of improvisation or excitement. As a video producer, you need to start with your story and then see what creative choices (along with any practical circumstances) best support that story.

Finding the right story to tell is often one of the biggest challenges. When starting a new project, their team does an extensive amount of pre-production work. This usually begins with clarifying expectations with the client and determining what inspirations or references they might have for the final product. Almost always, your client is the expert on both the story and the audience. After learning as much as you can from them, it pays to do a lot of independent research. This could be reading articles and books about the subject, or, ideally, a pre-interview with the subject that allows them both to give you direction and to build a relationship with you as their storyteller.

Through this pre-production process, you should be able to define a clear message that the viewer can take away from the piece. In turn, you’ll want to find great characters, people passionate about that message, who will captivate the camera. If your schedule allows it, starting production with the interviews and A-Roll can allow you to be more proactive when it comes to B-Roll later. This can be integral to building an arc and finding the right pacing for the piece. Once you’ve defined your message and found a character who can convey it, you can then structure the rest of the video to move towards that takeaway.

Hunter, a producer at Blueline, discussed natural sound as a great way of modulating that pace. Natural sound, he pointed out, is almost always tied to an action which helps immerse the viewer in the environment of the video. Rather than just telling the story, you’re inviting your viewers to experience it with your subjects.

Once all the pieces are edited and assembled, the folks at Blueline recommended knowing when to walk away and come back. After immersing yourself in a piece, it’s easy to become too close to the material. Giving yourself some space, as well as asking peers for their feedback, can be essential for finding the right final edit.

Video is hard, Tucker and Hunter reminded us again. But it can be a little easier with friends.


What’s New in Camtasia 2019

Another year, another Camtasia release. My thoughts on my experience testing the new features:

  • Audio Leveling – this seems to be the marquee feature, or at least the first listed in TechSmith’s marketing. Basically, you set a project to autolevel all the various audio media in your project to the same target (-18 LUFS – this can’t be adjusted now but may be added in a future release). This will not normalize the audio within a given clip, so loud and soft parts will stay the same relative to each other in the same clip. This feature is mainly aimed at users who are recording in multiple places, and possibly with different microphone inputs. If you’re recording with a consistent, professional setup, this feature probably won’t add much value for you.
  • Cursor Smoothing – I’m not sure who was complaining that their cursor was moving too much on screen, but this effect will algorithmically remove cursor shakiness and replace it with smooth movements based on where you click and leave the cursor on the screen.
  • Custom Keyboard Shortcuts – TechSmith added ten times the number of keyboard shortcuts, so power users can now set their own shortcuts for things like zooming, adding annotations, muting audio, adding custom animations, etc. (For super power users, they’ve also added some MacBook Pro Touch Bar support, allowing you to scrub through the timeline, split clips, and jump between edits.)
  • Add Logos to Themes – At Duke, we have a video branding package that allows us to easily add branded lower-thirds to videos in FCPX and Premiere. This feature allows you to create a similar effect in Camtasia, where you could add a logo like the Chapel bug and make the video feel that much more professional.
  • Batch Export (Mac) – This is a really great addition if you’re creating dozens of videos, as we do when producing online courses. After you’re done editing (or if you’re exporting screen captures of slides to be imported into another editing program), you can now just add all the relevant projects to a queue and export them all with the same settings.
  • Hide Desktop Icons (Mac) – When setting up your recording, just toggle an option to make all the icons on your desktop invisible! Very handy for clutter-prone users like myself. Note: you have to set this before doing your recording. Unlike removing the cursor, this is not something you can adjust after the recording is complete.
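
The audio-leveling feature in the first bullet above is easier to reason about with the decibel math written out. This is only a sketch of the arithmetic, not TechSmith’s implementation – measuring integrated loudness requires the full ITU-R BS.1770 algorithm, so here we just assume a loudness value has already been measured:

```python
TARGET_LUFS = -18.0  # Camtasia 2019's fixed leveling target

def leveling_gain_db(measured_lufs, target_lufs=TARGET_LUFS):
    """Gain in dB that moves a clip's integrated loudness to the target."""
    return target_lufs - measured_lufs

def apply_gain(sample, gain_db):
    """Scale a linear audio sample value by a decibel gain."""
    return sample * 10 ** (gain_db / 20)
```

A clip measured at -24 LUFS gets +6 dB; a hot clip at -12 LUFS gets -6 dB. Because one gain is applied to the whole clip, loud and soft parts keep their relative levels, matching the behavior described above.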
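
Cursor smoothing, likewise, is easy to picture as a path filter. TechSmith hasn’t published their algorithm; a moving average over the recorded cursor positions is the simplest version of the idea:

```python
def smooth_path(points, window=3):
    """Naive cursor smoothing: replace each (x, y) point with the average
    of its neighbors within the window. A hypothetical stand-in for
    whatever Camtasia actually does under the hood."""
    smoothed = []
    for i in range(len(points)):
        lo = max(0, i - window // 2)
        hi = min(len(points), i + window // 2 + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed
```

A larger window flattens more of the jitter at the cost of fidelity to where the cursor actually went.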

Those were the highlights for me, but there are also some new updates to text formatting, device frames, visual effects, etc. Also, I learned TechSmith has its own video review tool! Cool. If you’re at Duke and looking for a video review tool, you can reach out to oit-mt-production@duke.edu and we can set you up with a trial of our preferred platform, LookAt.io.

Recording an Interview with Zoom

For one of our online courses, we wanted to include some video testimonials from former students discussing how the class prepared them for the real world. The only problem was that some of the former students we wished to talk to lived in California – not particularly conducive to a quick recording session in our studio on campus. Instead, we used the video conferencing tool Zoom to facilitate the call, and I used Camtasia to do a screen recording of the interview. While the concept is simple, I found some tips that can make the execution feel a bit more professional.

First, the basics of remote video recording still apply. The subject sat at a desk facing a window, which provided a lot of natural light. It was also around 7am in his time zone, so it was pretty quiet.

In some scenarios, to get the best possible video quality, I’ll ask the subject to record themselves with an application like QuickTime and then send me the video file. While this helps bypass the compression of streaming video and screen capture, it comes with a couple of drawbacks. First, I, as the video producer, don’t have direct control over the actual recording process, which is a risk. Second, subjects are usually doing you a favor just by agreeing to the interview, and the less you ask of them the better.

Ruling this option out, there are two other choices: using Zoom’s built-in recording tool, or using a third-party screen capture tool like Camtasia. They each have their pluses and minuses. Zoom’s built-in tool allows the user to simply hit record within the interface and save the file either to their local computer or to the cloud. This will generate both a video file and an audio-only file. However, if the meeting unexpectedly shuts down or the conversion process is interrupted, the recording files can become corrupted and unrecoverable. With Camtasia, the recording is isolated from the conferencing tool, so I can better trust that it will record successfully, even if the call drops.

Recording with Camtasia does present another problem. If anything shows up on my screen, be it an email notification, or my mouse moving and activating the Zoom room tools, that is all recorded as well. Zoom’s local recording tool will capture just the video feed.

For the purposes of this video, I would just be showing the subject and would edit out the interviewer’s questions. For this reason, I wanted to make sure that Zoom only gave me the video feed of my subject and did not automatically switch video feeds based on who was talking, which it does by default as part of the Active Speaker layout. By using the Pin function, I can pin the subject’s video feed to my interface so that I will only see the subject’s video, whether I record by screen capture or by local recording. This won’t affect other participants’ views; it’s also important to note that it won’t affect the cloud recording view either.

While facilitating the interview, I muted my microphone to ensure no accidental sounds might come from my end. And because we would be editing out the interviewer’s questions, we coached the subject to rephrase each question in his answer. For example, if we asked “Why is programming important to you?” the subject might start their response with “Programming is important to me because…”

Ultimately, it was just a simple matter of starting the video conference, pinning the subject’s video, and hitting record on Camtasia. From there I could just sit back while the interviewer and subject spoke. Like a lot of video production, proper planning and research will make your job a lot easier when it’s actually time to turn the camera on.

Producing a Video Interview

Recently, I had the opportunity to make a short profile video about a robotics graduate student here at Duke, Victoria Nneji. The goal of the video was to compel middle school students to start thinking about college and their future by sharing Victoria’s story.

This production was also a good opportunity for me to work with our new DSLR camera. The filming process was a big change of pace compared to producing scripted lectures in the green screen studio. Here are a couple of thoughts and takeaways on how the production went:

While the DSLR gave the image much better depth-of-field and clarity, I didn’t truly appreciate the limitations of working with it until the day of the shoot. Since the camera has no zoom capability, there’s much less flexibility in where you can place the camera and frame your shot. This was doubly difficult in a scenario where I was also running a secondary camera to capture a wide, two-person shot. Most of the set-up time for the shoot was spent trying to find the right placement for both cameras and the two subjects. Luckily, the in-room overhead lighting worked great; otherwise I’d still be trying to set up the shoot.

Additionally, I neglected to consider that this camera will overheat after about 30 minutes and to plan the shoot around that limitation. While we completed the interview without much trouble, I wasn’t able to get as much b-roll with the camera after the interview as I would’ve liked.

In lieu of more extensive on-site b-roll, I was extremely lucky to find some relevant footage in Duke’s public video folder, which will remain a permanent bookmark for future video projects. The YouTube Audio Library, as always, was a good resource for some introductory music.

Were I to do anything differently, I’d try to add a third camera to the setup and feature more of Emerson, the interviewer. For a video aimed at middle-schoolers, I think it would be good to feature her more prominently. I’d also try to get more footage of the robots in action.

Many thanks to Victoria for sharing her story and to David Stein for coordinating the project.

LiveU Portable Encoder Combines Cellular and WiFi

One portable field encoder that looks like a powerful way to deliver a live broadcast is the LiveU Solo. The LiveU has options to interface directly with Facebook Live as well as a number of other destinations. It supports a number of different connection types, including Ethernet and WiFi, and has two slots for 3G/4G cellular modems. Any of these signals can be bonded together, so you essentially get an aggregate of all the connections the device can manage, capping at a bit rate of 5.5Mbps. This makes the LiveU ideal for any situation in which you would otherwise be relying on a single connection point that you were worried might not operate reliably on its own.
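
The bonding arithmetic described above is straightforward. As a sketch (ignoring real-world bonding overhead, which the device's docs would quantify):

```python
LIVEU_SOLO_CAP_MBPS = 5.5  # maximum aggregate bitrate of the Solo

def bonded_bandwidth(link_speeds_mbps, cap_mbps=LIVEU_SOLO_CAP_MBPS):
    """Usable uplink when bonding several connections: roughly the sum
    of the individual links, capped by the encoder's maximum bitrate."""
    return min(sum(link_speeds_mbps), cap_mbps)
```

So with Ethernet unavailable, a 2.0 Mbps WiFi link plus two 1.5 Mbps LTE modems still adds up to a 5.0 Mbps stream, and two fast links simply hit the 5.5 Mbps ceiling.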

An option with SDI retails for about $1,500, and there is an HDMI-only version for $995.


Basic Logo Animation With Adobe Illustrator and After Effects

For a recent project I was assigned the responsibility of shooting and editing a short one-minute promotion for the Technology Engagement Center. Initially I came up with a nifty electric laser title for the piece, but it came off as potentially intimidating to the target audience of faculty, staff, and students in the Duke community who aren’t that tech savvy. Instead, it was requested that I take the existing logo and get creative with it. No problem.

The initial logo was designed in Adobe Illustrator. It’s a fairly simple and straightforward design with four overlapping hexagons and a title at the bottom. Illustrator works in layers, with each element occupying its own layer with a respective transfer mode that affects how that layer interacts visually with the layers beneath it. If the elements were “flattened” into one layer, each overlapping region of the hexagons would be its own shape. This wouldn’t do for my application and would also result in my needing to animate seven shapes (four hexagons plus three overlapping regions) instead of the initial four. I noted that the layer transfer mode was “Multiply,” with the color of the topmost layer multiplying the color values of the layer beneath it. This comes in handy later, so note this in your own projects if you copy this workflow!

The next step after noting the characteristics of the logo was to export for After Effects. I exported each layer separately.

I exported using the PSD export option, since it preserves layers. You could export separate PNGs, but I know that After Effects handles PSD files fine. You must use CMYK and check “Write Layers” as an option. The other settings were fine. Now it’s time to open Adobe After Effects!

I created a new comp in After Effects that matched the size of the video I was using: 1280 x 720. I then imported my Photoshop layers into the project panel and dragged them down into the comp. Each layer popped up perfectly sized and in position. Now it was time to animate. This was quite honestly the easiest part, though it can be more complicated depending on what you do. I had five layers: one for each hexagon and one for the text, which I decided to animate as one object.

First I changed the transfer mode for the hexagon layers to Multiply to copy the same visual effect that existed in the Illustrator file. Told you that information was going to be handy!
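
Multiply is one of the few transfer modes with a one-line definition, which is why the look survives the round trip from Illustrator to After Effects so cleanly. For 8-bit channel values, the standard formula looks like this (a sketch of the blend math, not Adobe's code):

```python
def multiply_blend(top, bottom):
    """The 'Multiply' transfer mode for 8-bit channel values (0-255):
    the top layer scales the layer beneath it, so overlaps only get darker."""
    return top * bottom // 255
```

Two mid-gray hexagons (value 128) overlap to a darker 64, while a pure white top layer (255) leaves the layer below untouched.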

I left the bottom text layer and hexagon layer modes as Normal, as there was no need for them to interact with anything behind them. I wanted to give the illusion of a “fly in” effect, so I created position and scale keyframes for each hexagon about three seconds in. I then went to the beginning of the comp, enlarged each hexagon significantly, and moved them off screen, with each hexagon going to a different quadrant of the screen. Four hexagons. Four quadrants. Simple.

Lastly, I did a horizontal blur and an opacity fade-in on the bottom text layer to bring in the text. Here’s the result in animated GIF format.

That’s it! The entire process (assuming that your files aren’t flattened and too complex) took only about 30 minutes from start to finish. Granted, you can get as complex as you like with your logos once you get them into After Effects, but the process is still the same and straightforward. Try it out and let me know how it works out for you!

New Machine Caption Options Look Interesting

We wrote in April of last year about the impact of new AI and machine learning advances in the video world, and specifically around captioning. A little less than a year later, we’re starting to see the first packaged services being offered that leverage these technologies and make them available to end users. We’ve recently evaluated a couple options that merit a look:


Syncwords offers machine transcriptions/captions for $0.60/minute, and $1.35/minute for human-corrected transcriptions. We tested this service recently and the quality was impressive. Only a handful of words needed adjustment on the 5-minute test file we used, and none of them seemed likely to significantly interfere with comprehension. The recording quality of our test file was fairly high (low noise, words clearly audible and clearly enunciated).

Turnaround time for machine transcriptions is about 1/3 of the media run time on average. For human-corrected transcriptions, the advertised turnaround time is 3-4 business days, but the company says the average is less than 2 days. The rush human transcription option is $1.95/minute with a guaranteed turnaround of 2 business days and, according to the company, average delivery within a day.
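
For budgeting purposes, those per-minute rates reduce to simple multiplication. A quick sketch using the prices quoted above:

```python
# Syncwords' quoted per-minute rates at the time of our testing.
MACHINE_RATE = 0.60  # machine captions
HUMAN_RATE = 1.35    # human corrected
RUSH_RATE = 1.95     # rush human corrected

def caption_cost(runtime_minutes, rate_per_minute):
    """Cost to caption a file of the given runtime at a per-minute rate."""
    return runtime_minutes * rate_per_minute

def machine_turnaround_minutes(runtime_minutes):
    """Estimated machine turnaround: about a third of the media run time."""
    return runtime_minutes / 3
```

Our 5-minute test file works out to $3.00 for machine captions versus $6.75 human corrected, with a machine turnaround under two minutes.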

Syncwords also notes edu and quantity discounts are available for all of these services, so please inquire with them if interested.


Sonix is a subscription-based service with three tiers: Single-User ($11.25 per month and $6.00 per recorded hour, or $0.10/minute), Multi-User ($16.50 per user/month and $5.00 per recorded hour), and Enterprise ($49.50 per user/month, with pricing available upon request). You can find information about the differences among the tiers here: https://sonix.ai/pricing

The videos in the folder below show the results of our testing of these two services alongside the built-in speech-to-text engine currently utilized by Panopto. To be fair, the service currently integrated with Panopto is free with our Panopto license, and for Panopto to license the more current technology would likely increase their costs and ours. We do wonder, however, whether it is simply a matter of time before currently state-of-the-art services such as those featured here become more of a commodity:



Teleprompt.me is a Free Voice-Controlled Teleprompter Web-App

For anyone looking for a quick and easy web-based teleprompter, Teleprompt.me is a great tool. The folks at Lifehacker put together an informative video about it.

In short:

  • Voice control works similarly to the mobile app PromptSmart, using voice recognition to automatically scroll through the text.
  • It only works in Chrome.
  • It allows you to flip the text so you can output to a mirror-based teleprompter setup.
  • It only works with voice prompts – there’s no auto-scroll option to move the text at a constant speed.

Sony PTZ Cameras

Sony visited Duke University’s Technology Engagement Center this past week to review their pan/tilt/zoom (PTZ) camera offerings. Starting at the entry level, Sony showcased the SRG120, ideal for small conference rooms or classrooms on a budget. The optics held up well compared to Sony’s more expensive offerings, but one limitation of the SRG120 is that it can’t be mounted upside down – not a primary concern, but something to consider. The SRG360SHE is a mid-tier camera ideally suited for larger event spaces where flexibility is key. The SRG360SHE can send content over an IP network connection, 3G-SDI, and HDMI at the same time. The image quality was very clear and the movements were smooth. Rounding out the lineup, Sony’s top-of-the-line BRCX1000 is a 4K studio-quality PTZ camera ideally suited for production environments where image quality is king. While the $9000+ price tag may scare off many AV folks, when compared to the cost of hiring an outside group to film events or a second videographer for multi-cam events, the return on investment can be measured in months.
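
That return-on-investment claim is easy to sanity-check with back-of-the-envelope math. All of the inputs below other than the camera price are hypothetical figures for illustration:

```python
def payback_months(camera_cost, outside_cost_per_event, events_per_month):
    """Months until a purchased camera pays for itself versus hiring out
    each event. Inputs are illustrative, not quoted rates."""
    return camera_cost / (outside_cost_per_event * events_per_month)
```

A $9,000 BRCX1000 against a hypothetical $1,500-per-event outside videographer, at two filmed events a month, pays for itself in three months.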

What PTZ camera review would be complete without control interfaces? Sony demonstrated their new PTZ camera remote controller, the RMIP500. It’s clear Sony has learned from their previous controllers, as the RMIP500 has a number of features, such as the ability to lock out areas of the control surface, that will make controlling your cameras a real joy. It can connect to 100 PTZ cameras and is incredibly customizable. The RMIP10 is Sony’s entry-level control device.

Finally, Sony demonstrated two of their 4K professional monitors. Yes, these are the displays, legendary for their clarity and color accuracy, that true videographers use when filming their next movie. It’s hard to think of a use case on the university side of things, but this is the type of display I’d expect to see in a medical environment where image quality is literally a life-or-death situation.