Using Particular to Build Particle Systems in After Effects

For a recent project, I was tasked with designing a screensaver that had an ethereal pulsing background (like a less busy version of this video). It had to be one minute long, loop without any hiccups when it restarted, and change color over the duration of the piece. In researching how to accomplish this, nearly every resource I found pointed toward one tool: Particular.

Particular is an Adobe After Effects plugin made by Red Giant that gives the user tremendous power in designing and controlling particle systems. It can be used to create anything from the screensaver-type effects described above, to magic wand-esque flourishes (this video has a lot of great examples, though I doubt they used this tool), to a variety of other cool effects. One of my favorites was the ability to dissolve a text object into millions of floating particles, similar to this version of the IMAX logo. As is usually the case in graphic design and video production, once I started looking for particle systems in everyday media, I started seeing them everywhere.

I found the interface surprisingly intuitive, with an incredible amount of depth. Particular includes a “Designer” window which allows you to build effects from scratch or customize a preset template. Here, you can adjust the Emitter (where the particles originate), the Particles themselves, and even an Auxiliary system where the particles generate their own particle systems. All of the effects can stack and interact with each other in very complex ways. Just be sure your computer’s processor is ready to deal with rendering thousands of uniquely animated objects!

Working with this tool, I frequently found myself thinking, “wow, I didn’t know how easy it was to make something this sophisticated and cool.” While the plugin usually sells for $399, an academic license is available for $199.

Remote Directing With Zoom

I needed to produce a short video about my department’s role in building the new Karsh Alumni and Visitors Center at Duke. One problem: I was 3,000 miles away from Durham. Zoom to the rescue. The producer for the project, Mich Donovan, had the great idea of mounting his iPhone to the camera so that I could see pretty much what his camera was seeing, and I was able to provide feedback in real time to the actors and Mich to make sure we got the shots we needed. There were a few glitches when we went outside, like making sure we had cell service and almost running out of battery (next time we’ll bring an external USB battery), but all in all it was a tremendous success.

Comparing Machine Transcription Options from Rev and Sonix

As part of our continuing exploration of new options for transcription and captioning, two members of our media production team tested the automated services offered by both Rev and Sonix. We submitted the same audio and video files to each service and compared the results. Overall, both services were surprisingly accurate and easy to use. Sonix, in particular, offers some unique exporting options that could be especially useful to media producers. Below is an outline of our experience and some thoughts on potential uses.

Accuracy

The quality and accuracy of the transcription seemed comparable. Both produced transcripts with about the same number of errors, and, interestingly, the errors almost always occurred in different places. All of the transcripts would need cleaning up for official use but would work just fine for editing or review purposes. A slight edge might go to Rev here: it did a noticeably better job of distinguishing and identifying unique speakers, punctuating, and in general (but not always) recognizing names and acronyms.
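
To put “about the same number of errors” on firmer footing, a word error rate comparison works well. Here’s a minimal Python sketch; the transcript file names are hypothetical placeholders, and this isn’t how either service scores itself:

```python
# A minimal word error rate (WER) sketch for comparing transcripts.
# The file names are hypothetical; neither service scores itself this way.
def wer(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / max(len(ref), 1)

reference = open("human_transcript.txt").read()
print(f"Rev WER:   {wer(reference, open('rev.txt').read()):.2%}")
print(f"Sonix WER: {wer(reference, open('sonix.txt').read()):.2%}")
```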

Interface

When it came time to share and edit the transcripts, both services offered similar web-based collaborative tools. The tools feature basic word processing functions and allow multiple users to highlight, strikethrough, and attach notes to sections of text. After its recent updates, the Rev interface is slightly cleaner and more streamlined, but the services are pretty much even in this category.

Export Options

This is where things get interesting. Both services allow users to export transcripts as documents (Microsoft Word, Text File, and, for Sonix, PDF) and captions (SubRip and WebVTT). However, Sonix offers some unique export options. When exporting captions, Rev automatically formats the length and line breaks of the subtitles and produces reliable results. Sonix, on the other hand, provides several options for formatting captions, including character length, time duration, number of lines, and whether or not to include speaker names. The downside was that using the default settings for caption exporting in Sonix led to cluttered, clunky results, but the additional options would be useful for those looking for more control over how their captions are displayed.
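
To give a flavor of what a character-length control does, here’s a toy Python sketch of caption wrapping. It’s an illustration only, not Sonix’s or Rev’s actual exporter, and the line and cue limits are made-up defaults:

```python
# A toy sketch of one caption-formatting control (max characters per line),
# in the spirit of Sonix's export options -- not either service's actual code.
import textwrap

def wrap_cue(text: str, max_chars: int = 32, max_lines: int = 2) -> list[str]:
    """Split one caption's text into blocks of at most max_lines lines,
    each at most max_chars characters, without breaking words."""
    lines = textwrap.wrap(text, width=max_chars)
    # Overflow lines roll into a follow-on block (a new cue in a real file).
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

for block in wrap_cue("Overall, both services were surprisingly accurate "
                      "and easy to use."):
    print(block, end="\n\n")
```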

Sonix also offers two completely different kinds of exports. First, users can export audio or video files that include only highlighted sections of the transcript or exclude strikethroughs. Basically, you can produce a very basic audio or video edit by editing the transcript text. It unfortunately does not allow users to move or rearrange sections of media, and the edits are all hard cuts, so it’s a rather blunt instrument, but it could be useful for rough cuts or for those with minimal editing skills.

Second, Sonix provides the option of exporting XML files that are compatible with Adobe Audition, Adobe Premiere, and Final Cut Pro. When imported into the editing software, these work like edit decision lists that automatically cut and label media in a timeline. We tried this with two different audio files intended for a podcast, and it worked great. This has the potential to be useful for more complicated and collaborative post-production workflows, an online equivalent of an old school “paper edit.” Again, the big drawback here is the inability to rearrange the text. It could save time when cutting down raw footage, but a true paper edit would still require editing the transcript with timecode in a word processing program.

And the winner is…

Everyone. Both Rev and Sonix offer viable and cost-effective alternatives to traditional human transcription. Though the obvious compromise in accuracy exists, it is much less severe than you might expect. Official transcripts or captions could be produced with some light editing, and, from a media production perspective, quick and cheap transcripts can be an extremely useful tool in the post-production process. Those looking to try a new service, or to stick with the one they’re familiar with, can be confident that they’re getting high-quality machine transcription with either company. As more features get added and improved, like those offered by Sonix, this could become a helpful tool throughout the production process.

Quick and Easy Color Correction Using Video Scopes

The color correction tools built into most editing software are obviously useful for fixing glaring problems with variables like exposure and white balance, but spending a few minutes applying simple correction can make even decent-looking video pop. Video scopes can be intimidating at first, but, once understood, they make color correction a breeze and eliminate second-guessing. There are plenty of introductory primers on what video scopes are and how they work. I like this one, for example.

Checking video scopes is a regular part of my post production process, and I almost always end up making at least minor tweaks. Everyone has their own approach to color correction, but I’ll share my own basic, default workflow here as an example.

I begin by adjusting luminance using the waveform monitor. I first set the white (top line) and black (bottom line) levels. I can then adjust the midtones as needed to get an even spread of points throughout the scope.

Next, I adjust the saturation level if needed to add some vibrance to the image, and, finally, I check the color using the vectorscope. To make this step easier, I zoom in on parts of the image to isolate useful colors for correction (whites, blacks, and skin tones). I can then adjust the color to sit where it belongs on the scope (center for shadows and highlights and the skin tone line for the skin tones).
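
For the curious, the waveform monitor from the first step is conceptually simple: a histogram of brightness per pixel column. Here’s a rough Python/numpy sketch of the idea, assuming a still exported as a hypothetical "frame.png"; the real scopes in editing software are far more refined:

```python
# A rough sketch of what a luma waveform plots: a histogram of brightness
# per pixel column. "frame.png" is a placeholder for an exported still.
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("frame.png").convert("RGB"), dtype=np.float32)

# Rec. 709 luma: how bright each pixel reads on the scope.
luma = 0.2126 * frame[..., 0] + 0.7152 * frame[..., 1] + 0.0722 * frame[..., 2]

h, w = luma.shape
waveform = np.zeros((256, w), dtype=np.int32)  # rows = luma level, cols = x
rows = luma.clip(0, 255).astype(np.int32)
cols = np.broadcast_to(np.arange(w), (h, w))
np.add.at(waveform, (rows, cols), 1)

# The lowest/highest occupied levels are the current black and white points;
# an even spread in between means balanced midtones.
levels = np.nonzero(waveform.sum(axis=1))[0]
print(f"black point ≈ {levels.min()}, white point ≈ {levels.max()} (0-255)")
```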

And that’s it! The process only takes a minute or two and can make a good image look even better.

Video Working Group: Production Advice from Blueline

Producers from the digital agency Blueline presented to Duke’s Video Working Group this month about their video production process and experience working with universities. There were a lot of highlights, but I’ve tried to consolidate their comments and takeaways here.

Video is hard, they stated. There are a lot of moving parts, expectations, and things that can go wrong. “We don’t make perfect videos,” Tucker, a video director at the agency, insisted. However, having good gear, plenty of time, and the right team can make video a little easier to produce. While many producers are familiar with the “one-person-band” production strategy, Blueline relies on the varied skillsets of its team of directors, editors, colorists, and other creators to achieve its vision for each project.

All of those creators, and the decisions they make, are in service to that unique vision. Blueline tries to match technical choices to the idea for each story. Gimbals, for instance, can make a shot look smooth and professional. A shaky camera shot can provide an energy of improvisation or excitement. As a video producer, you need to start with your story and then see what creative choices (along with any practical circumstances) best support that story.

Finding the right story to tell is often one of the biggest challenges. When starting a new project, their team does an extensive amount of pre-production work. This usually begins with clarifying expectations with the client and determining what inspirations or references they might have for the final product. Almost always, your client is the expert on both the story and the audience. After learning as much as you can from them, it pays to do a lot of independent research. This could mean reading articles and books about the subject, or, ideally, a pre-interview that allows the subject to give you direction and to build a relationship with you as their storyteller.

Through this pre-production process, you should be able to define a clear message that the viewer can take away from the piece. In turn, you’ll want to find great characters, people passionate about that message, who will captivate the camera. If your schedule allows, starting production with the interviews and A-roll lets you be more proactive when it comes to B-roll later. This can be integral to building an arc and finding the right pacing for the piece. Once you’ve defined your message and found a character who can convey it, you can then structure the rest of the video to move toward that takeaway.

Hunter, a producer at Blueline, discussed natural sound as a great way of modulating that pace. Natural sound, he pointed out, is almost always tied to an action which helps immerse the viewer in the environment of the video. Rather than just telling the story, you’re inviting your viewers to experience it with your subjects.

Once all the pieces are edited and assembled, the folks at Blueline recommended knowing when to walk away and come back. After immersing yourself in a piece, it’s easy to become too close to the material. Giving yourself some space, as well as asking peers for their feedback, can be essential for finding the right final edit.

Video is hard, Tucker and Hunter reminded us again. But it can be a little easier with friends.


What’s New in Camtasia 2019

Another year, another Camtasia release. My thoughts on my experience testing the new features:

  • Audio Leveling – this seems to be the marquee feature, or at least the first listed in TechSmith’s marketing. Basically, you set a project to auto-level all the various audio media in your project to the same target (-18 LUFS; this can’t be adjusted now but may be adjustable in a future release). This will not normalize the audio within individual media, so loud and soft parts will stay the same relative to each other in the same clip (see the sketch after this list). This feature is mainly aimed at users who are recording in multiple places, and possibly with different microphone inputs. If you’re recording with a consistent, professional setup, this feature probably won’t add much value for you.
  • Cursor Smoothing – I’m not sure who was complaining that their cursor was moving too much on screen, but this effect will algorithmically remove cursor shakiness and replace it with smooth movements based on where you click and leave the cursor on the screen.
  • Custom Keyboard Shortcuts – TechSmith added 10x the number of keyboard shortcuts, so power users can now set their own shortcuts for things like zooming, adding annotations, muting audio, adding custom animations, etc. (For super power users, they’ve also added some MacBook Pro Touch Bar support, allowing you to scrub through the timeline, split clips, and jump between edits.)
  • Add Logos to Themes – At Duke, we have a video branding package that allows us to easily add branded lower-thirds to videos in FCPX and Premiere. This feature allows you to create a similar effect in Camtasia, where you could add a logo like the Chapel bug and make the video feel that much more professional.
  • Batch Export (Mac) – This is a really great addition if you’re creating dozens of videos, as we do when producing online courses. After you’re done editing (or if you’re exporting screen captures of slides to be imported into another editing program), you can now just add all the relevant projects to a queue and export them all with the same settings.
  • Hide Desktop Icons (Mac) – When setting up your recording, just toggle an option to make all the icons on your desktop invisible! Very handy for clutter-prone users like myself. Note: you have to set this before doing your recording. Unlike removing the cursor, this is not something you can adjust after the recording is complete.
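
For a sense of what loudness targeting like the Audio Leveling feature involves under the hood, here’s a minimal Python sketch using the third-party soundfile and pyloudnorm packages. The file name is a placeholder and Camtasia’s actual algorithm isn’t public; this just shows the general approach of applying one gain offset to hit a LUFS target:

```python
# A sketch of hitting a -18 LUFS target with one gain offset for the whole
# clip, using the third-party soundfile and pyloudnorm packages.
# "narration.wav" is a placeholder; Camtasia's own algorithm isn't public.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("narration.wav")        # samples + sample rate
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # clip's measured LUFS

# One offset for the whole clip, so loud and soft parts keep their
# relationship to each other, matching the behavior described above.
leveled = pyln.normalize.loudness(data, loudness, -18.0)
sf.write("narration_leveled.wav", leveled, rate)
```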

Those were the highlights for me, but there are also some new updates to text formatting, device frames, visual effects, etc. Also, I learned TechSmith has its own video review tool! Cool. If you’re at Duke and looking for a video review tool, you can reach out to oit-mt-production@duke.edu and we can set you up with a trial of our preferred platform, LookAt.io.

Recording an Interview with Zoom

For one of our online courses, we wanted to include some video testimonials with former students to discuss how the class prepared them for the real world. The only problem was that some of the former students we wished to talk to lived in California – not particularly conducive to a quick recording session in our studio on campus. Instead, we used the video conferencing tool Zoom to facilitate the call, and I used Camtasia to do a screen recording of the interview. While the concept is simple, I found some tips that can make the execution feel a bit more professional.

First, the basics of remote video recording still apply. The subject sat at a desk facing a window that provided a lot of natural light. It was also around 7am in his time zone, so it was pretty quiet as well.

In some scenarios, to get the best possible video quality, I’ll ask the subject to record themselves with an application like QuickTime and then send me the video file. While this helps bypass the compression of streaming video and screen capture, it comes with a couple of drawbacks. First, as the video producer, I don’t have direct control over the actual recording process, which is a risk. Second, subjects are usually doing you a favor just by agreeing to the interview, and the less you ask of them the better.

Ruling this option out, there are two other choices: using Zoom’s built-in recording tool, or using a third-party screen capture tool like Camtasia. They each have their pluses and minuses. Zoom’s built-in tool allows the user to simply hit record within the interface and save the file either to their local computer or to the cloud. This will generate both a video file and an audio-only file. However, if the meeting unexpectedly shuts down or the conversion process is interrupted, the recording files can become corrupted and unrecoverable. With Camtasia, the recording is isolated from the conferencing tool, so I can better trust that it will record successfully, even if the call drops.

Recording with Camtasia does present another problem. If anything shows up on my screen, be it an email notification or my mouse moving and activating the Zoom room tools, that is all recorded as well. Zoom’s local recording tool will capture just the video feed.

For the purposes of this video, I would just be showing the subject and would edit out the interviewer’s questions. For this reason, I wanted to make sure that Zoom only gave me the video feed of my subject and did not automatically switch video feeds based on who was talking, which it does by default as part of the Active Speaker layout. By using the Pin function, I can pin the subject’s video feed to my interface so that I will only see the subject’s video, whether I record by screen capture or by local recording. This won’t affect other participants’ views, though it’s worth noting that it won’t affect the cloud recording view either.

While facilitating the interview, I muted my microphone to ensure no accidental sounds might come from my end. And because we would be editing out the interviewer’s questions, we coached the subject to rephrase each question in his answer. For example, if we asked “Why is programming important to you?” the subject might start their response with “Programming is important to me because…”

Ultimately, it was just a simple matter of starting the video conference, pinning the subject’s video, and hitting record on Camtasia. From there I could just sit back while the interviewer and subject spoke. Like a lot of video production, proper planning and research will make your job a lot easier when it’s actually time to turn the camera on.

Producing a Video Interview

Recently, I had the opportunity to make a short profile video about a robotics graduate student here at Duke, Victoria Nneji. The goal of the video was to compel middle school students to start thinking about college and their future by sharing Victoria’s story.

This production was also a good opportunity for me to work with our new DSLR camera. The filming process was a big change of pace compared to producing scripted lectures in the green screen studio. Here are a couple of thoughts and takeaways on how the production went:

While the DSLR gave the image a much nicer depth of field and clarity, I didn’t truly appreciate the limitations of working with it until the day of the shoot. Since the camera has no zoom capability, there’s much less flexibility in where you can best place the camera and frame your shot. This was doubly difficult in a scenario where I was also running a secondary camera to capture a wide, two-person shot. Most of the set-up time for the shoot was spent trying to find the right placement for both cameras and the two subjects. Luckily, the in-room overhead lighting worked great; otherwise I’d still be trying to set up the shoot.

Additionally, I neglected to consider that this camera overheats after about 30 minutes, and to plan the shoot around that limitation. While we completed the interview without much trouble, I wasn’t able to get as much B-roll with the camera after the interview as I would’ve liked.

In lieu of more extensive on-site B-roll, I was extremely lucky to find some relevant footage in Duke’s public video folder, which will remain a permanent bookmark for future video projects. The YouTube Audio Library, as always, was a good resource for some introductory music.

Were I to do anything differently, I’d try to add a third camera to the setup and feature more of Emerson, the interviewer. For a video aimed at middle-schoolers, I think it would be good to feature her more prominently. I’d also try to get more footage of the robots in action.

Many thanks to Victoria for sharing her story and to David Stein for coordinating the project.

LiveU Portable Encoder Combines Cellular and WiFi

One portable field encoder that looks like a powerful way to deliver a live broadcast is the LiveU Solo. The LiveU has options to interface directly with Facebook Live as well as a number of other destinations. It supports a number of different connection types, including Ethernet and WiFi, and has two slots for 3G/4G cellular modems. Any of these signals can be bonded together, so you essentially get an aggregate of all the connections the device can manage, capping at a bitrate of 5.5 Mbps. This makes the LiveU ideal for any situation in which you would otherwise be relying on a single connection point you were worried might not operate reliably on its own.
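
The bonding arithmetic is straightforward. As a back-of-the-envelope Python sketch (the link speeds below are made-up examples, not LiveU measurements):

```python
# Back-of-the-envelope bonding math: the Solo aggregates all links but tops
# out at 5.5 Mbps. Link speeds below are made-up examples, not measurements.
SOLO_CAP_MBPS = 5.5

links = {"ethernet": 0.0, "wifi": 2.0, "modem_1": 1.8, "modem_2": 1.5}
aggregate = min(sum(links.values()), SOLO_CAP_MBPS)

# Leave ~25% headroom so the stream survives a dip on any single link.
safe_bitrate = 0.75 * aggregate
print(f"aggregate: {aggregate:.1f} Mbps -> safe stream bitrate: {safe_bitrate:.1f} Mbps")
```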

An option with SDI retails for about $1,500, and there is an HDMI-only version for $995.

https://www.amazon.com/LiveU-Wireless-Streaming-Encoder-Facebook/dp/B077ZCS3RV

Basic Logo Animation With Adobe Illustrator and After Effects

For a recent project, I was assigned the responsibility of shooting and editing a short one-minute promotion for the Technology Engagement Center. Initially I came up with a nifty electric laser title for the piece, but it came off as potentially intimidating to the target audience of faculty, staff, and students in the Duke community who aren’t that tech savvy. Instead, it was requested that I take the existing logo and get creative with it. No problem.

The initial logo was designed in Adobe Illustrator. It’s a fairly simple and straightforward design with four overlapping hexagons and a title at the bottom. Illustrator works in layers, with each element occupying its own layer with a respective transfer mode that affects how that layer interacts visually with the layers beneath it. If the elements were “flattened” into one layer, each overlapping region of the hexagons would be its own shape. This wouldn’t do for my application, and it would also mean animating seven shapes (the four hexagons plus three overlapping regions) instead of the original four. I noted that the layer transfer mode was “Multiply,” with the color of the topmost layer multiplying the color values of the layer beneath it. This comes in handy later, so note it in your own projects if you copy this workflow! The next step was to export for After Effects. I exported each layer separately.

I exported using the PSD option, since it lets you preserve layers. You could export separate PNGs, but I know that After Effects handles PSD files fine. You must use CMYK and check “Write Layers” as an option. The other settings were fine. Now it’s time to open Adobe After Effects!

I created a new comp in After Effects that matched the size of the video I was making: 1280 x 720. I then imported my Photoshop layers into the project panel and dragged them down into the comp. Each layer popped up perfectly sized and in position. Now it was time to animate. This was quite honestly the easiest part, though it can be more complicated depending on what you do. I had five layers: one for each hexagon and one for the text, which I decided to animate as a single object.

First, I changed the transfer mode for the hexagon layers to Multiply to recreate the same visual effect that existed in the Illustrator file. Told you that information was going to be handy!
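
If you’re curious what Multiply actually does, the math is simple: each channel is normalized to 0-1 and the two layers are multiplied, so overlaps always come out darker. A quick Python/numpy sketch (the colors are arbitrary examples):

```python
# Multiply blend in a nutshell: normalize each channel to 0-1 and multiply,
# so overlaps always come out darker. The colors are arbitrary examples.
import numpy as np

top = np.array([255, 200, 120], dtype=np.float32) / 255.0     # upper hexagon
bottom = np.array([120, 220, 255], dtype=np.float32) / 255.0  # hexagon below

blended = top * bottom                        # per-channel multiply
print((blended * 255).round().astype(int))    # the darker overlap color
```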

I left the bottom text layer and the bottommost hexagon layer set to Normal, as there was no need for them to interact with anything behind them. I wanted to give the illusion of a “fly in” effect, so I created position and size keyframes for each hexagon about 3 seconds in. I then went to the beginning of the comp, enlarged each hexagon significantly, and moved them off screen, with each hexagon going to a different quadrant of the screen. Four hexagons. Four quadrants. Simple.

Lastly, I did a horizontal blur and an opacity fade-in on the bottom text layer to bring in the text. Here’s the result in animated GIF format.

That’s it! The entire process (assuming your files aren’t flattened or too complex) took only about 30 minutes from start to finish. Granted, you can get as complex as you like with your logos once you get them into After Effects, but the basic process stays the same and straightforward. Try it out and let me know how it works out for you!