Using Adobe Premiere Rush for Simple Video Editing

In supporting DIY video creation on campus, one of the most frequent questions we hear is how best to edit the video you’ve filmed. While Macs have iMovie built in, Windows has no built-in equivalent. And while Final Cut Pro X and Adobe Premiere are both available at the Multimedia Project Studio, they can be overwhelming to new users. Addressing both issues is Adobe Premiere Rush, available as part of Creative Cloud. Not only does it run on Mac and PC with the same interface and user experience, it’s also available for free on Android and iOS mobile devices.

Its features and workflow are no-frills essentials. You select and import the video clips you would like to edit, rearrange and trim them on a timeline, add some graphics and transitions, then export at your resolution of choice. For instances in which you need to cut together some shots from your iPhone, or remove a section from a Zoom recording, Rush is a way to quickly make the needed edits without the logistics required by more advanced software. And if you ever get more ambitious about your project, Rush also allows you to open your project in Premiere.

To learn more, LinkedIn Learning offers an hour-long course on the software.

Powtoon Creates PowErful Security Message

Duke’s IT security offices are rolling out three new videos this fall as part of a strategic effort to expand security training for staff with access to sensitive Duke data.

The three videos, available in the Duke Learning Management System, take about 12 minutes to view and are designed to help Duke staff understand and recognize common security threats to Duke, utilize tools and techniques to reduce security risk, and understand how to protect information and report security incidents.

The training — developed by Cara Bonnett, Shelly Epps, Jay Gallman and Gaylynn Fassler — started with an initial draft of the script to make it as concise as possible. The goal was to present the content in a clear, understandable way, and to incorporate a balance of professional, relatable images that would speak to a diverse multi-generational population. The team used the Powtoon video platform, with voice-over recorded using a Blue Yeti microphone.

The first drafts of the videos were reviewed by both university and Duke Health security teams, with additional consultation with partners in branding/communications, accessibility and Learning & Organizational Development. The videos were loaded into the Duke LMS, along with a bank of questions used in a “knowledge check” required to successfully complete the training.

The team invited OIT and DHTS staff to participate in a pilot of the training and provide feedback via a short Qualtrics survey. More than 450 staff took the training, and the resulting feedback will be incorporated before rolling out the training to the broader Duke community this fall.

Gaylynn Fassler, a member of the production team


Video Working Group: Visual Misinformation

This month’s Duke Video Working Group topic centered on visual misinformation and the work the Duke Reporters’ Lab is doing to address a media landscape where truth is harder and harder to discern. Joel Luther showcased how schemas like ClaimReview can help create a common language for fact-checking and for identifying mistruths in the media. Particularly interesting was how, using machine learning, platforms are being developed that can provide real-time automated fact-checking. Since politicians repeat themselves so often, AI models can be trained to recognize a statement as it is being said and then display previously cited sources that prove, disprove, or clarify that claim to the viewer.
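For a sense of what that common language looks like, here is a minimal sketch of a ClaimReview record using the schema.org vocabulary. The URLs, names, and claim text are placeholders invented for illustration, not real fact checks:

```python
import json

# Hypothetical ClaimReview markup (schema.org vocabulary).
# All names, URLs, and ratings below are placeholder values.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.com/fact-checks/claim-123",
    "claimReviewed": "Example claim as stated by a public figure",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Speaker"},
        "datePublished": "2020-01-15",
    },
    "author": {"@type": "Organization", "name": "Example Fact-Check Org"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly False",
    },
}

print(json.dumps(claim_review, indent=2))
```

Because every fact-checker publishes the same fields (the claim, who said it, and a rating), platforms and search engines can aggregate and match checks against statements automatically.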

We also discussed the role of deepfakes and digital manipulation of video. Using some basic editing tools, a bad actor can distort an otherwise normal video of someone to make them appear drunk or unflattering. With more advanced tools involving machine learning, a bad actor can map a famous person’s face onto almost anyone. While this deepfake technology has not yet reached the point of being totally seamless, many universities and institutions are pursuing not only how to create the “perfect deepfake” but also how to identify one. In the meantime, the technology has emboldened others to debate the veracity of any kind of video: if any video could be fake, how will we know when something is actually real?

360 Video in 2020

Insta360 One R

We’ve been experimenting in the 360 video / VR headset space for a couple of years now, and it’s been fascinating to follow the trend in real time. In particular, we’ve been working with the Insta360 Pro and the Oculus Go headset to explore academic use cases for immersive video projects. As we start a new year, recent announcements from both Insta360 and Oculus point toward diminishing interest in this use case and in 360 video in general.

As mentioned in a recent blog post, Insta360’s new camera is the One R. It encourages you to “adapt to the action” with two ways to shoot: as a 360 cam or as a 4K 60fps wide-angle lens. It features an AI-driven tracking algorithm to automatically follow moving subjects in your shots, and its Auto Frame algorithm automatically detects which portions of a 360 shot would work best within a 16:9 frame. In almost every marketed feature, there’s a subtext of using the 360 camera as a powerful tool for outputting 16:9 video. Coming from one of the leaders in the 360 camera space, this focus isn’t particularly encouraging for the long-term prospects of 360 video consumption.

Viewing 360 video was always at its most immersive in a headset, which has proved to be one of the biggest barriers to wider adoption: most viewers are unlikely to even own a headset, let alone find it and put it on just to watch a video. As such, the standalone $200 Oculus Go seemed a natural solution for businesses that could produce their own 360 content and simply hand a headset to their client. Recently, however, Oculus dropped the Go from its Oculus for Business platform, suggesting the Oculus Quest is the best solution for most business VR needs. This development sees Oculus leaning more toward support for full virtual reality and less toward immersive 360 video playback.

While certainly not gone from the conversation, excitement about and applications for 360 video seem to have waned from a couple of years ago. We’ll continue to search for use cases and projects that show the potential of this technology, so please reach out to the DDMC if you find any exciting possibilities.

New Insta360 ONE R

Insta360 just launched its latest 360 camera, the ONE R. It’s actually a modular system rather than a single, self-contained camera. Only time will tell, but the ONE R could be an innovative approach to packing the burgeoning features we’re seeing in the action and 360 camera spaces into a workable form factor. Certainly Insta360 seems to have doubled down on using 360 capture as coverage for standard 16:9 action shots.

The ONE R starts with a battery base and a touch-screen core that sits on top (it can be installed facing forward or backward depending on the use case), next to an empty slot that can hold one of the following:

  • A 5.7K 360 camera
  • A 4K action camera that records at 60fps for 4K and 200fps for 1080p
  • A 5.3K wide-angle (14.4mm equivalent) mod, co-developed with camera company Leica, that has a 1-inch sensor (30fps for 5.3K, 60fps for 4K, and 120fps for 1080p)


Key features include:

  • Insta360’s FlowState stabilization across all three modules
  • Waterproof to 16 feet, despite the modular design
  • An aerial mod that makes it possible to hide your drone from the footage
  • External mic support
  • Various remote control options, including Apple Watch, voice, and a GPS-enabled smart remote
  • Selfie stick
  • Motion tracking to lock in on subjects
  • Tons of software/post-production options like bullet time, time lapse, slo-mo, etc.

We’re not seeing a ton of immediate academic use cases for these features, but we’ll certainly keep the ONE R in mind if the right project arises.


Behind the Scenes of “Posters, Actually”

This year, I had the privilege of working with Mark Delong to bring his annual poster symposium deadline video to life. You can watch the whole video here: https://youtu.be/OGDSXK5crd8

Mark had a particularly ambitious vision for this year’s video, so I thought it would be worthwhile to discuss our creative process and how we tackled various production challenges.

We began development in October, when Mark provided a ten-page script for the project, with multiple scenes and characters. More than a simple song parody, he envisioned what amounted to a short film, one that matched, scene for scene, the Billy Mack plotline from the 2003 movie Love Actually. While we would eventually narrow the scope of the script, it was clear early on that I would need to ensure the production value matched Mark’s cinematic vision. Among other things, this meant filming for a wider aspect ratio (2.55:1 versus the typical 16:9), using our DSLR for better depth of field, and obtaining a camera stabilizer so I could add some movement to the shots.

The first two were relatively straightforward. I’d use our Sony a7 III to film in 4K and crop the video to the desired aspect ratio. We didn’t have a stabilizer, so after a little research our team purchased the Zhiyun Weebill Lab package. In this review post, I go into more detail about our experience using it. Having never worked with a gimbal like this before, I enjoyed the chance to experiment with a new tool.

Our first day of filming was at the WXDU radio station at the Rubenstein Arts Center. They were kind enough to let us use their podcast recording studio, which was the perfect set for the Tina and Tom scene. I quickly realized the first challenge in recording with the stabilizer would be capturing good audio. The size of the stabilizer simply didn’t allow me to affix a shotgun mic to my camera, and I didn’t have anyone else to work a boom mic for me. Ultimately, I decided to run two cameras: a stationary 4K Sony camcorder to capture audio and provide basic shot coverage, while I roamed with the stabilized DSLR. Between running two cameras, directing the performers, and making sure we captured everything we needed, I was spinning a lot of plates. To compensate, we filmed the scene multiple times to ensure we had redundant takes of every line, which provided a much-needed safety net in editing.

We filmed every other shot on green screen at the Technology Engagement Center. Though at first simpler than shooting a three-person dialogue scene, it came with its own challenges. Principally, contrary to most green screen filming we do, the intention here was to make the performers look like they were on a real set. This meant anticipating the angle and lighting conditions of the background we’d place them on. Though it wouldn’t be seamless, the goofy nature of the video would hopefully allow us some leeway in how realistic everything needed to look. Since I was moving the camera, the hardest part was making the background move in a natural parallax behind Mark. This was easy enough when the camera stayed at the same distance, but almost impossible to get right when I moved the camera toward him. For this reason, in the poster symposium scene I faded the composited elements behind Mark to a simple gradient, justified by the dreamy premise of that part of the video.

Perhaps the biggest challenge was not related to video at all. For the song parody, we recorded using a karaoke backing track we found on YouTube. However, the track had built-in backing vocals that were almost impossible to remove. Luckily, we had our own rock star on staff, Steve Toback, who was able to create a soundalike track from scratch using GarageBand. His version ended up being so good that when we uploaded the final video to YouTube, the track triggered an automated copyright claim.

Were I to do it all over again, there are a few things I would try to do differently. While running the stabilizer, I would be more conscious of the camera’s autofocus, as it would sometimes focus on the microphones in front of the performers rather than the performers themselves. I also sometimes forgot I’d be cropping the video to a wider aspect ratio and framed the shot for a 16:9 image, so I would remind myself to shoot a little wider than I normally might. Overall, though, I’m satisfied with how everything turned out. I’m grateful for all the support during the production, particularly from Mark and Steve, without whom none of this would have been possible.

iPhone 11 Announced with Improvements to Camera

iPhone 11 Pro Camera

They say the best camera is the one you have with you. With Apple’s upgrades to the camera in the recently announced iPhone 11 series, this adage may be more true than ever.

For most of our production on online courses, we use a Sony Handycam for its versatility, and a DSLR for interviews and other beauty shots. In the course of filming, however, I often find myself reaching for my iPhone 8 to supplement that footage. For a course on Nanotechnology, I used the slo-mo feature to capture how liquid nitrogen can make everyday objects more fragile. For some behind-the-scenes b-roll, I found the built-in stabilization allowed me to capture extended tracking shots with few hiccups.

The iPhone 11’s improved camera now makes a strong case for filming on a phone in many scenarios. The Verge has a great write-up of the specifics, but the highlights to me are:

  • Wide-Angle Lens on the Base Model – I’ve often found myself in rather small settings where I simply couldn’t get back far enough with our traditional cameras to fit everything I needed in one shot. Here, in lieu of investing in a dedicated wide-angle lens for the DSLR, I could try subbing in my iPhone to get the one wide shot I need.
  • Recording on Multiple Cameras on the 11 Pro – This is a great solution for when you need to shoot first and ask questions later. Though it will surely take up a lot of storage space, having more flexibility in post-production is always a good thing.
  • Audio Zoom on 11 Pro – I always recommend that videographers using an iPhone use an external mic to capture dialogue. If this feature can isolate audio coming from a central on-camera subject, that could make impromptu video interviews much more feasible.

Using Particular to Build Particle Systems in After Effects

For a recent project, I was tasked with designing a screensaver that had an ethereal pulsing background (like a less busy version of this video). It had to be one minute long, loop without any hiccups when it restarted, and also change color over the duration of the project. In researching how to accomplish this, nearly every resource I found pointed toward one tool: Particular.

Particular is an Adobe After Effects plugin made by Red Giant that gives the user tremendous power in designing and controlling particle systems. It can be used to create anything from the screensaver-type effects described above, to magic-wand-esque flourishes (this video has a lot of great examples, though I doubt they used this tool), to a variety of other cool effects. One of my favorites was the ability to dissolve a text object into millions of floating particles, similar to this version of the IMAX logo. As is usually the case in graphic design and video production, once I started looking for particle systems in everyday media, I started seeing them everywhere.

I found the interface surprisingly intuitive with an incredible amount of depth. Particular includes a “Designer” window which allows you to build the effects from scratch or customize a pre-set template. Here, you can adjust the Emitter (where the particles originate from), the Particles themselves, and even an Auxiliary system where the particles generate their own particle systems. All of the effects can stack and interact with each other in very complex ways. Just be sure your computer’s processor is ready to deal with rendering thousands of uniquely animated objects!

Working with this tool, I frequently found myself thinking “wow, I didn’t know how easy it was to make something this sophisticated and cool.” While the plug-in usually sells for $399, an academic license is available for $199.

Remote Directing With Zoom

I needed to produce a short video about my department’s role in building the new Karsh Alumni & Visitors Center at Duke. One problem: I was 3,000 miles away from Durham. Zoom to the rescue. The producer for the project, Mich Donovan, had the great idea of mounting his iPhone to the camera so that I could see pretty much what his camera was seeing, and I was able to provide feedback in real time to the actors and Mich to make sure we got the shots we needed. There were a few glitches when we went outside, like making sure we had cell service and almost running out of battery (next time we’ll bring an external USB battery), but all in all it was a tremendous success.

Comparing Machine Transcription Options from Rev and Sonix

As part of our continuing exploration of new options for transcription and captioning, two members of our media production team tested the automated services offered by both Rev and Sonix. We submitted the same audio and video files to each service and compared the results. Overall, both services were surprisingly accurate and easy to use. Sonix, in particular, offers some unique exporting options that could be especially useful to media producers. Below is an outline of our experience and some thoughts on potential uses.

Accuracy

The quality and accuracy of the transcription seemed comparable; both services produced transcripts with about the same number of errors. Interestingly, though errors occurred at similar rates, they almost always occurred in different places. All of the transcripts would need cleaning up for official use but would work just fine for editing or review purposes. The slight edge might go to Rev here: it did a noticeably better job at distinguishing and identifying unique speakers, punctuating, and in general (but not always) recognizing names and acronyms.
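For anyone wanting to quantify "about the same number of errors," the standard metric is word error rate: the word-level edit distance between a reference transcript and the machine's output, divided by the reference length. A minimal sketch (the sample sentences are invented for illustration):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

ref = "the quality and accuracy of the transcription seemed comparable"
hyp = "the quality and accuracy of the transcription seem comparable"
print(round(word_error_rate(ref, hyp), 3))  # → 0.111 (1 substitution in 9 words)
```

Running the same audio through both services and comparing their rates against a hand-corrected reference is a quick, repeatable way to make the "comparable accuracy" judgment concrete.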

Interface

When it came time to share and edit the transcripts, both services offered similar web-based collaborative tools. The tools feature basic word processing functions and allow multiple users to highlight, strike through, and attach notes to sections of text. After its recent updates, the Rev interface is slightly cleaner and more streamlined, but again, the services are pretty much even in this category.

Export Options

This is where things get interesting. Both services allow users to export transcripts as documents (Microsoft Word, Text File, and, for Sonix, PDF) and captions (SubRip and WebVTT). However, Sonix offers some unique export options. When exporting captions, Rev automatically formats the length and line breaks of the subtitles and produces reliable results. Sonix, on the other hand, provides several options for formatting captions including character length, time duration, number of lines, and whether or not to include speaker names. The downside was that using the default settings for caption exporting in Sonix led to cluttered, clunky results, but the additional options would be useful for those looking for more control of how their captions are displayed.
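To illustrate what those formatting knobs control, here is a rough sketch of generating SubRip (.srt) captions from timed cues with a per-line character limit, the same kind of option Sonix exposes. The cue text and timings are made up for the example:

```python
import textwrap

def to_srt(cues, max_chars=42):
    """Format (start_sec, end_sec, text) tuples as a SubRip (.srt) string,
    wrapping each cue's text to at most max_chars characters per line."""

    def timestamp(sec):
        # SubRip timestamps look like HH:MM:SS,mmm
        h, rem = divmod(sec, 3600)
        m, s = divmod(rem, 60)
        ms = round((sec - int(sec)) * 1000)
        return f"{int(h):02}:{int(m):02}:{int(s):02},{ms:03}"

    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        lines = textwrap.wrap(text, max_chars)
        blocks.append(f"{i}\n{timestamp(start)} --> {timestamp(end)}\n"
                      + "\n".join(lines))
    return "\n\n".join(blocks) + "\n"

print(to_srt([
    (0.0, 2.5, "Both services allow users to export transcripts as captions."),
    (2.5, 5.0, "Sonix adds options for character length and line count."),
]))
```

Changing `max_chars` (or adding a maximum line count per cue) is exactly the kind of control that separates clean, readable captions from the cluttered defaults we saw.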

Sonix also offers two completely different export options. First, users can export audio or video files that include only the highlighted sections of the transcript, or that exclude strikethroughs. Basically, you can produce a very basic audio or video edit by editing the transcript text. Unfortunately, it does not allow users to move or rearrange sections of media, and the edits are all hard cuts, so it’s a rather blunt instrument, but it could be useful for rough cuts or for those with minimal editing skills.
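Conceptually, this kind of text-based editing just maps highlighted words (each carrying its own timestamps) to media segments to keep, joined with hard cuts. A minimal sketch of that mapping, with hypothetical word-level data:

```python
# Hypothetical word-level transcript: (word, start_sec, end_sec, highlighted)
words = [
    ("welcome", 0.0, 0.4, True),
    ("to",      0.4, 0.5, True),
    ("the",     0.5, 0.6, True),
    ("um",      0.6, 0.9, False),  # struck through / not highlighted
    ("podcast", 0.9, 1.5, True),
]

def keep_ranges(words):
    """Merge consecutive highlighted words into (start, end) ranges to keep."""
    ranges = []
    for word, start, end, keep in words:
        if not keep:
            continue
        if ranges and abs(ranges[-1][1] - start) < 1e-6:
            ranges[-1][1] = end  # word continues the current range
        else:
            ranges.append([start, end])  # hard cut: start a new range
    return [tuple(r) for r in ranges]

print(keep_ranges(words))  # → [(0.0, 0.6), (0.9, 1.5)]
```

Because the ranges come out in transcript order and are simply butted together, you get exactly the behavior described above: no rearranging, and every edit is a hard cut.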

Sonix also provides the option of exporting XML files compatible with Adobe Audition, Adobe Premiere, and Final Cut Pro. When imported into the editing software, these work like edit decision lists that automatically cut and label media in a timeline. We tried this with two different audio files intended for a podcast, and it worked great. This has the potential to be useful for more complicated and collaborative post-production workflows, an online equivalent of an old-school “paper edit.” Again, the big drawback here is the inability to rearrange the text. It could save time when cutting down raw footage, but a true paper edit would still require editing the transcript with timecode in a word processing program.

And the winner is…

Everyone. Both Rev and Sonix offer viable and cost-effective alternatives to traditional human transcription. Though there is an obvious compromise in accuracy, it is much less severe than you might expect. Official transcripts or captions could be produced with some light editing, and, from a media production perspective, quick and cheap transcripts can be an extremely useful tool in the post-production process. Whether trying a new service or sticking with a familiar one, users can be confident they’re getting high-quality machine transcription from either company. As more features are added and improved, like those offered by Sonix, these services could become helpful tools throughout the production process.