Video Render Tests on MacBook Pro – M2 Max Chip

With our team due for an upgrade to our video editing workstations, we decided to try out the new MacBook Pro and compare it to our current iMacs (2019 and 2020 models). Individual specs for each tested computer are listed below:

MacBook Pro Premiere/Final Cut exports tested with:
MacBook Pro with M2 Max Chip
Base CPU clock speed – 3.5 GHz
12-Core CPU
38-Core GPU
96GB Unified Memory
16-core Neural Engine

iMac Premiere exports tested with:
iMac (Retina 5K, 27-inch 2019)
Processor – 3.6 GHz 8-Core Intel Core i9
Memory – 64GB 2667 MHz DDR4
Graphics – Radeon Pro 580X 8 GB

iMac Final Cut exports tested with:
iMac (Retina 5K, 27-inch, 2020)
Processor – 3.6 GHz 10-core Intel Core i9
Memory – 64GB 2667 MHz DDR4
Graphics – AMD Radeon Pro 5700 8 GB

Though the difference in RAM makes the comparison a bit apples-and-oranges, we found the overall performance to be a significant improvement even with that caveat in mind. Our primary evaluation criterion was render time in both Final Cut Pro X and Adobe Premiere Pro.

In each scenario, we exported the same 10-minute 4K clip to a 1080p file using the H.264 codec.

Project file on an external hard drive:
Exporting from FCPX to Desktop
MacBook – 3:23
iMac – 7:12

Exporting from Premiere to Hard Drive
MacBook – 1:31
iMac – 5:25

Project file on network attached storage:
Exporting from FCPX to Desktop
MacBook – 5:57
iMac – 7:12

Exporting from Premiere to NAS
MacBook – 2:10
iMac – 4:04
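
For a rough point of comparison outside of Premiere or Final Cut, the same kind of export can be timed on the command line. Below is a minimal sketch that downscales a 4K source to 1080p H.264 with ffmpeg and reports the wall-clock time; the file names and encoder settings are illustrative assumptions, not the presets the editing apps used.

# Time a 4K-to-1080p H.264 transcode as a rough, NLE-independent benchmark.
# SOURCE/OUTPUT names and the encoder settings are assumptions for illustration.
import subprocess
import time

SOURCE = "test-clip-4k.mov"      # hypothetical 10-minute 4K source
OUTPUT = "test-clip-1080p.mp4"

start = time.time()
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", SOURCE,
        "-vf", "scale=1920:1080",   # downscale to 1080p
        "-c:v", "libx264",          # H.264
        "-preset", "medium",
        "-crf", "20",
        "-c:a", "aac",
        OUTPUT,
    ],
    check=True,
)
minutes, seconds = divmod(time.time() - start, 60)
print(f"Export took {int(minutes)}:{seconds:04.1f}")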

Taking a look at Activity Monitor, we found the following differences in CPU/GPU usage while exporting from Premiere with the project on network attached storage:

MacBook
CPU% average was around 145%
GPU% average was around 85%

iMac
CPU% average was around 190%
GPU% average was around 98%
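
If you want something more repeatable than watching Activity Monitor, the CPU side of these numbers can be sampled from a script with psutil, which uses the same convention (100% equals one fully busy core, so values above 100% mean multiple cores are in use). GPU utilization isn’t exposed this way, so we still read that from Activity Monitor. A minimal sketch, with the process name as an assumption:

# Average the CPU% of an exporting app over a window of samples.
# The process name below is an assumption for illustration.
import time
import psutil

def average_cpu_percent(process_name, samples=30, interval=1.0):
    """Average CPU% of the first process whose name contains process_name."""
    proc = next(
        p for p in psutil.process_iter(["name"])
        if process_name.lower() in (p.info["name"] or "").lower()
    )
    proc.cpu_percent(None)              # prime the counter
    readings = []
    for _ in range(samples):
        time.sleep(interval)
        readings.append(proc.cpu_percent(None))
    return sum(readings) / len(readings)

print(f"Average CPU%: {average_cpu_percent('Adobe Premiere Pro'):.0f}%")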

Another Script-Based A/V Editing Option: The Camtasia-Audiate Integration

Descript, a new video and audio creation and editing tool, has been making waves recently on campus with its ability to generate a script file for your project and let you edit the video simply by making changes to the script. Among the more useful things you can do with this approach: automate the removal of awkward “um”s and pauses, adjust the speed and pacing of your project, and otherwise tighten things up quickly to make your media more listenable.
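
Neither Descript nor TechSmith documents exactly how this works under the hood, but the core idea (cut the audio wherever the transcript flags a filler word) is easy to sketch. The snippet below assumes you already have word-level timestamps from some speech-to-text pass; the transcript data, file names, and use of pydub are all illustrative, not either product’s actual API.

# Conceptual sketch of script-driven editing: given word-level timestamps,
# rebuild the audio without the filler words. Real tools also preserve
# natural pauses and smooth the cut points, which this skips for brevity.
from pydub import AudioSegment   # pip install pydub (requires ffmpeg)

FILLERS = {"um", "uh", "er"}

# Pretend output of a speech-to-text service: (word, start_ms, end_ms)
transcript = [
    ("welcome", 0, 480),
    ("um", 480, 900),
    ("to", 900, 1100),
    ("the", 1100, 1250),
    ("course", 1250, 1900),
]

audio = AudioSegment.from_file("narration.wav")
cleaned = AudioSegment.empty()
for word, start_ms, end_ms in transcript:
    if word.lower() not in FILLERS:
        cleaned += audio[start_ms:end_ms]   # keep only the non-filler words

cleaned.export("narration-cleaned.wav", format="wav")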

Some of you might not know, however, that there is a similar approach for those who already use Camtasia. Like Descript, TechSmith’s Audiate uses cloud-based speech-to-text technology to generate a script for your audio project. And like Descript, the changes you make in Audiate (which cover many of the things you can do in Descript) are exported back to your audio file without your having to touch it in a timeline-based editor. While Audiate itself is geared toward podcasters, those working with video or screen animations can get the full package via TechSmith’s round-trip integration between Camtasia and Audiate. One cool feature of this integration we wanted to point out: if you are working with a screen animation that includes a cursor, Camtasia/Audiate animates the movement of the cursor between cut points so your viewers don’t see it jumping randomly around the screen. See below for a demo of Camtasia/Audiate in action.

Pricing for Audiate at first glance seems to be about the same as for Descript, so if you are working with video and are not already a Camtasia user, it probably makes the most sense to use Descript. However, there are discounts available for the Camtasia/Audiate package.

Since interest seems to be growing in these types of tools and workflows, we would love to hear from you if you’ve tried either of them, and we would be especially interested to hear how you think Camtasia/Audiate stacks up against Descript for your use cases.

Dynamic Video Group Overview – Production Studio in Durham

I had the pleasure of checking out a local production studio called Dynamic Video Group. For the Academic Media Production Team, this will be a great resource to point folks toward when their projects fall outside our typical purview or availability.

Their “studio | space” model allows clients to book by the hour. Selecting from a variety of backgrounds (green screen, white, brick, etc.), the client can show up with a script and/or slides in hand and work with a studio manager to record on one or more 4K cameras. The studio is equipped with a teleprompter, screen capture options, and soon a lightboard. They can also facilitate live streaming, for example to record high-quality remote interviews over Zoom. The studio can bring on freelance editors if needed, but most of their clients prefer to get the raw recorded files and handle editing on their end. Similarly, they’re in touch with graphic designers and make-up artists should the need arise. Overall, they seemed pretty flexible and adaptable to whatever you could throw at them.

With the pandemic, they’re shifting a lot of focus to virtual events, which is reflected in their virtual event studio model. Essentially, it’s an upscale Zoom room where they can bring up the grid of participants, display the chat, spotlight guests on a dedicated monitor, and so on. This all feeds into an on-site control room where they can moderate the stream, live-switch between cameras, and provide technical support. Their new HybridLink model will even allow them to bring up to four cameras on location and send the signal back to their studio control room, bypassing the need for a mobile control room setup.

If you have any questions or plan to work with Dynamic, please get in touch with us at oit-mt-info@duke.edu.

Using Adobe Premiere Rush for Simple Video Editing

In supporting DIY video creation on campus, one of the most frequent questions is how best to edit the video you’ve filmed. While Macs have iMovie built in, there is no equivalent software built into Windows. And while Final Cut Pro X and Adobe Premiere are both available at the Multimedia Project Studio, they can be overwhelming to new users. Addressing both issues is Adobe Premiere Rush, available as part of Creative Cloud. Not only is it available on Mac and PC with the same interface and user experience, it’s also available for free on Android and iOS mobile devices.

Its features and workflow are no-frills essentials. You select and import the video clips you would like to edit, rearrange and trim them on a timeline, add some graphics and transitions, then export at your resolution of choice. For instances in which you need to cut together some shots from your iPhone, or remove a section from a Zoom recording, Rush is a way to quickly make the needed edits without getting into the logistics required by more advanced software. And if you ever do get ambitious about your project, Rush allows you to bring your project into Premiere as well.

To learn more, LinkedIn Learning offers an hour-long course on the software.

PowToons Creates PowErful Security Message

Duke’s IT security offices are rolling out three new videos this fall as part of a strategic effort to expand security training for staff with access to sensitive Duke data.

The three videos, available in the Duke Learning Management System, take about 12 minutes to view and are designed to help Duke staff understand and recognize common security threats to Duke, utilize tools and techniques to reduce security risk, and understand how to protect information and report security incidents.

The training — developed by Cara Bonnett, Shelly Epps, Jay Gallman and Gaylynn Fassler — started with an initial draft of the script to make it as concise as possible. The goal was to present the content in a clear, understandable way, and to incorporate a balance of professional, relatable images that would speak to a diverse multi-generational population. The team used the Powtoon video platform, with voice-over recorded using a Blue Yeti microphone.

The first drafts of the videos were reviewed by both university and Duke Health security teams, with additional consultation with partners in branding/communications, accessibility and Learning & Organizational Development. The videos were loaded into the Duke LMS, along with a bank of questions used in a “knowledge check” required to successfully complete the training.

The team invited OIT and DHTS staff to participate in a pilot of the training and provide feedback via a short Qualtrics survey. More than 450 staff took the training, and the resulting feedback will be incorporated before rolling out the training to the broader Duke community this fall.

Gaylynn Fassler, a member of the production team

 

Video Working Group: Visual Misinformation

This month’s Duke Video Working Group topic centered on visual misinformation and the work the Duke Reporters’ Lab is doing to address a media landscape where truth is harder and harder to discern. Joel Luther showcased how schemas like ClaimReview can help create a common language for fact-checking and identifying mistruths in the media. Particularly interesting was how, using machine learning, platforms are being developed that can provide real-time automated fact-checking. Since politicians repeat themselves so often, AI models can be trained to recognize a statement as it is being said and then display previously cited sources that prove, disprove, or clarify that claim for the viewer.
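
For reference, ClaimReview is simply structured markup that a fact-check page embeds so platforms can read the claim, who checked it, and the verdict. Here is an illustrative example built in Python and serialized to JSON-LD; the field names follow the schema.org ClaimReview type, while every value is invented for the example.

# Illustrative ClaimReview record (schema.org field names, made-up values).
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/example-claim",  # hypothetical
    "datePublished": "2020-02-01",
    "claimReviewed": "Example statement made by a public figure.",
    "author": {"@type": "Organization", "name": "Example Fact-Checking Outlet"},
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Speaker"},
        "datePublished": "2020-01-15",
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly False",
    },
}

print(json.dumps(claim_review, indent=2))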

We also discussed the role of deepfakes and digital manipulation of video. Using some basic editing tools, a bad actor can distort an otherwise normal video of someone to make them appear drunk or unflattering. With more advanced tools involving machine learning, a bad actor can map a famous person’s face onto almost anyone. While this deepfake technology has not yet reached the point of being totally seamless, many universities and institutions are pursuing not only how to create the “perfect deepfake” but also how to identify them. In the meantime, this technology has only emboldened people to question the veracity of any kind of video. If any video could be fake, how will we know when something is actually real?

360 Video in 2020

Insta360 One R

We’ve been experimenting in the 360 video / VR headset space for a couple of years now, and it’s been fascinating to follow the trend in real time. In particular, we’ve been working with the Insta360 Pro and the Oculus Go headset to explore academic use cases for immersive video projects. As we start a new year, recent announcements from both Insta360 and Oculus point toward diminishing interest in this use case and in 360 video in general.

As mentioned in a recent blog post, Insta360’s new camera is the One R. It encourages you to “adapt to the action” with two ways to shoot: as a 360 cam or as a 4K 60fps wide-angle camera. It features an AI-driven tracking algorithm to automatically follow moving subjects in your shots, and its Auto Frame algorithm automatically detects which portions of a 360 shot would work best within a 16:9 frame. In almost every marketed feature, there’s a subtext of using the 360 camera as a powerful tool for outputting 16:9 video. Coming from one of the leaders in the 360 camera space, this focus isn’t particularly encouraging for the long-term prospects of 360 video consumption.

Viewing 360 video was always at its most immersive in a headset, which has proved to be one of the biggest barriers to wider adoption, since most viewers are unlikely to even own a headset, let alone find it and put it on just to watch a video. As such, the standalone $200 Oculus Go seemed a natural solution for businesses that could produce their own 360 content and simply hand an Oculus Go headset to their clients. Recently, however, Oculus dropped the Go from its Oculus for Business platform, suggesting the Oculus Quest is the best solution for most business VR needs. This development sees Oculus leaning more toward support for full virtual reality and less toward immersive 360 video playback.

While certainly not gone from the conversation, excitement about and applications for 360 video seem to have waned from a couple of years ago. We’ll continue to search for use cases and projects that show the potential of this technology, so please reach out to the DDMC if you find any exciting possibilities.

New Insta360 ONE R

Insta360 just launched its latest 360 camera, the ONE R. It’s actually a modular system rather than a single, self-contained camera. Only time will tell, but the ONE R could be an innovative approach to solving the problem of how to pack the burgeoning features we’re seeing in the action and 360 camera spaces into a workable form factor. Certainly Insta360 seems to have doubled down on using 360 as coverage for standard 16:9 action shots.

The ONE R starts with a battery base and a touch screen that sits on top (it can be installed facing forward or backward depending on the use case) next to an empty slot that holds one of the following modules:

  • A 5.7K 360 camera
  • A 4K action camera that records at 60fps in 4K and 200fps in 1080p
  • A 5.3K wide-angle (14.4mm equivalent) mod with a 1-inch sensor, co-developed with camera company Leica (30fps at 5.3K, 60fps at 4K, and 120fps at 1080p)


Key features include:

  • Insta360’s FlowState stabilization is a key part of all three modules.
  • Waterproof to 16 feet, despite the modular design
  • Aerial mod that makes it possible to hide your drone from footage
  • External mic support
  • Various remote control options, including Apple Watch, voice, and a GPS enabled smart remote
  • Selfie stick
  • Motion tracking to lock in on subjects
  • Tons of software/post-production options like bullet time, time lapse, slo-mo, etc.

We’re not seeing a ton of immediate academic use cases for features like these, but we’ll certainly keep the ONE R in mind if the right project arises.

 

Behind the Scenes of “Posters, Actually”

This year, I had the privilege of working with Mark Delong to bring his annual poster symposium deadline video to life. You can watch the whole video here: https://youtu.be/OGDSXK5crd8

Mark had a particularly ambitious vision for this year’s video, so I thought it would be worthwhile to discuss our creative process and how we tackled various production challenges.

We began development in October, when Mark provided a ten-page script for the project, with multiple scenes and characters. More than just a simple song parody, he envisioned what amounted to a short film – one that matched, scene for scene, the Billy Mack plotline from the 2003 movie Love Actually. While we would eventually narrow the scope of the script, it was clear early on that I would need to ensure the production value matched Mark’s cinematic vision. Among other things, this included filming for a wider aspect ratio (2.55:1 versus the typical 16:9), using our DSLR for better depth of field, and obtaining a camera stabilizer so I could add some movement to the shots.

The first two things were relatively straightforward: I’d use our Sony aIII to film in 4K and crop the video to the desired aspect ratio. We didn’t have a stabilizer, so I did a little research and our team ended up purchasing the Zhiyun Weebill Lab package. In this review post, I go into more detail about our experience using it. Having not had the chance to work with a gimbal like this before, I enjoyed the opportunity to experiment with the new tool.
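
The crop itself is simple arithmetic: keep the full width of the frame and trim the height until the ratio reaches 2.55:1. A quick sketch, assuming a standard UHD 3840×2160 frame:

# Compute the 2.55:1 letterbox crop from a UHD (3840x2160) frame.
def crop_height(width, target_ratio):
    """Height that hits the target aspect ratio at the given width, rounded to an even number."""
    height = round(width / target_ratio)
    return height - (height % 2)    # keep it even for video encoders

WIDTH, HEIGHT = 3840, 2160
new_height = crop_height(WIDTH, 2.55)   # -> 1506
trimmed = HEIGHT - new_height           # -> 654 pixels total
print(f"Crop to {WIDTH}x{new_height}; trim {trimmed // 2} px from top and bottom")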

Our first day of filming was at the WXDU radio station at the Rubenstein Arts Center. They were kind enough to let us use their podcast recording studio, which was the perfect set for the Tina and Tom scene. I quickly realized the first challenge of recording with the stabilizer would be capturing good audio: the size of the stabilizer simply didn’t allow me to affix a shotgun mic to my camera, and I didn’t have anyone else to work a boom mic for me. Ultimately, I decided to run two cameras – a stationary 4K Sony camcorder that would capture audio and provide some basic shot coverage – while I roamed with the stabilized DSLR. Between running two cameras, directing the performers, and making sure we captured everything we needed, I was spinning a lot of plates. To combat this, we filmed the scene multiple times to ensure we had redundant takes on every line, which provided a much-needed safety net in editing.

We filmed every other shot on green screen at the Technology Engagement Center. Though at first simpler than shooting a three-person dialogue scene, it came with its own challenges. Principally, unlike most of the green screen filming we do, the intention here was to make the performers look like they were on a real set. This meant anticipating the angle and lighting conditions of the background we’d place them on. Though it wouldn’t be seamless, the goofy nature of the video would hopefully allow us some leeway in how realistic everything needed to look. Since I was moving the camera, the hardest part was making the background move in a natural parallax behind Mark. This was easy enough when the camera stayed at the same distance, but almost impossible to get right when I moved the camera toward him. For this reason, in the poster symposium scene I faded the composited elements behind Mark to a simple gradient, justified by the dreamy premise of that part of the video.

Perhaps the biggest challenge was not related to video at all. For the song parody, we recorded using a karaoke backing track we found on YouTube. However, the track had built-in backing vocals that were almost impossible to remove. Luckily, we had our own rock star on staff, Steve Toback, who was able to create a soundalike track from scratch using GarageBand. His version ended up being so good that when we uploaded the final video to YouTube, the track triggered an automated copyright claim.

Were I to do it all over again, there are a few things I would try to do differently. While running the stabilizer, I would try to be more conscious of the camera’s autofocus, as it would sometimes focus on the microphones in front of the performer rather than on the performer. I also sometimes forgot I’d be cropping the video to a wider aspect ratio and framed the shot for a 16:9 image, so I would remind myself to shoot a little wider than I normally might. Overall, though, I’m satisfied with how everything turned out. I’m grateful for all the support during the production, particularly from Mark and Steve, without whom none of this would have been possible.

iPhone 11 Announced with Improvements to Camera

iPhone 11 Pro Camera

They say the best camera is the one you have with you. With Apple’s upgrades to the camera in the recently announced iPhone 11 series, this adage may be more true than ever.

For most of our production of online courses, we use a Sony Handycam for its versatility, and a DSLR for interviews or other beauty shots. However, in the course of filming, I often find myself reaching for my iPhone 8 to supplement that footage. For a course on Nanotechnology, I used the slo-mo feature to capture how liquid nitrogen can make everyday objects more fragile. For some behind-the-scenes b-roll, I found the built-in stabilization allowed me to capture extended tracking shots with few hiccups.

The iPhone 11’s improved camera now makes a strong case for filming on a phone in many scenarios. The Verge has a great write-up of the specifics, but the highlights to me are:

  • Wide-Angle Lens on the Base Model – I’ve often found myself in rather small settings where I simply couldn’t get far enough back with our traditional cameras to get everything I needed in one shot. Here, in lieu of investing in a dedicated wide-angle lens for the DSLR, I could try subbing in my iPhone to get the one wide shot I need.
  • Recording on Multiple Cameras on the 11 Pro – This is a great solution for when you need to shoot first and ask questions later. Though it will surely take up a lot of storage space, having more flexibility in post-production is always a good thing.
  • Audio Zoom on the 11 Pro – I always recommend that videographers using an iPhone use an external mic to capture dialogue. If this feature can isolate audio coming from a central on-camera subject, it could make impromptu video interviews much more feasible.