October 2018 Adobe Creative Cloud Update Part 1: Adobe Premiere Pro

It’s fall, pumpkin spice is in the air, Christmas decorations are going up, and software giant Adobe has just released updates to its entire Creative Cloud suite of applications. Because the updates are so extensive, I’ve decided to do a multi-part series of DDMC entries that covers the new changes in detail for Premiere Pro, After Effects, Photoshop/Lightroom, and a new app, Premiere Rush. I just downloaded Rush to my phone today to put it through its paces, so I’m saving that application for last, but my first rundown of Premiere Pro’s new features is ready to go!

END-TO-END VR 180

Premiere Pro now supports full native editing of VR 180 content, with the addition of a virtual screening room for collaboration. Specific focal points can be tagged and identified the same way you would in your boring 2D content. Before, you had to remove your headset to do any tagging, but now you can keep your HMD (head-mounted display) on and keep cutting. I’m just getting my feet wet with VR, but I can see how this could revolutionize things for production houses integrating VR into their workflow. Combined with Premiere Pro’s robust networking features and the symbiotic nature of the Adobe suite of applications, this seems like a nice way to work on VR projects with a larger collaborative scope.

DISPLAY COLOR MANAGEMENT

Adobe has integrated a smart new feature that takes some of the guesswork out of setting up your editing station’s color space. Premiere Pro can now detect the color space of your particular monitor and adjust itself accordingly to compensate for color irregularities across the suite. Red stays red whether it’s displayed in Premiere Pro, After Effects, or Photoshop!

INTELLIGENT AUDIO CLEANUP

Premiere Pro can now scan your audio and clean it up using two new sliders in the Essential Sound panel. DeNoise and DeReverb let you remove background noise and reverb from your sound, respectively. Is it a replacement for quality sound capture on site? No. But it does add an extra level of simplicity that I’ve only experienced in Final Cut Pro, so I’m happy about this feature.

PERFORMANCE IMPROVEMENTS

Premiere Pro is faster all around, but if you’re cutting on a Mac you should see a notable boost thanks to new hardware-based encoding and decoding for the H.264 and HEVC codecs. Less rendering time is better rendering time.

SELECTIVE COLOR GRADING

The Lumetri Color tools and grades are becoming more fine-tuned. This is a welcome addition, as Adobe discontinued SpeedGrade and folded it into Premiere Pro a while ago. All your favorite Lumetri looks remain, but video can now be adjusted to fit the color space of any still photo or swatch you like. Colors can also be isolated and targeted for adjustment, which is handy if you want to change a jacket, eye, or sky color.

EXPANDED FORMAT SUPPORT

Premiere Pro now supports ARRI Alexa LF, Sony VENICE v2, and the HEIF (HEIC) capture format used by the iPhone 8 and iPhone X.

DATA DRIVEN INFOGRAPHICS

Because of the nature of my work as a videographer for an institution of higher education, this feature has me the most excited. Instructional designers are constantly looking for ways to “jazz up” their boring tables into something visually engaging. Now there is a whole slew of visual options with data-driven infographics. All you have to provide is the data in spreadsheet form; then you can drag and drop it onto one of the many elegant templates to build lower thirds, animated pie charts, and more. It’s a really cool feature I plan to put through its paces on a few projects in place of floating prefabricated pie charts.
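
As a rough illustration of what “data in spreadsheet form” means here, the sketch below writes a small CSV of the sort you could drop onto a data-driven Motion Graphics template. The column names and values are purely hypothetical; the fields a real template expects depend on the template itself.

```python
# Hypothetical sample data for a data-driven infographic template.
# The column names ("Category", "Value") are placeholders, not fields
# required by any specific Premiere Pro template.
import csv

rows = [
    {"Category": "Fall enrollment", "Value": 1250},
    {"Category": "Spring enrollment", "Value": 1410},
    {"Category": "Summer enrollment", "Value": 630},
]

with open("enrollment_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Category", "Value"])
    writer.writeheader()
    writer.writerows(rows)
```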

All these new additions make Adobe Premiere Pro a solid one-stop editing platform, but combined with the rest of the Adobe suite, one can easily see the endless pool of creative options that makes it an industry standard!

Stay tuned for Part II: Premiere Rush!

Direct Share With Zoom Rooms

Let’s face it… wirelessly sharing content can be a pain. Even in a very basic AV setup, such as an Apple TV, you need to know the name of the Apple TV (which is sometimes named very differently from the room you’re in), you join, and you are then asked to enter a passcode. Sure, it’s only a minute or so, assuming you’re familiar with the setup, but it’s time spent not focusing on your presentation.

“One touch” connection solutions are available from a number of vendors, but they usually require downloading special software to enable that feature, or that the “one touch” is only one touch after an extensive setup. Why can’t it just be built into the core application? Why does it need to be so difficult?

Surprise: Zoom Rooms has that feature built in. Basically, you enter the physical room, open the Zoom app, press “Share screen,” and presto, you’re connected to the Zoom Room and sharing content in seconds. It’s that easy. No need to enter a ten-digit code or find the Zoom Room in AirPlay… it just works.

This is really intended for when you simply want to share your screen, as you lose some of Zoom’s more advanced functionality in “quick connect” mode. For example, when you use “Share screen” you lose the ability to annotate, pause the share, or select only specific applications to share. That said, when additional functionality was needed (say I wanted to add a remote participant or use more advanced features), I could simply invite myself to the Zoom Room, and Zoom handled the transition gracefully.

It’s the little things that will drive user adoption, and this is a nice feature.

Logitech Spotlight – The Evolution of the Pointer

I’m sure if we went far enough back in time, there was once a person in a cave teaching tribal hunting techniques while pointing at a cave drawing with a stick. And thus, the “pointer” was born. Sure, the stick became more uniform; it even evolved to collapse and fit neatly in a pocket protector! But it was still, in essence, a stick. As classroom technology advanced, traditional pointers simply weren’t long enough to keep up with ever-increasing screen sizes. Also, pointers required that the pointee (I’m not sure if that’s a word) be within three or four feet of the content. So, in the late 1990s the laser pointer became, and continues to be, all the rage for presenters.

But as with all things AV, here comes that pesky technology to throw a wrench into our perfect laser-stick device. While the laser pointer worked wonderfully for people physically in the room, it didn’t allow remote participants (via WebEx, Skype, Zoom, etc.) to join in the pointing fun. Remote participants were usually reduced to looking at the postage-stamp-sized video feed, occasionally seeing a bit of red flash on the screen. Even worse, the booming voice of “we can’t see what you’re pointing at!” never blended well with a well-choreographed presentation.

Enter the Logitech Spotlight. When I opened the package, I really didn’t understand what it was. I was expecting an elegant upgrade to their previous laser pointers, but it clearly didn’t have a laser. I thought, “Gee, that’s an expensive PowerPoint forward/reverse device.” Clearly, I had no clue as to the power that was well masked in this seemingly benign device. I connected the fob to my computer, launched an Apple Keynote presentation, and nothing happened. Hmmm. So I broke down and read the instructions (to be clear, there was a VERY clear sticker on the device saying “Download software to activate highlight, magnify, and timer”… but who reads stickers these days?). After installing the Spotlight software from Logitech’s website and charging the remote via USB-C (my MacBook Pro’s power supply worked like a charm), I was still a little stumped. OK, so I went into the software and programmed the forward and reverse buttons, and boom, we had a very nice little remote. But what was this top button that looked like a laser pointer button? I tapped it, and nothing really happened. So I pressed it for a second or so, and the spotlight feature appeared on the screen. “Oh, cool, so it’s like a virtual ‘laser’ pointer, but without the laser.” Then it hit me… this is actually a big deal.

By virtualizing the pointer, individuals viewing the presentation remotely can also follow along without having to ask the above-mentioned “So, what are you pointing at?” The implications are wide-reaching in higher education. From Panopto classroom recordings to WebEx and Zoom meetings, even Skype calls or YouTube videos can take advantage of this type of pointer when sharing content. Yes, many of these platforms have built-in virtual pointers, but that requires the presenter to be tied to the computer’s mouse. Even if you have a wireless mouse, you’re still tethered to a desktop or table surface. The Logitech Spotlight frees the presenter to walk anywhere in the room. The software is very customizable, so in addition to the very cool spotlight, you can magnify an area or set the device to work more like a traditional laser pointer.

But wait, there’s more!!! The Logitech Spotlight also offers a timer that vibrates the remote to help keep your presentation on track time-wise. This feature is a big bonus for folks who give time-sensitive presentations. Finally, the remote can act as a wireless mouse for the basic button pushing you might need during presentations (think start/stop a video, close a window). It’s handy for that, but don’t throw your wireless mouse away: it’s really only intended for basic control, and it’s only as good as how steady your hand is.

If I have a criticism, it would be that the remote would infrequently “jitter” (not hit the exact spot I wanted) or momentarily lose connectivity. This may be due to my penchant for upgrading my Mac to the latest and greatest OS before considering how it may impact the applications I use. Still, I found the device to be game-changing if you live and die by the pointer.


Whitlock VIBE Roadshow – Durham 2018

If you don’t have three (plus… plus… plus…) days to dedicate to InfoComm, Whitlock’s VIBE is a great alternative for those looking for a single-day, localized tradeshow with a number of the big AV players in attendance (Sony, Shure, Microsoft, Biamp, Crestron, Logitech, Panasonic, Epson, HARMAN, etc., etc.). Sure, you don’t get the complete InfoComm experience with the massive booths, bags and bags of swag, “top secret” back-room roadmaps, and crazy after-parties (from what I hear), but what VIBE lacks in scale, it makes up for in generous face time with your local and regional sales representatives and, most importantly, time interacting with your regional AV counterparts.

I started off my day by checking in and hitting the booths to see who was in attendance. It doesn’t seem to matter how actively I monitor technology blogs or company newsletters, I always see something while touring the booths that makes me go, “wow… that’s cool.” VIBE was no exception. While much of the coolness was more evolutionary than revolutionary, there were a few common trends. First, EVERYONE seems to have a small huddle-room solution. Everyone. It’s clear that segment is the low-hanging fruit for many hardware manufacturers, and they are trying to leverage what they do best in a sea of options and competition. The core advantage is that there is now a price point for just about every small AV install, and many of the options are dead simple. Second, if you aren’t on the AV-over-IP train (or at least have your ticket), you’re going to be left behind. For institutions where the AV department isn’t under IT, things could get a little ugly. If nothing else, NOW is the time to start having conversations with your networking folks about the future of AV. Third, hardware manufacturers are finally getting around to providing user-friendly software for AV technicians to monitor their equipment. Sure, software engineers and more advanced programmers have had this for years, but it’s finally trickling down to mid-level AV technicians, and it offers invaluable insight into the hardware that’s deployed.

The highlight of the day was the VIBE Higher Ed Roundtable. This invite-only session was an opportunity for regional higher-education institutions to discuss “the top issues facing Higher Ed, including remote learning, BYOD, security, technology adoption, AI and Cloud solution management.” Whitlock would pose a topic, and the AV folks from the various universities would provide their input or thoughts on the subject. The common theme seemed to be that while few organizations are pushing the boundaries of AV, there is a subtle consolidation of technologies happening throughout the higher-education enterprise. In essence, fewer services are providing more capabilities.

If you missed Whitlock’s VIBE show, make sure to catch it next year. It’s a great event.

Kaptivo

Let’s face it… humans like articulating concepts by drawing on a wall. This behavior dates back over 64,000 years to some of the first cave paintings. While we’ve improved on the concept over the years, transitioning to clay tablets and eventually blackboards and whiteboards, the basic idea has remained the same. Why do people like chalkboards and whiteboards? Simple: it’s a system you don’t need to learn (or that you learned as a child), you can quickly add, adjust, and erase content, it’s multi-user, it doesn’t require power, it never needs a firmware or operating system update, and it lasts for years. While I’ll avoid the grand “chalkboard vs. whiteboard” debate, we can all agree that the two communication systems are nearly identical and very effective in teaching environments. But as classrooms transition from traditional learning environments (one professor teaching a small-to-medium number of students in a single classroom) to distance education and active learning environments, compounded by our rapid shift to digital platforms… the whiteboard has had a difficult time keeping up. There have been many (failed) attempts at digitizing the whiteboard; just check eBay. Most failed for a few key reasons: they were expensive, they required the user to learn a new system, they didn’t interface well with other technologies… oh, and did I mention that they were expensive?

Enter Kaptivo, a “short throw” webcam-based platform for capturing and sharing whiteboard content. During our testing (Panopto sample), we found that the device was capable of capturing the whiteboard image, cleaning it up with a bit of Kaptivo processing magic, and converting the content into an HDMI-friendly format. The power of Kaptivo is in its simplicity. From a faculty/staff/student perspective, you don’t need to learn anything new… just write on the wall. But that image can now be shared with our lecture capture system or any AV system you can think of (WebEx, Skype, Facebook, YouTube, etc.). It’s also worth noting that Kaptivo can share the above content through its own Kaptivo software. While we didn’t specifically test that piece, it looked to be an elegant solution for organizations with limited resources.
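
To give a feel for what that “processing magic” generally involves, whiteboard captures are typically flattened so uneven lighting and glare fall away and marker strokes stand out. The sketch below is purely an illustration of that general technique, not Kaptivo’s actual algorithm; it uses OpenCV and a placeholder filename.

```python
# Rough illustration of whiteboard image cleanup (not Kaptivo's algorithm).
# Requires opencv-python; "board.jpg" is a placeholder input file.
import cv2

img = cv2.imread("board.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Estimate the board background with a heavy blur, then divide it out so
# uneven lighting and glare flatten toward white while strokes stay dark.
background = cv2.GaussianBlur(gray, (55, 55), 0)
cleaned = cv2.divide(gray, background, scale=255)

cv2.imwrite("board_cleaned.jpg", cleaned)
```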

The gotchas: every new or interesting technology has a few. First, Kaptivo currently only works with whiteboards (sorry, chalkboard fans). Also, there isn’t any way to daisy-chain Kaptivo units or “stitch” multiple units together for longer whiteboards (not to mention how you would share such content). Finally, the maximum whiteboard size is currently 6′ x 4′, which isn’t all that big in a classroom environment.

At the end of the day, I could see this unit working well in a number of small collaborative learning environments, flipped classrooms, and active learning spaces. We received a pre-production unit, so I’m anxious to see what the final product looks like and whether some of the above-mentioned limitations can be overcome. Overall, it’s a very slick device.

Elgato Cam Link

I’m always a little surprised when an inexpensive piece of AV gear that I’ve been secretly lusting after actually delivers on the audio and video goodness I seek. Elgato was nice enough to send us a demo unit of the Cam Link that I mentioned in a previous post. The Elgato Cam Link has one core function, and if you understand what it’s designed to do, it performs that function exceptionally well. Oh, and did I mention it’s cheap?

Every AV technician has been asked, “Why can’t I use my fancy new [insert $500+ camcorder or DSLR with HDMI output] with WebEx, Facebook, YouTube, Skype, GoToMeeting, etc.?” Simple… because you need an HDMI-to-USB converter… and it’s not as simple as adding a $4 cable from Monoprice (for now). Until the Cam Link arrived on the market, that conversion process was either rather expensive ($300+) or complicated by the requirement of special drivers or software. The Cam Link is considerably more consumer-focused in both price and ease of use.

So, what does Elgato’s device do?
In essence, the Cam Link takes an HDMI (High-Definition Multimedia Interface) signal and converts it to a UVC (USB Video Class) friendly output. So, you can connect the HDMI output from a higher-end video camera, AV system, gaming system, laptop, etc. to this device, and the Cam Link will output the video and audio over USB to a computer. And because the Cam Link is UVC compliant, it works without additional drivers on Windows, Mac, and Linux. Now, this functionality has been around for some time, but at a price… both financial and technical.
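
Because the device simply enumerates as a UVC webcam, any software that can talk to a webcam can read from it. Here’s a minimal sketch in Python with OpenCV; the device index of 0 and the 1080p resolution are assumptions that may differ on your machine.

```python
# Minimal preview of a UVC capture device (e.g., an HDMI-to-USB converter
# that presents itself as a webcam). Requires opencv-python.
import cv2

cap = cv2.VideoCapture(0)                  # device index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)    # request 1080p if the source allows
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    ok, frame = cap.read()                 # frames come from the HDMI source
    if not ok:
        break
    cv2.imshow("UVC preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```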

There have been a number of articles written about this device from a consumer electronics and gaming perspective, so I’ll focus on how the Cam Link could be used in higher education. I see this device being ideal for:

  • Improving your WebEx, Facebook Live, YouTube, Skype Sessions
    If you are looking to upgrade from a webcam, this is your device. You’ll be able to connect many consumer and professional cameras to the Elgato Cam Link (anything from a GoPro to a $5K+ Sony camera should work wonderfully). You will immediately notice an image quality improvement. Also, depending upon your camera, it may improve and/or simplify your audio capture options (I’ll leave that for another post).
  • Simple Video Conversion
    Yep, occasionally AV techs are asked to make backups (with permission) of VHS cassettes for use in a classroom. If you can find a VHS player with HDMI out (a few now have 1080p upscaling built in), you may be able to throw away that old clunky capture device for good!
  • Content Capture
    I actually used the Cam Link (plus a dongle) to capture an iPad and iPhone signal to my MacBook Pro for demo purposes. While not the primary reason to buy the device, it was nice that it had a number of alternative uses. The Cam Link could also capture content from a document camera, microscope, gaming system, etc.
  • AV Testing
    Many AV technicians regularly find themselves needing to connect an AV system for testing (“Are we receiving a signal?”). Lugging around a monitor and looking for power isn’t awesome. So, a technician could simply use the Cam Link, connected to their laptop, to check an HDMI output.

I should mention that the device is dangerously ultra-portable, resembling an oversized USB thumb drive, so the “walk off” factor is high. It has one HDMI input, one USB output, and a single LED that indicates it’s receiving power over USB (so no external power is needed). Also, the device only works with a few of the most common HDMI resolutions, so not EVERY camera that supports HDMI will work with it. I’d say that 80%+ of all video cameras should be compatible, though some may require that you adjust the camera’s HDMI output. Finally, this device won’t capture High-bandwidth Digital Content Protection (HDCP) protected content, so if you think this is the perfect device for copying Blu-ray movies (or even a MacBook Pro’s output in some situations)… think again. You’ll simply get a blank screen.

The Fine Print
The only gotcha I noticed is that the Elgato Cam Link crashed once (technically, it lost signal) in 4+ hours of testing; a quick reboot resolved the issue. Not a critical concern, but something to consider.

Wolfvision Cynap


First announced at InfoComm 2015, the Wolfvision Cynap continues to add and enhance core features to adapt to the changing wireless connectivity landscape. To categorize the Cynap as a wireless presentation and collaboration device is a disservice to the robust capabilities of what Wolfvision has created. The Cynap also acts as a media player, provides web conferencing via Skype for Business, offers app-free, dongle-free mirroring, streams mixed content to services like YouTube and Facebook, and has robust recording capabilities. It also offers basic whiteboard and annotation functionality. Finally, the Cynap can receive content from two HDMI inputs, or you can stream content to the device as additional inputs (think digital signage), and that’s just the tip of the iceberg.

It would take five DDMC posts to cover the core features of the Cynap. Unfortunately, that brings me to the core “gotcha” of the system: with such an advanced piece of hardware comes complexity (aka feature fatigue) and cost. The device is outside the budget of a small/medium-sized huddle-room upgrade. Also, the device would need to live in an environment where the user base is willing to self-train on its functionality, or with an on-site trainer to teach and evangelize the product. That said, if you found the right group of users who could take advantage of the Cynap’s vast capabilities, it could be an incredibly powerful tool.


Vaddio Visits the TEC

Earlier this month, Vaddio (a division of Milestone AV Technologies as of April 2016) visited the TEC to provide an in-depth technical overview of their new RoboTRAK camera tracking system. The system, used in combination with many existing Vaddio cameras, works by tracking a lanyard worn by the subject. When the lanyard is worn, the camera pans and tilts to follow the subject based on a wide range of technician-configurable variables. Setup seemed straightforward, and the Vaddio team had a functioning demo unit configured in under thirty minutes. The base tracking behavior seemed smooth and consistent. Beyond the standard setup, the RoboTRAK also allows classroom AV integration to further expand the in-room, user-serviceable configuration. For example, with a bit of code added to your classroom AV system, tracking could be disabled by default, requiring the guest to turn it on for their session (a rough sketch of that idea follows below). Also, the technician could add “scenes” to the AV system to provide unique tracking capabilities or to interface directly with the room. Needless to say, it’s very configurable.
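
Here is what that bit of control code might look like in spirit. This is only a hedged sketch: the IP address, port, and command strings are hypothetical placeholders, not documented Vaddio/RoboTRAK commands, and a real deployment would follow the vendor’s control protocol documentation.

```python
# Hypothetical AV-control routine that defaults tracking to off and lets a
# touch-panel button turn it on. Address, port, and command strings below
# are placeholders, not Vaddio's actual API.
import socket

CAMERA_IP = "192.168.1.50"   # placeholder camera address
CONTROL_PORT = 23            # many AV devices expose a Telnet-style control port

def send_command(command: str) -> None:
    """Open a short-lived connection and send a single control command."""
    with socket.create_connection((CAMERA_IP, CONTROL_PORT), timeout=2) as conn:
        conn.sendall((command + "\r\n").encode("ascii"))

def set_tracking(enabled: bool) -> None:
    # Placeholder command strings -- swap in the vendor's documented syntax.
    send_command("tracking on" if enabled else "tracking off")

# Default the room to "not tracking"; a panel button calls set_tracking(True)
# when a guest presenter wants the camera to follow them.
set_tracking(False)
```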

Vaddio also showcased the ConferenceSHOT AV, a comprehensive camera, speaker, and microphone huddle-room AV package. The system supports two mic inputs, delivers good video quality, and includes a surprisingly high-quality speaker that can be used in combination with a monitor or TV.

Finally, Vaddio provided a deep dive on the streaming capabilities of many of these devices and how they can be configured to meet a wide range of needs.


Say Hello to Solaborate’s Hello

Oh Kickstarter… how you love to torment us.


Most AV technicians know that the world of software-based video conferencing is rapidly expanding. Every tech company seems to have some form of home-grown video conferencing. Google has Hangouts and Duo, Microsoft (now) has Skype and Lync… I mean Skype for Business, Facebook has Messenger with video calling, Adobe has Connect, Cisco has WebEx, Apple has FaceTime, and that’s just the short list of conferencing connections we are asked to support.

Enter Solaborate
Solaborate has launched an interesting Kickstarter project called Hello. Basically, Hello acts as an endpoint for their Solaborate service, providing:

  • Video conferencing
  • Wireless screen sharing
  • Live broadcasting
  • Security surveillance with motion detection and more.

What caught my attention is that Solaborate plans to add Skype, Messenger, Hangouts, and WebEx support if they reach their $300,000 stretch goal. Considering they currently have $225,905 pledged against their original goal of $30,000, with 16 days to go, they may just make it. It’s important to note that this is a Kickstarter project… so take some or most of this with a grain of salt. But if Hello lives up to the hype, it could be a very interesting device for small meeting spaces.

Follow Solaborate’s Hello Kickstarter at: https://goo.gl/3QwB55

Making Remote Interviews Face-to-Face

Often in our course production, instructors wish to include video interviews with guests as part of their materials. While doing this in the studio is straightforward, these guests are often not able to join us on campus. In those cases, we rely on tools like Google Hangouts or Skype to facilitate the conversation, which, while convenient, adds some complexity to making a great video. For those with access to DukeWiki, we’ve documented some of that process here.

For a recent interview, I wanted to push our usual process further. If you’re familiar with Errol Morris documentaries, you’ll know his interview subjects talk straight into the camera. Instead of the interviewer sitting off-camera, he uses a teleprompter-like device that displays a live video feed of the interviewer in front of the camera lens, which gives a deeper level of intimacy to the interview. With the tools we had, I wanted to see if we could do something similar.

Here’s the set-up I ended up with. We have our usual 4K camera/iPad teleprompter set-up, only I’ve swapped the iPad out for one of our LED monitors to display the video feed coming from a Google Hangout hosted on my MacBook Pro. An iPad would’ve been ideal, but there’s no way to use an external webcam (here mounted right on top of the teleprompter glass), and the built-in cameras simply wouldn’t face the right way for our interviewee to see us on his side of the Hangout. We filmed the interviewers in our studio, and they looked straight ahead at the camera the whole time as intended. This got me halfway to my goal.

The other piece was capturing the video of our interview subject. This is always tricky, since we can’t easily control the set-up on the other end of the call and there’s not always time to fully test things out beforehand. The ideal scenario is that the interviewee captures themselves locally with Camtasia, QuickTime, or similar software while they’re in the Google Hangout. A lot can go wrong there, so I wanted to make sure we captured the video feed on our end as well. Between the MacBook Pro and the video monitor, we ran the MacBook Pro’s output through a Ninja capture device, which had a speaker hooked up to it so our interviewers could hear our subject. While the video quality would be reduced due to streaming compression, at least we had it if something went wrong with plan A.

With this much technology working in tandem, there are bound to be hiccups. In this case, the issue that got me was a bit of audio feedback. In future set-ups, I would want to swap the speaker for a speakerphone that would eliminate this. There were some instances where I re-recorded elements in the studio after the call ended so that I could have a cleaner take of the interviewers asking a question. I’d also encourage the interview subject to use a similar device, or iPhone-style earbuds, to get around the same issue on the other end of the call.

Altogether, it came out pretty well: