Biamp DDMC Session Summer 2019

It’s always nice to have a visit from Biamp in the summer. For those who aren’t in “the know,” Biamp is “a leading provider of professional AV equipment well-suited for a variety of applications, including conferencing, paging, and video,” or so says their website. In higher education, you’ll usually see their hardware tirelessly working away in a rack enclosure, receiving, processing, and outputting audio for all manner of applications. For example, Biamp can take audio, process out some of the noises we generally don’t want to hear (HVAC hum or lighting buzz), and feed it out to a wide range of devices, from Panopto to Zoom and beyond. It also applies advanced acoustic echo cancellation (AEC) and feedback suppression so the far end doesn’t hear itself echoed back and to prevent that really annoying squeal you sometimes hear when a live mic is placed too close to a speaker.

The session covered all of their new offerings, and they have a few. The highlights are:

  • SageVue 2.0 – This software will allow you to monitor your Biamp devices for uptime and to deploy firmware updates. The cost (free) is also perfect for higher education. In 2019, if you aren’t monitoring your AV hardware centrally, you’re doing it wrong.
  • Parlé microphones – After listening to feedback, Biamp has enhanced its microphone line and now offers a flatter mic with a considerably lower profile (architects will love them). For those places where hanging mics just aren’t going to work, Biamp has a solution… and it took some audio magic (additional mics) to make it happen.
  • Crowd Mics – If you’ve ever been in a 100+ seat auditorium where “mic runners” race around to capture audience questions, Crowd Mics may be for you. The system lets audience members use their own mobile phones to ask and respond to questions. It also has an interesting question-queuing system that looks like a breeze to deploy. We’ll be keeping an eye on this as it rolls out.
  • TesiraXEL – An asymmetric power amplifier from Biamp… and only Biamp could make an amp exciting in 2019. It has an interesting universal approach to deployment that may make sense for schools where “hot-swappability” is key. I’m no audio expert, but it sounded interesting.

Biamp was also kind enough to spend a little time reviewing some of our currently deployed audio systems, offering some game-changing tips and tricks to eke out better audio in our classrooms and beyond.

 

The Rise and Fall of BYOD

The bring your own device (BYOD) model has been popular for small and medium meeting and teaching spaces. With the rise of inexpensive and ultra-portable laptops and tablets, the traditional “local computer” has slowly lost favor in many spaces. The dedicated computer is expensive, requires significant maintenance, and is a prime target for malicious software. Users also generally prefer their own device, since they know the ins and outs of their preferred hardware and operating system. The BYOD model worked well when the guest was sharing a presentation or video to a local projector or monitor. But as AV systems have grown to include unified communication (UC) systems (WebEx, Zoom, Skype, etc.), the pain points of BYOD have been magnified.

First, when hosting a meeting from a BYOD device, connecting your device to a projector or monitor is usually rather straightforward, since the industry has standardized on HDMI. Yes, you may still need a dongle, but that’s an easy hurdle in 2019. But as we add UC (Zoom, for example) to the meeting, things get complicated. You now need to connect the laptop to a local USB connection (which may require yet another dongle). That USB connection may carry the video feed from the in-room camera as well as the in-room audio feed. This may not sound complicated, but those feeds may not be obvious. For example, the camera feed could be labeled Vaddio, Magewell, or Crestron. Audio can be equally difficult to pin down, with labels such as USB Audio, Matrox, or Biamp. Sure, many reading this article may be familiar with what these do… but even to a digital media engineer, these labels can mean multiple things.
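If you want to see exactly what your laptop is being handed when you plug into one of these rooms, a few lines of Python will list every audio device under the same names that show up in Zoom’s or WebEx’s device menus. This is just a quick sketch: it assumes the third-party sounddevice package (a PortAudio wrapper) is installed, it only covers the audio side, and the names it prints will vary from room to room.

    # List every audio input/output the operating system currently sees.
    # Requires: pip install sounddevice  (third-party PortAudio wrapper)
    import sounddevice as sd

    for index, device in enumerate(sd.query_devices()):
        directions = []
        if device["max_input_channels"] > 0:
            directions.append("input")
        if device["max_output_channels"] > 0:
            directions.append("output")
        # Entries like "USB Audio" or a Biamp-branded device will show up here,
        # which is exactly the guessing game described above.
        print(f"[{index}] {device['name']} ({', '.join(directions)})")

Run it before the meeting starts and that cryptic “USB Audio” or “Biamp” entry at least has some context; the video side is a similar exercise with whatever capture utility your OS provides.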

But, who cares… we’re saving money while giving maximum AV flexibility, right? Errr, not really. Yes, those with a technical understanding of how the AV system works will be able to utilize all of the audiovisual capabilities… but for the rest of the world, there might as well not be an AV system in the space. Even worse, if you’ve ever attended a meeting where it takes 10+ minutes to connect the local laptop to the correct mics, speakers, and camera, you know that lost time is lost money, compounded by every person in attendance (ten wasted minutes in a room of eight people is more than a full person-hour, every single meeting).

The Solution?
Soft codecs to the rescue! With the rise of UC soft codecs (Zoom Rooms, Microsoft Teams Rooms, BlueJeans Rooms, etc.), you can integrate a relatively inexpensive dedicated computer that is capable of performing a wide range of tasks. First, all of the in-room AV connects to the soft codec, so there’s no fumbling for dongles or puzzling over which audio input, mic, or speaker is the right one. Second, the soft codec monitors the space to ensure the hardware is functioning normally, moving local AV groups out of break/fix and into a managed model. Third, with calendar integration, you can schedule a meeting directly to a physical location. The icing on the cake is that most of these UC soft codecs offer wireless sharing… so you can toss your Apple TV, Solstice Pod, etc. out the window (OK, don’t do that… but it’s one less thing you need to buy during your next refresh). Oh, and don’t even get me started on accessibility and lecture capture!

We’re keeping a keen eye on soft codec systems as a potential replacement for traditional classroom AV systems in the mid to long term… and so should you.

2019 Lecture Capture Survey

We’re excited to announce that our 2019 Lecture Capture Survey is complete. We had a chance to take a bird’s-eye view of ten of the leading lecture capture tools and make some observations about general trends in this rapidly evolving product space.

We hope this information will be useful to you. Please feel free to reach out with any questions or comments to oit-mt-info@duke.edu.

A publicly accessible PDF version of the complete survey can be found here: https://duke.box.com/s/r50wv3sgqanxj7pq2x7xiud6vppldqfj

-OIT Media Technologies Team

Educational Video at Streaming Media West

This year, I had the opportunity to represent Duke at the Streaming Media West Conference by participating in the panel “Best Practices for Education & Training Video.” Having seen the growth and development of our online course production over the past six years, I found it fascinating to see the approaches other institutions were pursuing.

The University of Southern California has been streaming interactive lectures over Facebook Live. The approach uses a blend of green-screen lectures, interviews and discussions, and instantaneous feedback from viewers. A sample can be viewed here: https://vimeo.com/scctsi/review/300228463/c6fc3030f5. Gary San Angel, the Distance Education Specialist at USC’s Keck School of Medicine, noted that the live and interactive format significantly increased viewer engagement compared to their typical video output. Students watched more of the video and had better retention.

USC Price’s Director of Video Productions and Operations Services, Jonathan Schwartz, focused largely on his team’s live-streaming workflow. They use a mix of encoders, content delivery networks, and publishing platforms, and their commitment to production quality and value had me considering how live-streaming could be incorporated into Duke’s online course development.

While Duke has a great lecture capture system in DukeCapture, our online courses center on offline production: we have instructors set aside time to record standalone lectures in the studio for an online audience. This ensures that each video targets a specific learning objective and conveys it in a short amount of time. Live classroom recordings don’t usually lend themselves well to this priority, but both schools at USC have found ways to work around that limitation.

By working with instructors to design their classroom material with an online audience in mind, and by outfitting those classrooms’ lecture capture infrastructure with live-tracking and live-switching capabilities, we could create a workflow that eases the bottleneck of the perpetually busy professor while meeting the ever-growing demand for high-quality educational video.

Cisco Visits Duke – Meeting Recap

Last week, Cisco visited Duke University’s Technology Engagement Center (TEC) to offer an update on their Webex/codec offerings. Greg Schalmo, a Senior Collaboration Architect at Cisco, detailed the transformation Webex has undergone over the past six months. First, Webex is now “video first” (as compared to “content first”), mirroring the general trend in online conferencing. Second, the Webex application has undergone a major facelift, bringing a refreshingly clean interface that simplifies the process of starting an online meeting (thank you, Cisco!). Third, Cisco has ended some of their naming madness by folding Spark into Webex: Spark is now Webex Teams, the Cisco Spark Board is now the Webex Board, and Spark Assistant is now Webex Assistant (R.I.P. Spark!). This finally puts to rest the nagging question, “So, what’s the difference between Spark and Webex again?” Now it’s ALL Webex, but there are some nice enhancements, if you need them, in Webex Teams. Eventually, Webex and Webex Teams will be a single application where you can toggle between the two modes, but that’s a big undertaking, possibly coming in late 2019.

Cisco also detailed a few new hardware codec offerings (check out their Collaboration Device Product Matrix and the Cisco Webex Room Series for specifics). The highlight of the hardware overview was the Cisco Webex Codec Pro, a replacement for the SX80 (used in larger and/or more advanced teaching spaces), which adds additional digital inputs, more mic options, and a range of advanced features such as voice control, automatic noise suppression, and face recognition. For smaller spaces, Cisco offers the Webex Room Kit and Webex Room Kit Plus, which would work nicely in huddle rooms or small conference rooms. The one device that made me nearly fall out of my seat was the Webex Room Kit Mini. The Mini offers all the usual niceties of a Cisco codec, but it also lets you connect its camera, mic, and speakers to an external device. So it’s now possible to connect a third-party lecture capture device, or even an alternative conferencing platform, to the room. In our highly flexible teaching spaces, this is a significant enhancement worthy of note.

Blue Yeti Nano

One of the most overlooked technical aspects of in-office or at-home online teaching is audio capture. AV folks are quick to recommend $100-$200 webcams to significantly improve the video quality and flexibility of the teaching environment. But when it comes to audio, many seem content delegating sound capture to the built-in microphone of the webcam… or worse, the built-in microphone of the laptop or desktop (shiver!). The reality is that in most online teaching environments, the audio is as important as the video, if not more so. Consider this: if you are watching a do-it-yourself YouTube video and the video is “OK-ish” (good enough to follow along) but the audio is good, you are still likely to stick with it and learn from the recording. But if the video is amazing and the audio is poor, it doesn’t take long before you move on to the next offering. The same is true for online teaching.

If you ARE looking to enhance your audio (psssst, your students will thank you), Blue now offers the Blue Yeti Nano. The Nano is a stylish desktop USB microphone designed for those who want high-quality (24-bit/48kHz) audio for quasi-professional recording or streaming: podcasts, vlogs, Skype interviews, and online teaching (via WebEx, Zoom, etc.). At 75% of the size of the original Yeti and Yeti Pro, the Yeti Nano is a bit more “backpack friendly.”

How will this improve my online teaching?
The Blue Nano has a few key features that will significantly improve your audio. First, the Nano uses true condenser capsules rather than the tiny mic element you’ll find in your laptop or webcam. Without going into too much technical detail, the capsules in the Nano are larger, more sensitive, and produce a much more natural sound. Needless to say, this will blow your laptop’s built-in mic away.

Second, your built-in mic is most likely omnidirectional (it picks up sound from every direction). The Nano CAN be set to omnidirectional (ideal when you have a conversation with 3+ people around a table), but it also offers a cardioid polar pattern. This means that when you are in front of the mic, you sound amazing, and sounds that aren’t in front of the mic are much less prominent (ideal for teaching).
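For the acoustics-curious, the textbook first-order polar pattern formula makes that difference concrete (this is generic microphone theory, nothing specific to Blue’s documentation):

    S(θ) = (1 - k) + k · cos θ

Here θ is the angle of the sound source off the front of the mic and k selects the pattern. With k = 0 you get omnidirectional (S = 1 in every direction); with k = 0.5 you get cardioid, which captures a voice straight ahead (θ = 0°) at full strength, side noise (θ = 90°) at half strength, and sound from directly behind (θ = 180°) at essentially zero.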

Third, the Blue Nano has a built-in mute button on the front of the mic. This may seem rather basic, but fumbling around for a virtual mute button when you have a PowerPoint, a chat window, and who knows what else open can be a pain. One quick tap of the green circle button on the front and the mic mutes.

At $99, the Blue Nano is a bit of an investment, and one whose payoff you won’t really notice yourself… but the people on the other side of the conversation will thank you.

October 2018 Adobe Creative Cloud Update Part 1: Adobe Premiere Pro

It’s fall, pumpkin spice is in the air, the Christmas decorations are going up, and software giant Adobe has just released updates to its entire Creative Cloud suite of applications. Because the updates are so extensive, I’ve decided to do a multi-part series of DDMC entries that looks at the new changes in detail for Premiere Pro, After Effects, Photoshop/Lightroom, and a new app, Premiere Rush. I just downloaded Rush to my phone today to put it through its paces, so I’m saving that application for last, but my first rundown of Premiere Pro’s new features is ready to go!

END-TO-END VR 180

Premiere Pro now supports full native video editing for VR 180 content, with the addition of a virtual screening room for collaboration. Specific focal points can be tagged and identified the same way you would in your boring 2D content. Before, you had to remove your headset to do any tagging, but now you can keep your HMD (head-mounted display) on and keep cutting. I’m just getting my feet wet with VR, but I can see how this could revolutionize things for production houses integrating VR into their pipelines. Combined with the robust networking features in Premiere Pro and the symbiotic nature of the Adobe suite of applications, this seems like a nice way to work on VR projects with a larger collaborative scope.

DISPLAY COLOR MANAGEMENT

Adobe has integrated a smart new feature that takes some of the guesswork out of setting up your editing station’s color space. Premiere Pro can now determine the color space of your particular monitor and adjust itself accordingly to compensate for color irregularities across the suite. Red stays red whether it’s displayed in Premiere Pro, After Effects, or Photoshop!

INTELLIGENT AUDIO CLEANUP

Premiere Pro can now scan your audio and clean it up using two new sliders in the Essential Sound panel. DeNoise and DeReverb allow you to remove background noise and reverb from your sound, respectively. Is it a replacement for quality sound capture on site? No. But it does add an extra level of simplicity that I’ve only experienced in Final Cut Pro, so I’m happy about this feature.

PERFORMANCE IMPROVEMENTS

Premiere Pro is faster all around, but if you’re cutting on a Mac you should experience a notable boost thanks to new hardware-based encoding and decoding for the H.264 and HEVC codecs. Less rendering time is better rendering time.

SELECTIVE COLOR GRADING

The Lumetri Color tools and grades are becoming more fine-grained. This is a welcome addition, as Adobe discontinued SpeedGrade and folded it into Premiere Pro a while ago. All your favorite Lumetri looks remain, but video can now be adjusted to match the color of any still photo or swatch you like. Colors can also be isolated and targeted for adjustment, which is cool if you want to change a jacket, eye, or sky color.

EXPANDED FORMAT SUPPORT

Adobe Premiere now supports ARRI Alexa LF, Sony Venice V2, and the HEIF (HEIC) capture format used by iPhone 8 and iPhone X.

DATA DRIVEN INFOGRAPHICS

Because of the nature of my work as a videographer for an institution of higher education, this feature actually has me the most excited. Instructional designers are constantly looking for ways to “jazz up” their boring tables into something visually engaging. Now there is a whole slew of visual options with data-driven infographics. All you have to provide is the data in spreadsheet form; you can then drag and drop it onto one of the many elegant templates to build lower thirds, animated pie charts, and more. It’s a really cool feature I plan to put through its paces on a few projects in place of floating prefabricated pie charts.
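To make that concrete, here’s a rough sketch of the kind of spreadsheet you’d hand to one of those templates. The column names and numbers below are purely illustrative (match them to whatever fields the template you pick actually maps); I’m writing the file with a few lines of Python only because that’s how I tend to pull numbers out of our systems, and typing the same table straight into Excel and saving it as a CSV works just as well.

    # Illustrative only: writes a tiny CSV of made-up course stats that could be
    # dropped onto a data-driven Motion Graphics template.
    # The column names are hypothetical; use whatever fields your template expects.
    import csv

    rows = [
        {"Semester": "Fall 2017", "Enrolled": 120, "Completed": 98},
        {"Semester": "Spring 2018", "Enrolled": 135, "Completed": 110},
        {"Semester": "Fall 2018", "Enrolled": 160, "Completed": 131},
    ]

    with open("course_stats.csv", "w", newline="") as csv_file:
        writer = csv.DictWriter(csv_file, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)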

All these new additions make Adobe Premiere Pro a solid one-stop editing platform, and combined with the rest of the Adobe suite, it’s easy to see the deep pool of creative options that make it an industry standard!

Stay tuned for Part II: Premiere Rush!

Logitech Spotlight – The Evolution of the Pointer

I’m sure if we went far enough back in time, there was once a person in a cave teaching tribal hunting techniques while pointing at a cave drawing with a stick. And thus, the “pointer” was born. Sure, the stick became more uniform; it even evolved to collapse and fit neatly in a pocket protector! But it was still, in essence, a stick. As classroom technology advanced, traditional pointers simply weren’t long enough to keep up with ever-increasing screen sizes. Pointers also required that the pointer-wielder (I’m not sure if that’s a word) be within three or four feet of the content. So, in the late 1990s, the laser pointer became, and continues to be, all the rage for presenters.

But as with all things AV, here comes that pesky technology to throw a wrench into our perfect laser-stick device. While the laser pointer worked wonderfully for people physically in the room, it didn’t allow remote participants (via WebEx, Skype, Zoom, etc.) to join in the pointing fun. Remote participants were usually reduced to looking at a postage-stamp-sized video feed, occasionally seeing a bit of red flash across the screen. Even worse, the booming voice of “we can’t see what you’re pointing at!” never blended well with a well-choreographed presentation.

Enter the Logitech Spotlight. When I opened the package, I really didn’t understand what it was. I was expecting an elegant upgrade to their previous laser pointers, but it clearly didn’t have a laser. I thought, “Gee, that’s an expensive PowerPoint forward/reverse device.” Clearly, I had no clue as to the power that was well masked in this seemingly benign device. I connected the fob to my computer and launched an Apple Keynote presentation, and nothing happened. Hmmm, so I broke down and read the instructions (to be clear, there was a VERY clear sticker on the device saying “Download software to activate highlight, magnify, and timer”… but who reads stickers these days?). After installing the Spotlight software from Logitech’s website and charging the remote via USB-C (my MacBook Pro’s power supply worked like a charm), I was still a little stumped. OK, so I went into the software and programmed the forward and reverse buttons, and boom, we had a very nice little remote. But what was this top button that looked like a laser pointer button? I tapped it, and nothing really happened. So I pressed it for a second or so, and the spotlight feature appeared on the screen. “Oh, cool, so it’s like a virtual ‘laser’ pointer, but without the laser.” Then it hit me… this is actually a big deal.

By virtualizing the pointer, individuals viewing the presentation remotely can also follow along without having to ask the above-mentioned “So, what are you pointing at?” The implications are wide-reaching in higher education. From Panopto classroom recordings to WebEx and Zoom meetings, even Skype calls or YouTube videos, anything that shares the screen can take advantage of this type of pointer. Yes, many of these platforms have built-in virtual pointers, but that requires the presenter to be tied to the computer’s mouse. Even if you have a wireless mouse, you’re still tethered to a desktop or table surface. The Logitech Spotlight frees the presenter to walk anywhere in the room. The software is very customizable, so while you can use the very cool spotlight, you can also magnify an area or set the device to work more like a traditional laser pointer.

But wait, there’s more!!! The Logitech Spotlight also offers a timer that vibrates the remote to help keep your presentation on schedule. This feature is a big bonus for those folks who have time-sensitive presentations. Finally, the remote can act as a wireless mouse for the basic button pushing you might need during a presentation (think start/stop a video, close a window kind of control). It’s great in a pinch, but don’t throw your wireless mouse away; the cursor control is intended only for the basics, and it’s only as good as how steady your hand is.

If I have a criticism, it would be that the remote would occasionally “jitter” (not hit the exact spot I wanted) or momentarily lose connectivity. This may be due to my penchant for upgrading my Mac to the latest and greatest OS before considering how it may impact the applications I use. Still, I found the device to be game-changing if you live and die by the pointer.

 

T1V ThinkHub

With the rise of active learning in higher education, AV groups have been tasked with designing, installing, and managing the unique and complex digital media systems these spaces require. Unlike traditional classrooms, where you may see a few projectors, a single control interface, and a few inputs for a laptop or document camera, active learning environments may have dozens of inputs and destinations. There have been three schools of thought on how to approach this issue:

Hardware: The “throw a bunch of hardware at the problem” approach has been deployed in many active learning environments. This configuration can include a large 16×16 or 32×32 matrix switcher that functions as the nexus for faculty- and student-generated content. These systems generally work well, but deploying one can be expensive ($80,000-plus expensive), complex enough to require a specialized programmer and installer (or an external AV integrator), and prone to hardware or cabling failures, especially if the hardware is moved around the room as the classroom layout changes. (Examples: Extron and Crestron hardware installs)

Hybrid: Hybrid solutions use a combination of specialized (usually proprietary) hardware and software to build the active learning environment. These systems are more turnkey but may lack customization and the ability to scale. Hybrid rooms are usually less expensive than true hardware solutions, but you are locked into specialized, often pricey hardware that can’t be repurposed if needed. (Examples: Sony Vision Exchange, Google Jamboard, Cisco Webex Board)

Software: Software-based solutions exist but have generally lagged behind the hardware and hybrid options in reaching the market. Hardware is still required for a “software first” solution, but it’s usually in the form of computers attached to large commodity monitors. Proprietary hardware isn’t necessary, so this keeps costs down on that front.

T1V’s ThinkHub falls squarely into the software category, as it doesn’t require any specialized hardware. It’s difficult to articulate what ThinkHub is, but the best way to describe it is as a canvas where content (videos, PDFs, PowerPoint, etc.) can be dynamically loaded alongside wired and wireless sources (computers, phones, document cameras, microscopes, etc.), and it does this integration seamlessly. If that were all ThinkHub did, I’d be impressed… but the real magic is its ability to dynamically share content in multiple directions (from the faculty to the students and vice versa). Also, while a faculty member can use ThinkHub’s touch interface, they can also control the canvas from a wireless tablet, freeing them from “always being at the front of the class.”

ThinkHub is packed with useful annotation tools, can save and recall sessions (ideal if you give the same in-depth presentation multiple times a day), and integrates with Zoom, WebEx, BlueJeans, and Skype for Business.

Overall, we were impressed with the platform. T1V has offered to make their showroom in Charlotte available should any group on campus be interested in testing it further.

Camera Tracking Review

A few weeks back, I had the opportunity to remotely demo a few autonomous camera tracking systems for use in a classroom environment. The idea is appealing: by upgrading the camera in the classroom, you move away from a static back-of-room shot to a considerably more impressive shot that follows the presenter.

The first system we demoed was the PTZOptics Perfect Track. During the demonstration, the camera was able to gracefully pan and tilt as the subject moved around the front of the room. More importantly, it was configured to return to a general preset when no subject was in the predefined presentation area (this prevents the camera from getting “stuck” at the edge of the frame or at a door when someone exits the room… a real issue with older tracking systems). It took a considerable amount of my supervisor and me directing the demo individuals to “run faster” and “cover your face and move to the very edge of the tracking zone” before we were able to “trick” the system into acting in a slightly unnatural way… and even then it responded well, simply moving back to the “safe” preset. Most importantly, the majority of the time the camera movements felt very natural, almost to the point where it was hard to tell them apart from a mid-level camera operator (yes, I’ve seen MUCH worse human camera operators). The only real “gotchas” with this platform were that it’s SDI-based (not a major issue, but most classroom AV setups are more HDMI-friendly) and the price (during the demo, it was said to be in the $8,000+ range). But if you are filming in a classroom for a full semester, that $8,000 price is very reasonable compared to the cost of hiring a camera operator.

The second system we reviewed was the HuddleCamHD SimplTrack. USB-only, it also proved to be a good solution, though perhaps slightly less impressive (and roughly $2,000 less expensive) than the PTZOptics system. It was also able to track the subject in a predefined presentation zone, but there were more frequent “misses” with the camera. This could have been due to the demo environment (there were a few minor obstructions in front of the tracking subject). It also offered tracking zones and a “safe” preset that worked as advertised. Overall, I’d also recommend this system for consideration.

The Good:

  • The systems are improving in terms of their ability to intelligently track an individual or group of individuals
  • The robotic-looking pan and tilt of older systems is nearly a thing of the past, and the footage looked very natural
  • The video from these cameras is vastly superior to that of static, wide-angle, back-of-the-room cameras

The Bad:

  • The hardware/software costs for these systems are high
  • Setup is more involved
  • These cameras don’t work in every environment (they don’t like windows, reflective surfaces, and glare)

To sum up, we’re almost to the point where classroom AV folks should consider deploying these solutions in their highly utilized classrooms as a standard install. I’d still like to see a more affordable option (wouldn’t we all?), but the price is falling and the functionality is at a tipping point.