Blue Yeti Nano

One of the most overlooked technical aspects of in-office or at-home online teaching is audio capture. AV folks are quick to recommend $100-$200 webcams to significantly improve the video quality and flexibility of the teaching environment. But when it comes to audio, many seem content delegating sound capture to the built-in microphone of the webcam… or worse, the built-in microphone of the laptop or desktop (shiver!). The reality is that in most online teaching environments, the audio is as important as, if not more important than, the video. Consider this: if you are watching a do-it-yourself YouTube video and the video is “OK-ish” (good enough to follow along) but the audio is good, you are still likely to follow along and learn from the recording. But if the video is amazing and the audio is poor, it doesn’t take long before you move on to the next offering. The same is true for online teaching.

If you ARE looking to enhance your audio (psssst, your students will thank you), Blue now offers the Blue Yeti Nano. The Nano is a stylish desktop USB microphone designed for those who want high-quality (24-bit/48kHz) audio for quasi-professional recording or streaming: podcasts, vlogs, Skype interviews, and online teaching (via WebEx, Zoom, etc.). At 75% the size of the original Yeti and Yeti Pro, the Yeti Nano is a bit more “backpack friendly.”

How will this improve my online teaching?
The Blue Nano has a few key features that will significantly improve your audio. First, the Blue Nano uses a proper condenser capsule rather than the tiny mic you’ll find in your laptop or webcam. Without going into too much technical detail, the condenser mic in the Nano is more complex, offers more sensitivity, and delivers a more natural sound. Needless to say, this will blow your laptop’s built-in mic away.

Second, your built-in mic is most likely omnidirectional (it picks up sound from every direction). The Nano CAN be set to omnidirectional (ideal when you have a conversation with 3+ people around a table), but it also offers a cardioid polar pattern. This means that when you are in front of the mic, you sound amazing, and sounds that aren’t in front of the mic are less prominent (ideal for teaching).
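The difference between the two patterns can be captured in a simple textbook formula: a first-order microphone’s relative sensitivity at angle θ is roughly α + (1 − α)·cos θ, where α = 1 gives omnidirectional and α = 0.5 gives cardioid. A minimal sketch (the function name and numbers here are illustrative, not Blue’s spec):

```python
import math

def pattern_gain(theta_deg, alpha):
    """First-order polar pattern: alpha=1.0 is omnidirectional,
    alpha=0.5 is cardioid. Returns relative sensitivity (0..1)."""
    theta = math.radians(theta_deg)
    return abs(alpha + (1 - alpha) * math.cos(theta))

# Cardioid: full sensitivity on-axis, a null directly behind the mic.
print(pattern_gain(0, 0.5), pattern_gain(180, 0.5))
# Omni: the same sensitivity in every direction.
print(pattern_gain(0, 1.0), pattern_gain(180, 1.0))
```

That rear null is exactly why cardioid works so well for teaching: keyboard clatter and room echo behind the mic mostly disappear.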

Third, the Blue Nano has a built-in mute button on the front of the mic. This may seem rather basic, but fumbling around for a virtual mute button when you have a PowerPoint, a chat window, etc. open can be a pain. One quick tap of the green circle button on the front and the mic mutes.

At $99, the Blue Nano is a bit of an investment, and one whose benefit you may not really notice yourself, but the people on the other side of the conversation will thank you.

October 2018 Adobe Creative Cloud Update Part 1: Adobe Premiere Pro

It’s fall, pumpkin spice is in the air, the holidays are approaching, Christmas decorations are going up, and software giant Adobe has just released updates to its entire Creative Cloud suite of applications. Because the updates are so extensive, I’ve decided to do a multi-part series of DDMC entries that focuses in detail on the new changes in Premiere Pro, After Effects, Photoshop/Lightroom, and a new app, Premiere Rush. I just downloaded Rush to my phone today to put it through its paces, so I’m saving that application for last, but my first rundown of Premiere Pro’s new features is ready to go!


Premiere Pro now supports full native video editing for 180 VR content, with the addition of a virtual screening room for collaboration. Specific focal points can be tagged and identified in the same way you would in your boring 2D content. Before, you had to remove your headset to do any tagging, but now you can keep your HMD (Head Mounted Display) on and keep cutting. I’m just getting my feet wet with VR, but I can see how this could revolutionize things for production houses integrating VR into their production workflow. Combined with the robust networking features in Premiere Pro and the symbiotic nature of the Adobe suite of applications, this seems like a nice way to work on VR projects with a larger collaborative scope.


Adobe has integrated a smart new feature that takes some of the guesswork out of setting up your editing station’s color space. Premiere Pro can now detect the color space of your particular monitor and adjust itself accordingly to compensate for color irregularities across the suite. Red stays red whether it’s displayed in Premiere Pro, After Effects, or Photoshop!


Premiere Pro can now scan your audio and clean it up using two new sliders in the Essential Sound panel. DeNoise and DeReverb allow you to remove background noise and reverb from your sound, respectively. Is it a replacement for quality sound capture on site? No. But it does add an extra level of simplicity that I’ve only experienced in Final Cut Pro, so I’m happy about this feature.
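To get a feel for what a “denoise” slider is doing under the hood, here is the crudest possible cousin of the idea: a noise gate. This is NOT how Adobe’s DeNoise works (that operates spectrally), just a toy sketch of the principle, with hypothetical sample values:

```python
def noise_gate(samples, threshold=0.02):
    """Crude noise gate: zero out samples whose absolute level falls
    below the threshold. Real tools like DeNoise work in the frequency
    domain, but the idea is the same -- suppress low-level content,
    keep the signal. samples: floats in [-1.0, 1.0]."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# A loud signal passes through; low-level hiss is removed.
audio = [0.5, -0.4, 0.01, -0.015, 0.3]
print(noise_gate(audio))
```

A single threshold is exactly why cheap gates sound “choppy,” and why a slider you can ease in, like Premiere’s, is so much nicer in practice.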


Premiere Pro is faster all around, but if you’re cutting on a Mac you should experience a notable boost due to the new hardware-based encoding and decoding for the H.264 and HEVC codecs. Less rendering time is better rendering time.


Lumetri Color tools and grades are becoming more fine-tuned. This is a welcome addition, as Adobe discontinued SpeedGrade and folded it into Premiere Pro a while ago. All your favorite Lumetri looks remain, but video can now be adjusted to fit the color space of any still photo or swatch you like. Colors can also be isolated and targeted for adjustment, which is cool if you want to change a jacket, eye, or sky color.


Adobe Premiere now supports ARRI Alexa LF, Sony Venice V2, and the HEIF (HEIC) capture format used by iPhone 8 and iPhone X.


Because of the nature of my work as a videographer for an institution of higher education, this feature actually has me the most excited. Instructional designers are constantly looking for ways to “jazz up” their boring tables into something visually engaging. Now there is a whole slew of visual options with data-driven infographics. All you have to provide is the data in spreadsheet form; then you can drag and drop it onto one of the many elegant templates to build lower thirds, animated pie charts, and more. It’s a really cool feature I plan to put through its paces on a few projects in place of floating prefabricated pie charts.
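The “just provide a spreadsheet” workflow boils down to a simple transformation the template performs for you: labels and values in, percentages and slice angles out. A hypothetical sketch (the CSV columns and function name are my own, not Adobe’s format):

```python
import csv
import io

def pie_slices(csv_text):
    """Turn two-column CSV data (label,value) into pie-chart slices:
    each label gets a percentage and a sweep angle in degrees."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    total = sum(float(v) for _, v in rows)
    return [(label, 100 * float(v) / total, 360 * float(v) / total)
            for label, v in rows]

data = "Video,50\nAudio,25\nGraphics,25\n"
for label, pct, angle in pie_slices(data):
    print(f"{label}: {pct:.0f}% -> {angle:.0f} degrees")
```

The animated templates are doing essentially this math, then tweening the sweep angles over time.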

All these new additions make Adobe Premiere Pro a solid one-stop editing platform, but combined with the rest of the Adobe suite, one can easily see the endless pool of creative options that make it an industry standard!

Stay tuned for Part II: Adobe Premiere Rush!

Logitech Spotlight – The Evolution of the Pointer

I’m sure if we went far enough back in time, there was once a person in a cave teaching tribal hunting techniques while pointing at a cave drawing with a stick. And thus, the “pointer” was born. Sure, the stick became more uniform; it even evolved to collapse and fit neatly in a pocket protector! But it was still, in essence, a stick. As classroom technology advanced, traditional pointers simply weren’t long enough to keep up with the ever-increasing screen sizes. Also, pointers required that the pointee (I’m not sure if that’s a word) be within three or four feet of the content. So, in the late 1990s, the laser pointer became, and continues to be, all the rage for presenters.

But as with all things AV, here comes that pesky technology to throw a wrench in our perfect laser-stick device. While the laser pointer worked wonderfully for people physically in the room, it didn’t allow remote participants (via WebEx, Skype, Zoom, etc.) to join in the pointing fun. Remote participants were usually reduced to looking at a postage-stamp-sized video feed, occasionally seeing a bit of red flash on the screen. Even worse, the booming voice of “we can’t see what you’re pointing at!” never blended well with a well-choreographed presentation.

Enter the Logitech Spotlight. When I opened the package, I really didn’t understand what it was. I was expecting an elegant upgrade to their previous laser pointers, but it clearly didn’t have a laser. I thought, “Gee, that’s an expensive PowerPoint forward/reverse device.” Clearly, I had no clue as to the power that was well masked in this seemingly benign device. I connected the fob to my computer and launched an Apple Keynote presentation, and nothing happened. Hmmm. So I broke down and read the instructions (to be clear, there was a VERY clear sticker on the device saying “Download software to activate highlight, magnify, and timer”… but who reads stickers these days?). After installing the Spotlight software from Logitech’s website and charging the remote via USB-C (my MacBook Pro’s power supply worked like a charm), I was still a little stumped. OK, so I went into the software and programmed the forward and reverse buttons, and boom, we had a very nice little remote. But what was this top button that looked like a laser pointer button? I tapped it, and nothing really happened. So I pressed it for a second or so, and the spotlight feature appeared on the screen. “Oh, cool, so it’s like a virtual ‘laser’ pointer, but without the laser.” Then it hit me… this is actually a big deal.

By virtualizing the pointer, individuals viewing the presentation remotely can also follow along without having to ask the above-mentioned “So, what are you pointing at?” The implications are wide-reaching in higher education. From Panopto classroom recordings to WebEx and Zoom meetings, even Skype calls or YouTube videos can take advantage of this type of pointer when sharing content. Yes, many of these platforms have built-in virtual pointers, but those require that the presenter be tied to the computer’s mouse. Even if you have a wireless mouse, you’re still tethered to a desktop or table surface. The Logitech Spotlight frees the presenter to walk anywhere in the room. The software is very customizable, so while you can use the very cool spotlight, you can also magnify an area or set the device to work more like a traditional laser pointer.

But wait, there’s more!!! The Logitech Spotlight also offers a timer that vibrates the remote to help keep your presentation on track time-wise. This feature is a big bonus for folks who have time-sensitive presentations. Finally, the remote can act as a wireless mouse for the basic button pushing you might need during presentations (think start/stop videos, close-a-window kind of control). It’s great for that, but don’t throw your wireless mouse away; the Spotlight is really only intended for basic control, and it’s only as good as how steady your hand is.

If I have a criticism, it would be that the remote would infrequently “jitter” (not hit the exact spot I wanted) or momentarily lose connectivity. This may be due to my penchant for upgrading my Mac to the latest and greatest OS before considering how it may impact the applications I use. Still, I found the device to be game-changing if you live and die by the pointer.


T1V ThinkHub

With the rise of active learning in higher education, AV groups have been tasked with designing, installing, and managing these unique and complex digital media systems. Unlike traditional classrooms, where you may see a few projectors, a single control interface, and a few inputs for a laptop or document camera, active learning environments may have dozens of inputs and destinations. There have been three schools of thought on how to approach this issue:

Hardware: The “throw a bunch of hardware at the problem” approach has been deployed in many active learning environments. This configuration can include a large 16×16 or 32×32 matrixed switcher that functions as the nexus for faculty- and student-generated content. These systems generally work well, but deploying one can be expensive ($80,000-plus expensive), complex enough to require a specialized programmer and installer (or an external AV integrator), and prone to hardware or cabling failures, especially if the hardware is moved around the room as the classroom layout changes. (Examples: Extron and Crestron hardware installs)

Hybrid: Hybrid solutions use a combination of specialized hardware (usually proprietary) and software to build the active learning environment. These systems take a more turnkey approach but may lack customization and the ability to scale. Hybrid rooms are usually less expensive than true hardware solutions, but you are locked into specialized and usually expensive hardware that can’t be repurposed if needed. (Examples: Sony Vision Exchange, Google Jamboard, Cisco Webex Board)

Software: Software-based solutions are available but have generally trailed the hardware and hybrid options in availability. Hardware is still required for a “software first” solution, but it’s usually in the form of computers attached to large commodity monitors. Proprietary hardware isn’t necessary, which keeps costs down on that front.

T1V’s ThinkHub falls squarely in the software variety, as it doesn’t require any specialized hardware. It’s difficult to articulate what ThinkHub is, but the best way to describe it is as a canvas where content (videos, PDFs, PowerPoint, etc.) can be dynamically loaded alongside wired and wireless sources (computers, phones, document cameras, microscopes, etc.), and it does this unique integration seamlessly. If that were all ThinkHub did, I’d be impressed… but where the magic happens is in ThinkHub’s ability to dynamically share content in multiple directions (from the faculty to the students and vice versa). Also, while a faculty member can use ThinkHub’s touch interface, she or he can also control the canvas with a wireless tablet device, freeing the instructor from “always being at the front of the class.”

ThinkHub is packed with useful annotation tools, can save and recall sessions (ideal if you give in-depth presentations multiple times a day), and integrates with Zoom, WebEx, BlueJeans, and Skype for Business.

Overall, we were impressed with the device. T1V has offered to make their showroom in Charlotte available should any group on campus be interested in testing the platform further.

Camera Tracking Review

A few weeks back, I had the opportunity to remotely demo a few autonomous camera-tracking systems for use in a classroom environment. The idea is appealing: by updating the camera in the classroom, you move away from a static back-of-room shot to a considerably more engaging one that follows the presenter.

The first system we demoed was the PTZOptics Perfect Track. During the demonstration, the camera was able to gracefully pan and tilt as the subject moved around the front of the room. More importantly, it was configured to return to a general preset when no subject was in the predefined presentation area (this prevents the camera from getting “stuck” at the edge of the frame or at a door when someone exits the room… a real issue with older tracking systems). It took a considerable amount of my supervisor and me directing the demo individuals to “run faster” and “cover your face and move to the very edge of the tracking zone” before we were able to “trick” the system into behaving in a slightly unnatural way… and even then it responded well, simply moving back to the “safe” preset. Most importantly, the majority of the time the camera movements felt very natural, almost to the point where it was hard to tell it apart from a mid-level camera operator (yes, I’ve seen MUCH worse human camera operators). The only real “gotchas” with this platform were that it’s SDI-based (not a major issue, but most classroom AV setups are more HDMI friendly) and the price (during the demo, it was said to be in the $8,000+ range). But if you are filming in a classroom for a semester, that $8,000 price is very reasonable when compared to the cost of hiring a camera operator.
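The “return to a safe preset” behavior described above is worth dwelling on, because it’s what separates these systems from the older, twitchy trackers. A hypothetical sketch of the per-frame decision logic (the zone values, patience count, and action names are my own illustration, not PTZOptics’ implementation):

```python
def next_camera_action(subject_x, zone=(0.1, 0.9), lost_frames=0, patience=30):
    """Decide what a tracking camera should do this frame.
    subject_x: subject's horizontal position as a fraction of the frame
    width (None if no subject is detected). zone: the predefined
    presentation area. After `patience` frames with no subject in the
    zone, return to the wide 'safe' preset rather than chasing the
    edge of the frame or a closing door.
    Returns (action, target, updated_lost_frames)."""
    if subject_x is not None and zone[0] <= subject_x <= zone[1]:
        return ("track", subject_x, 0)                  # follow the presenter
    if lost_frames + 1 >= patience:
        return ("safe_preset", None, lost_frames + 1)   # give up gracefully
    return ("hold", None, lost_frames + 1)              # pause before recentering
```

The “hold” state is the trick: by waiting a beat before recentering, the camera avoids lurching every time the presenter briefly steps out of the zone.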

The second system we reviewed was the HuddleCamHD SimplTrack. While less expensive and USB-only, it also proved to be a good solution, though slightly less impressive (and ~$2,000 less expensive) than the PTZOptics offering. It was also able to track the subject in a predefined presentation zone, but there were more frequent “misses” with the camera. This could have been due to the demo environment (there were a few minor obstructions in front of the tracking subject). It also had tracking zones and a “safe preset” that worked as described. Overall, I’d also recommend this system for consideration.

The Good:

  • The systems are improving in terms of their ability to intelligently track an individual or group of individuals
  • The robotic pan and tilt is nearly a thing of the past and the footage looked very natural
  • The video from these cameras is vastly superior to static, wide angle, back of the room cameras

The Bad:

  • The hardware/software costs for these systems are high
  • Setup is more involved
  • These cameras don’t work in every environment (they don’t like windows, reflective surfaces, and glare)

To sum up, we’re almost to the point where classroom AV folks should consider deploying these solutions in their highly utilized classrooms as a standard install. I’d still like to see a more affordable option (wouldn’t we all?), but the price is falling and the functionality is at a tipping point.


Kaptivo

Let’s face it… humans like articulating concepts by drawing on a wall. This behavior dates back over 64,000 years to some of the first cave paintings. While we’ve improved on the concept over the years, transitioning to clay tablets and eventually blackboards and whiteboards, the basic idea has remained the same. Why do people like chalkboards and whiteboards? Simple: it’s a system you don’t need to learn (or you learned as a child), you can quickly add, adjust, and erase content, it’s multi-user, it doesn’t require power, it never needs a firmware or operating system update, and it lasts for years. While I’ll avoid the grand “chalkboard vs. whiteboard” debate, we can all agree that the two communication systems are nearly identical and are very effective in teaching environments. But as classrooms transition from traditional learning environments (one professor teaching a small-to-medium number of students in a single classroom) to distance education and active learning environments, compounded by our rapid transition to digital platforms… the whiteboard has had a difficult time making the leap. There have been many (failed) attempts at digitizing the whiteboard; just check eBay. Most failed for a few key reasons: they were expensive, they required the user to learn a new system, they didn’t interface well with other technologies… oh, and did I mention that they were expensive?

Enter Kaptivo, a “short throw” webcam-based platform for capturing and sharing whiteboard content. During our testing (Panopto sample), we found that the device was capable of capturing the whiteboard image, cleaning it up with a bit of Kaptivo processing magic, and converting the content into an HDMI-friendly format. The power of Kaptivo is in its simplicity. From a faculty/staff/student perspective, you don’t need to learn anything new… just write on the wall. But that image can now be shared with our lecture capture system or any AV system you can think of (WebEx, Skype, Facebook, YouTube, etc.). It’s also worth noting that Kaptivo is capable of sharing the above content through its own Kaptivo software. While we didn’t specifically test this product, it looked to be an elegant solution for organizations with limited resources.
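The “processing magic” step, stripped to its essence, is about flattening out uneven room lighting and pushing the board toward pure white so the pen strokes pop. A toy sketch of that idea (Kaptivo’s actual pipeline is proprietary; the cutoff value and per-row background estimate here are my own simplifications):

```python
def clean_whiteboard(gray, stroke_cutoff=0.75):
    """Toy whiteboard cleanup: estimate the bright board background per
    row, normalize out uneven lighting, then push pixels to pure white
    unless they are dark enough to be pen strokes.
    gray: 2D list of floats in (0, 1], where 1.0 is white."""
    cleaned = []
    for row in gray:
        background = max(row)  # the board itself is the brightest thing
        cleaned.append([px / background if px / background < stroke_cutoff
                        else 1.0 for px in row])
    return cleaned

# Dim lighting (0.8 background) with one dark pen stroke (0.2):
board = [[0.8, 0.8, 0.2, 0.8]]
print(clean_whiteboard(board))
```

Even this crude version shows why the result looks so much cleaner than a raw webcam feed: glare gradients and dim corners get normalized away before anyone sees them.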

The gotchas: Every new or interesting technology has a few gotchas. First, Kaptivo currently only works with whiteboards (sorry, chalkboard fans). Also, there isn’t any way to daisy-chain or “stitch” multiple Kaptivo units together for longer whiteboards (not to mention how you would share such content). Finally, the maximum whiteboard size is currently 6′ x 4′, which isn’t all that big in a classroom environment.

At the end of the day, I could see this unit working well in a number of small collaborative learning environments, flipped classrooms, and active learning spaces. We received a pre-production unit, so I’m anxious to see what the final product looks like and whether some of the above-mentioned limitations can be overcome. Overall, it’s a very slick device.

The Impact of Artificial Intelligence on Video

Big advances are taking place at the intersection of video and AI (Artificial Intelligence). I ran across an interesting article in Streaming Media Magazine called The State of Video and AI 2018 that takes stock of some of these changes, and I wanted to share it with you as we look toward what’s ahead for Duke.


We’ve been following trends in this area from a number of directions, including video captioning. As many of you are aware, the need to caption the videos we produce at Duke is increasing, but the costs of captioning services, most of which rely on intensive manual labor, are high. However, new tools like IBM’s Watson, which includes more than 60 AI services, including machine captioning (with accuracy advertised as a whopping 96%), seem poised to shift the balance and make it possible for us to caption videos on a wider scale. We demoed Watson recently and will continue to monitor it, as well as other tools in this space.
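When vendors quote a captioning accuracy figure like that 96%, it typically means 1 minus the word error rate (WER): the number of word substitutions, insertions, and deletions needed to turn the machine transcript into a human reference, divided by the reference length. A sketch of the standard computation (the example sentences are my own):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) /
    reference word count, via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / len(ref)

ref = "the quick brown fox jumps over the lazy dog"
hyp = "the quick brown box jumps over the lazy dog"
print(f"accuracy: {1 - word_error_rate(ref, hyp):.0%}")
```

It’s a useful lens for evaluating demos like our Watson test: one wrong word in a nine-word sentence already drops you to roughly 89% accuracy, which puts an advertised 96% in perspective.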

In this context I also wanted to point out that we recently began offering ASR (Automatic Speech Recognition) for Panopto, Duke’s lecture capture service. We are excited about the opportunities this new functionality will offer students and other viewers who are looking to drill down to points in videos where specific terms are found. This feature adds to Panopto’s already healthy set of features built around in-video search, including OCR (Optical Character Recognition) for slide content, and user-created time-stamped notes and bookmarks.
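The payoff of ASR for viewers is essentially search over a timestamped transcript: find every moment a term is spoken and jump there. A hypothetical sketch of that lookup (the data format and function name are illustrative, not Panopto’s API):

```python
def find_term(transcript, term):
    """Return the timestamps (in seconds) of every caption line that
    mentions the search term, case-insensitively. transcript is a list
    of (seconds, text) pairs, like an ASR caption track."""
    term = term.lower()
    return [t for t, text in transcript if term in text.lower()]

captions = [
    (12.0, "Today we cover mitosis and meiosis"),
    (95.5, "Prophase is the first stage of mitosis"),
    (150.0, "Now for something completely different"),
]
print(find_term(captions, "mitosis"))  # jump points for the viewer
```

Combine that with OCR on slide text and user bookmarks, and a student reviewing for an exam can skip straight to the two minutes that matter.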

AV in a Box – The Sub $25K Classroom

As the expectations for classroom and meeting-space AV change over time, so too must the approach to delivering advanced AV systems for teaching and learning environments.

The Sanford School of Public Policy (SSPP), in collaboration with the Duke Office of Information Technology (OIT) and Trinity Technology Services (TTS), was able to take a tentative list of desired outcomes for a scheduled AV update to four classrooms and translate it into a cost-effective and robust classroom AV design. The process started with the Sanford School approaching my group (Media Technologies) at OIT, informing us that they were looking to upgrade a few classroom environments, and asking if we could provide some general guidance to ensure they were maximizing their available funds. Based on the initial wants-and-needs assessment, OIT sketched a base AV design and reviewed it with TTS to ensure feasibility and obtain pricing. From that point, TTS finalized the design with a few minor modifications and provided pricing. Ultimately, TTS was selected as the AV integrator due to their cost-effective pricing (roughly a 35%+ cost savings) and solid track record.

About the spaces:

  • Laser Projectors (5,000 lumens at 1920×1080, rated for 20,000 hours – no bulb replacements!)
  • Front and Back Cameras (no pan or tilt)
  • Built-in VoIP Calling
  • Integrated Lecture Capture (Panopto)
  • 7″ Touch Panel for Control
  • AV Bridge Standard (for WebEx, Skype, Google Hangout, YouTube, Facebook, etc.)

The system recycled the previous AV rack, speakers, and projector mount, so this was far from new construction. The Sanford School of Public Policy has indicated that the install was very smooth, with only minor issues in the ~4 months since. So, it survived a full semester.

The pros and cons of such a system are difficult to quantify, but I’ll give it a shot.

  • significant reduction in overall cost (~35%)
  • simplified install (TTS has a robust understanding of Duke’s network, VoIP systems, scheduling, etc., and it really helps)
  • good support, especially if you have tier one local support.
  • a unified graphical user interface (faculty moving from one of the 170+ TTS rooms to a Sanford School of Public Policy room will experience a similar user interface)
  • they understand the unique AV needs of an academic teaching environment.
  • did I mention the price?

Equally difficult would be to list the cons of using TTS. Instead of listing cons, I’ll list a few considerations when working with TTS.

  • TTS may not be an ideal fit for advanced rooms (“Advanced” is a relative term… they have done some impressively complex work and they continue to surprise, but there is a limit).
  • TTS may not be the perfect fit for new construction (Have they done new construction? Yes! Can they do all new construction? Probably not.)
  • There are limitations to their programming (TTS has a range of solid classroom designs, good programmers, and a dedication to clean design, but it’s best to “borrow” their best designs vs. reinventing the wheel.)

This was a wonderful project, and I look forward to reviewing this project in a few years to see how happy the Sanford School of Public Policy is with the overall project. Only time will tell.


Logitech DDMC Session

On November 30th, Warren Widener of Logitech visited the Technology Engagement Center on Duke’s campus to showcase three pieces of technology ideal for small and medium-sized conference rooms.

We all know Logitech for their webcams, keyboards, and mice, but over the past few years they have expanded into small, and not-so-small, business environments as more organizations move toward bring-your-own-device (BYOD) meeting spaces. Logitech has achieved this by integrating their various devices into the flexible and cost-effective offerings highlighted below. While they may be careful not to take on “the trons” of the industry, it’s clear they are looking to move up the food chain.

First, Warren provided a demonstration of the Logitech SmartDock. The SmartDock is essentially a dock for a Microsoft Surface Pro 4 with expanded I/O, designed to interface with Skype for Business and in-room Logitech hardware (cameras/mics) to reduce launching an audio or video conference to the push of a button. The device is intended to live in the meeting space and act as the meeting scheduler and AV bridge. While not a perfect fit for Duke due to our deep enterprise WebEx integration, for businesses that rely on Skype for Business, this device brings one-touch video conferencing one step closer to reality.

Also highlighted at the session was the Logitech Meetup. The Meetup is an $899 MSRP all-in-one combining a wide-angle webcam, a three-element mic array, and tuned speakers with built-in acoustic echo cancellation, and it ticks a number of boxes in small huddle-room design. Unlike some of Logitech’s previous all-in-one designs, the Meetup is designed to be permanently mounted above or below a monitor and comes with a wall-mount bracket. The super-wide 120-degree field of view from the camera ensures everyone in a small conference room will be in the shot.

Finally, the session briefly touched on Logitech’s GROUP offering. We’ve seen previous iterations of this device, but Logitech promises they continue to improve its overall audio quality and features. While its pan-tilt-zoom camera, high-quality mics and speaker, and open nature (it works with WebEx, Skype, Google Hangouts, Facebook Live, etc.) make it ideal for larger BYOD spaces, the lack of integrated voice over IP (VoIP) makes it a more difficult sell in some of our more robust and demanding spaces.

Duke Panopto Upgrade, Tuesday, December 19, 2017

We’re excited to announce that we’ll be upgrading our current v. 5.3 installation of Panopto to version 5.5 on Tuesday, December 19th, 2017. Some of the headline features we’ll be gaining include:

  • Webcasts are now delivered via HTML5 in both the interactive viewer and the embed viewer. This is one of the final steps in our move away from proprietary plug-in-based technology (Flash, Silverlight) toward a completely browser-based playback architecture.
  • Added the capability to embed a YouTube video within a Panopto session.
  • Added welcome tours to orient new users logging into Panopto.
  • Added Playlists. Playlists allow sessions from any folder within a Panopto site to be presented together in a single, ordered list.


As usual, we expect the system to be offline during business hours on this day. If you have questions, you can contact your Panopto Site Administrator or the OIT Service Desk.

  • 5.4 Full release notes:
  • 5.5 Full release notes: