The Rise and Fall of BYOD

The bring-your-own-device (BYOD) model has been popular for small and medium meeting and teaching spaces. With the rise of inexpensive and ultra-portable laptops and tablets, the traditional “local computer” has slowly lost favor in many spaces. The computer is expensive, requires significant maintenance, and is a prime target for malicious software. Users also generally prefer their own devices, since they know the ins and outs of the hardware and operating system they prefer. The BYOD model worked well when the guest was simply sharing a presentation or video to a local projector or monitor. But as AV systems have grown to include unified communication (UC) systems (WebEx, Zoom, Skype, etc.), the pain points of BYOD have been magnified.

First, when hosting a meeting on a BYOD device, connecting your device to a projector or monitor is usually rather straightforward now that the industry has standardized on HDMI. Yes, you may still need a dongle, but that’s an easy hurdle in 2019. But as we add UC (Zoom, as an example) to the meeting, things get complicated. First, you need to connect the laptop to a local USB connection (this may require yet another dongle). This USB connection may carry the video feed from the in-room camera and the in-room audio feed. This may not sound complicated, but those feeds may not be obvious. For example, the camera feed could be labeled Vaddio, Magewell, or Crestron. With audio, it can be equally difficult to discover the right input, with labels such as USB Audio, Matrox, or Biamp. Sure, many reading this article may be familiar with what these do… but even to a digital media engineer, these labels can mean multiple things.

But, who cares… we are saving money while giving maximum AV flexibility, right? Errr, not really. Yes, those with a technical understanding of how the AV system works will be able to utilize all of the audiovisual capabilities… but for the rest of the world, there might as well not be an AV system in the space. Even worse, anyone who has attended a meeting where it takes 10+ minutes to connect the local laptop to the correct mics, speakers, and camera knows you are losing money in the form of time, compounded by every person in attendance.

The Solution?
Soft codecs to the rescue! With the rise of UC soft codecs (Zoom Rooms, Microsoft Teams Rooms, BlueJeans Rooms, etc.), you can integrate an inexpensive device that is capable of performing a wide range of tasks. First, all of the in-room AV connects to the soft codec, so there’s no fumbling for dongles or figuring out which audio, mic, or speaker input/output is correct. Second, the soft codec monitors the space to ensure the hardware is functioning normally, moving local AV groups out of break/fix and into a managed model. Third, with calendar integration, you can schedule meetings with a physical location. The icing on the cake is that most of these UC soft codecs offer wireless sharing… so you can toss your Apple TV, Solstice Pod, etc. out the window (OK, don’t do that… but it’s one less thing you need to buy during your next refresh). Oh, and don’t even get me started about accessibility and lecture capture!

We have a keen eye on soft codec systems as a potential replacement for traditional classroom AV systems in the mid to long term… and so should you.

Help Us Test Sonix.ai

OIT has been following the evolving world of captioning over the years, in particular monitoring the field for high-quality, affordable services we think would be useful to members of the Duke community. When Rev.com came along, offering guaranteed 99%-accurate human-generated captions for a flat $1.00 a minute (whereas some comparable services were well over $3.00/minute), we took note and facilitated a collaboration with them that has been very productive for Duke. A recent review of our usage shows that a lot of you are using Rev, with a huge uptick in usage over the last couple of years, and we’ve heard few if any complaints about the service.

While there has generally been a dismissive attitude toward machine (automatic) transcription, the newest generation of technology, based on IBM Watson, has become so good that we can no longer (literally) afford to ignore it. With good-quality audio to work from, this speech-to-text engine claims to deliver accuracy of 95% or higher. IBM Watson isn’t a consumer-facing service, but we’ve been on the lookout for vendors building on this platform, and have found one we feel is worth exploring called Sonix. If cost is a significant factor for you, you might consider giving it a try.

Sonix captioning costs a little over 8 cents per minute, and the company has waived the monthly subscription requirement and offers 30 free minutes of captioning for anyone with a duke.edu email address who sets up an account through this page: https://sonix.ai/academic-program/duke-university.

We are not recommending Sonix at this time, but we are interested to hear what your experiences with them are. And we would caution that, as with any machine transcription technology, a review of your captions via the company’s online editor is required if you want to use the output as closed captions (vs. just a transcript). In our initial testing, Sonix’s online editor looks fairly quick and easy to use.

If you set up an account and try Sonix, please reach out to oit-mt-info@duke.edu to let us know what your experiences are and what specific use cases it supports.

 

Quick AV Signal Flow with Lucidchart

When collaborating on the design of classroom AV systems, the ability to rapidly sketch, modify, iterate on, and share a signal flow diagram is invaluable for avoiding expensive mistakes before install. But creating signal flow diagrams has traditionally been a challenge for AV technicians, as the software is either expensive, overly complicated, or locks the AV technician in as the single point of modifications for all time.

First, what is a signal flow diagram, and why do I need one? A signal flow diagram shows the signal path (audio, video, network, control, etc.) from inputs to outputs for the entire AV system. It’s essentially a blueprint for the system… and would you buy a house that didn’t have a blueprint? With a signal flow diagram, most entry-level technicians should be able to diagnose an AV issue down to the cabling or hardware level. Without this diagram, it’s difficult to troubleshoot small systems, and nearly impossible to troubleshoot larger ones.
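If it helps to think of it in code rather than boxes and arrows, here’s a minimal sketch (in Python, with made-up device names, not a real Duke room) of the underlying idea: a signal flow is a directed graph from inputs to outputs, and troubleshooting is tracing a path through it.

```python
# A signal flow is essentially a directed graph: each device output feeds one or more inputs.
# Device names below are hypothetical examples.
signal_flow = {
    "Laptop HDMI":    ["Switcher In 1"],
    "PTZ Camera":     ["Switcher In 2"],
    "Switcher In 1":  ["Switcher Out A"],
    "Switcher In 2":  ["Switcher Out A"],
    "Switcher Out A": ["Projector"],
}

def trace(source, flow, path=None):
    """Print every downstream path from a source until we reach an output (no further hops)."""
    path = (path or []) + [source]
    downstream = flow.get(source, [])
    if not downstream:
        print(" -> ".join(path))
        return
    for nxt in downstream:
        trace(nxt, flow, path)

trace("Laptop HDMI", signal_flow)
# Laptop HDMI -> Switcher In 1 -> Switcher Out A -> Projector
```

A real diagram layers on cable types, connector labels, and rack locations, but the structure you follow when troubleshooting is the same.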

Over the past few weeks, we’ve been testing Lucidchart to see if it’s capable of eliminating some of the frustrations with other software-based signal flow products. First, Lucidchart is web-based, so it’s not a piece of software you need to download and manage. If you have a web browser (Windows, Mac, or Linux), you can work on your project from the office, at home, or on vacation… because we all love working during our vacation.

The platform is easy enough for a novice user to pick up after watching a few 5-10 minute videos. But the true power comes from the ability to share a design. By pressing the Share button at the top, you can share your design with clients in a “read-only” mode, so they can see, but not modify, the design. You can also share the design with collaborators to speed up the process. And because everyone stays up to date on the design, you aren’t sending PDFs of the drawings around. If you’ve ever attempted to incorporate change requests from the initial release of a drawing when you’re already three or four versions ahead… you’ll understand the appeal of real-time environments.

The only negative we see is that we are required to design our own AV hardware blocks. While this is somewhat time-consuming, once you create a block, you never need to re-create it.

Check out a quick design we created!

New Machine Caption Options Look Interesting

We wrote in April of last year about the impact of new AI and machine learning advances in the video world, specifically around captioning. A little less than a year later, we’re starting to see the first packaged services that leverage these technologies and make them available to end users. We’ve recently evaluated a couple of options that merit a look:

Syncwords

Syncwords offers machine transcriptions/captions for $0.60 per minute, and $1.35 per minute for human-corrected transcriptions. We tested this service recently and the quality was impressive. Only a handful of words needed adjustment on the 5-minute test file we used, and none of them seemed likely to significantly interfere with comprehension. The recording quality of our test file was fairly high (low noise, words clearly audible and clearly enunciated).

Turnaround time for machine transcriptions is about one third of the media run time on average. For human-corrected transcriptions, the advertised turnaround time is 3-4 business days, but the company says the average is less than 2 days. A rush human transcription option is available at $1.95, with a guaranteed turnaround of 2 business days and, according to the company, average delivery within a day.

Syncwords also notes edu and quantity discounts are available for all of these services, so please inquire with them if interested.

Sonix.ai

Sonix is a subscription-based service with three tiers: Single-User ($11.25 per month plus $6.00 per recorded hour, or $0.10/minute), Multi-User ($16.50 per user/month plus $5.00 per recorded hour), and Enterprise ($49.50 per user/month, pricing available upon request). You can find information about the differences among the tiers here: https://sonix.ai/pricing
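For a rough sense of how these prices compare per minute, here is a back-of-the-envelope calculation using only the list prices quoted above (edu and quantity discounts, and heavier monthly usage, would change the math):

```python
# Back-of-the-envelope cost comparison for a hypothetical 60-minute recording,
# based solely on the list prices quoted above (discounts not included).
MINUTES = 60
HOURS = MINUTES / 60

syncwords_machine = 0.60 * MINUTES        # $0.60/minute, machine captions
syncwords_human   = 1.35 * MINUTES        # $1.35/minute, human-corrected
sonix_single      = 11.25 + 6.00 * HOURS  # monthly subscription + $6.00/recorded hour
sonix_multi       = 16.50 + 5.00 * HOURS  # one seat per month + $5.00/recorded hour

for label, cost in [
    ("Syncwords machine", syncwords_machine),
    ("Syncwords human-corrected", syncwords_human),
    ("Sonix Single-User (subscription amortized over one hour)", sonix_single),
    ("Sonix Multi-User (one seat amortized over one hour)", sonix_multi),
]:
    print(f"{label}: ${cost:.2f} total, ${cost / MINUTES:.3f}/minute")
```

Note that amortizing the subscription over a single hour overstates Sonix’s effective per-minute cost for anyone captioning more than an hour per month.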

The videos in the folder below show the results of our testing of these two services alongside the built-in speech-to-text engine currently utilized by Panopto. To be fair, the service currently integrated with Panopto is free with our Panopto license, and for Panopto to license the more current technology would likely increase their costs and ours. We do wonder, however, whether it is simply a matter of time before state-of-the-art services such as those featured here become more of a commodity:

https://oit.capture.duke.edu/Panopto/Pages/Sessions/List.aspx?folderID=4bd18f0c-e33a-4ab7-b2c9-100d4b33a254

 

Rev Adds New Rush Option

Rev.com’s captioning services have been in wide use at Duke for the last couple of years, in part because of their affordability (basic captioning is a flat $1.00/minute), the generally high accuracy of the captions, and the overall quality of the user experience Rev offers via its well-designed interfaces and quality support. Quick turnaround time is another factor Duke users seem to appreciate. While the exact turnaround times Rev promises are based on file length, we’ve found that most caption files are delivered the same or next day.


For those of you who need guaranteed rush delivery above and beyond what Rev already offers, the company just announced it now offers an option that promises files in 5 hours or less from order receipt. There is an additional charge of $1.00/minute for this service. To choose this option, simply select the “Rush My Order” option in desktop checkout.

If any of you utilize the new rush service, we’d love to hear how it goes. Additionally, if you have any other feedback about your use of Rev or other caption providers, please feel free to reach out to oit-mt-info@duke.edu.

Zoom Room Integration with External AV Integrators

Creating a basic Zoom Room couldn’t be easier. You get a computer, connect a display, keyboard, and mouse, and then install the Zoom Room software on the computer. Once you register the room and iPad (or another control device capable of running Zoom Room), you’re in business. In fact, a few weeks back, I had a “WOW!” moment when I installed a Zoom Room in under five minutes. Granted, they had the computer ready to go, and an iPad at the ready. This is the utopian AV setup many groups have been looking for.

But… what happens when you attempt to integrate a Zoom Room in an existing space with a more-robust traditional AV system? Oh, and the AV system was installed by an external AV integrator. Well, things become a bit more interesting, complex, and expensive. We sat down with a local AV integrator to sketch out just what this would look like in an existing space, as this is a bit of a shift in the industry.

The Easy Part
The content and video part is rather straightforward. The Zoom Room interface will need to be fed to the AV system, which will route the signal to the display or projector. A camera and content feed will need to be fed from the AV system to the Zoom Room, which will most likely require a few dongles (again, this is rather easy). If you’re lucky, your AV system will have the extra capacity (extra HDMI inputs/outputs) to handle this upgrade without the need for additional cards or splitting video signals with a distribution amplifier.

The Not So Easy Part
Now comes the interesting part… audio. Zoom Rooms shine when the computer manages everything (cameras, mics, speakers, etc.), but when deploying a Zoom Room in an existing space, the audio needs to be integrated so that the room’s acoustic echo cancellation (AEC) doesn’t play havoc with the Zoom Room’s echo cancellation. It’s usually easy to spot an issue, as the audio in the Zoom Room will have an “elastic” or “warble” sound, which usually ramps up when the conversation speeds up. For this part, you really need someone who understands audio and the room’s audio programming.

Also, when integrating a Zoom Room, you’ll need to decide how you’d like to handle the control of the Zoom Room. Some touch panels are capable of switching between the Zoom Room interface and an existing program, but that may be a bit too complex for some users. The alternative is to have two control systems in the room, one for the AV system, and one for the Zoom Room. This setup isn’t ideal.

Pro Tips for Integrating a Zoom Room into an Existing AV Space

  • Work with a local AV consultant to give you a general sense of how difficult the integration will be. Does your existing system have extra capacity? Will your existing audio configuration be compatible with Zoom? How many screens do you plan on using? Etc. etc. etc. (Psssss, if you work at Duke, you have a group on campus that offers that service for FREE!!!). They will be able to detail a base cost associated with the install and may be able to sketch out the design upgrade you can pass along to the AV integrator.
  • Pass that design sketch to your AV integrator. They will most likely have additional questions, such as: Who will support the Zoom Room, who is buying the computer, how will users interface with the Zoom Room, etc.
  • Get a quote from the AV integrator.
  • Approve the project and install!

We are actively monitoring a number of spaces integrating Zoom Rooms, so stay tuned for updates over the coming months.

Warpwire Now Hosts Content for Apple Podcasts

With its commitment to innovation and fresh ideas, it might be too early to call Warpwire an “old dog,” but the company definitely learned a cool new trick recently that has expanded the ways Warpwire can be used at Duke. When we learned that Apple was migrating the content Duke had been hosting in what was formerly called iTunes U to a new space called Apple Podcasts, and would no longer support hosting media files on Apple servers, we needed to find an external RSS feed provider and a new publishing workflow for Duke’s vibrant podcast community. Around this same time, Warpwire was improving its audio support, and when we approached them with this challenge, they enthusiastically made the changes needed for their product to serve this need.

Duke on Apple Podcasts

 

Some of the key changes include:

  • Support for the album art required by Apple Podcasts for feeds and for media
  • Support for metadata in the format required to work with iTunes (Warpwire also now supports Dublin Core metadata)
  • The ability to change the author of a podcast feed to a non-NetID value (i.e., the name of a Duke organization)
  • Support for formatting text that appears in the Description field for media files. This allows content owners, for example, to include text transcripts for their podcast files that will be available to viewers consuming content through iTunes.

A KB article is available that walks Duke podcasters through the process of creating an RSS feed in Warpwire and publishing it in Apple Podcasts: https://duke.service-now.com/kb_view.do?sysparm_article=KB0028063. If you have any questions or need assistance working with your podcasts in Warpwire, you can contact the OIT Service Desk at https://oit.duke.edu/help

Blue Yeti Nano

One of the most overlooked technical aspects of in-office or at-home online teaching is audio capture. AV folks are quick to recommend $100-$200 webcams to significantly improve the video quality and flexibility of the teaching environment. But when it comes to audio, many seem content delegating sound capture to the built-in microphone of the webcam… or worse, the built-in microphone of the laptop or desktop (shiver!). The reality is that in most online teaching environments, the audio is as important as, if not more important than, the video. Consider this: if you are watching a do-it-yourself YouTube video and the video is “OK-ish” (good enough to follow along) but the audio is good, you are still likely to follow along and learn from the recording. But if the video is amazing and the audio is poor, it doesn’t take long before you move on to the next offering. The same is true for online teaching.

If you ARE looking to enhance your audio (psssst, your students will thank you), Blue now offers the Blue Yeti Nano. The Nano is a stylish desktop USB microphone designed for those who desire high-quality (24-bit/48kHz) audio for quasi-professional recording or streaming: podcasts, vlogs, Skype interviews, and online teaching (via WebEx, Zoom, etc.). At 75% of the size of the original Yeti and Yeti Pro, the Yeti Nano is a bit more “backpack friendly.”

How will this improve my online teaching?
The Blue Nano has a few key features that will significantly improve your audio. First, the Nano uses a condenser microphone that is far more capable than the tiny built-in mic in your laptop or webcam. Without going into too much technical detail, the condenser capsule in the Nano is more complex, offers more sensitivity, and produces a more natural sound. Needless to say, this will blow your laptop’s built-in mic away.

Second, your built-in mic is most likely omnidirectional (it picks up sound in every direction). The Nano CAN be set to omnidirectional (ideal when you have a conversation with 3+ people around a table), but it also offers a cardioid polar pattern. This means that when you are in front of the mic, you sound amazing, and sounds that aren’t in front of the mic are less prominent (ideal for teaching).

Third, the Blue Nano has a built-in mute button on the front of the mic. This may seem rather basic, but fumbling around for a virtual mute button when you have a PowerPoint, chat screen, etc. etc. open can be a pain. One quick tap of the green circle button on the front and the mic mutes.

At $99, the Blue Nano is a bit of an investment, and while you may not notice the difference yourself, the people on the other side of the conversation will thank you.

Logitech Rally – Sneak Peek

Logitech offered a sneak peek at its soon-to-be-released Rally, a USB-connected video conferencing solution. While Logitech has had somewhat similar offerings in the past (e.g., the Logitech Group), the Rally is a bit of a game changer, as it competes more directly with the likes of Cisco, Extron, and Crestron in the mid-sized conference room AV hardware market.

Out of the box, the $1,999 Logitech Rally kit has two hubs: one that sits behind the display(s) and one that is mounted under a table or in a rack. The two hubs are connected by a single Cat 6 cable that can be roughly 50 meters long. The table hub sports two HDMI inputs, a USB connection for a laptop/desktop, and a connection for the mic pod (you can have up to seven mic pods connected to the system). The display hub has a connection for the pan/tilt/zoom camera, one (or two) speakers depending upon your configuration, and two HDMI outputs for the displays. So, for under $2K, you have all the AV hardware you need, minus the displays and computer, for a reasonably large conference room or teaching space. Combine the Rally with WebEx, Skype for Business, or Zoom Rooms, and you have an impressive turnkey AV solution for under $7K.

It’s also worth mentioning that the Rally overcomes a significant limitation of USB. Most “slower” USB devices have a cable length limit of 5 meters (just shy of 16.5′), and for high-speed devices (e.g., the PTZ camera in this configuration), it’s 3 meters (under 10′). Once you factor in table and monitor height, that doesn’t give you much to work with. But because the Rally uses a Cat 6 cable between the hubs, you have a considerably more flexible system while still using standard USB.

Here is a quick sketch of what a dual-screen Logitech Rally Zoom Room might look like.

Again, this was a pre-release Logitech Rally, so we look forward to getting our hands on a shipping unit in the coming months, and we will be keeping an eye on the platform.

Crestron NVX Training

Duke is considering deploying Crestron’s NVX network-based AV solution in a unique active learning/gaming environment, so we sent a few members of Duke’s AV community to attend Crestron’s NVX Design and Application (DM-NVX) and DigitalMedia Networking Certification (DM-NVX-N) classes. The class was unusual in that it was highly compressed to accommodate our group’s schedule and needs.

Why NVX?
Crestron’s NVX platform is an AV over IP solution that replaces the need for more traditional 8×8, 16×16, 32×32, 64×64 and 128×128 DM switchers. NVX is more flexible, scalable, and brings AV into parity with modern IT practices. In essence, it’s the future of hardware-based AV.

The class started as many Crestron classes do, with general introductions, some background on the devices, and where to start when researching help (psss, it’s Crestron’s website)… and from that point on, we were thrown headfirst into the deep end of networking. Yes, very little of the class had to do with traditional audio or video. We covered the OSI model, IP addressing, subnet masks, port numbers, IP transport protocols, hubs, switches, routers, and DNS. For those with a networking background, this was a nice refresher course. But for more traditional AV folks, networking can sound like a foreign language. I won’t bore you with the details, but the first slide deck was over 180 slides, and the information was dense. After being bombarded with all of that information for roughly six hours, we took our first test. Most everyone passed on the first try… even if it was by a single question.
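If subnet masks were the part that felt most like a foreign language, Python’s standard ipaddress module is a handy way to check your understanding of the basics the class covered. The addresses below are purely illustrative, not anything Crestron- or Duke-specific:

```python
import ipaddress

# A hypothetical AV VLAN and two NVX endpoints (addresses are made up for illustration).
av_vlan = ipaddress.ip_network("10.20.30.0/24")   # /24 = subnet mask 255.255.255.0
tx_unit = ipaddress.ip_address("10.20.30.11")     # NVX transmitter
rx_unit = ipaddress.ip_address("10.20.31.12")     # NVX receiver, note the different third octet

print(av_vlan.netmask)              # 255.255.255.0
print(av_vlan.num_addresses - 2)    # 254 usable host addresses on the VLAN
print(tx_unit in av_vlan)           # True  -> same subnet, no router needed
print(rx_unit in av_vlan)           # False -> traffic must cross a router or L3 switch
```

The same kind of check (is this endpoint on the subnet I think it is?) is often the fastest way to explain why one NVX box can see another and a third cannot.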

After the test and a bit of time to recover, we started a hands-on exploration of the hardware. We started by connecting one NVX directly to another, setting one as a transmitter and the other as a receiver. After a few minutes, we had a very basic AV system! The next phase was to connect the two NVX units to a local switch. That took a bit of switch configuration, but again, it was easy. As we started the second day, we connected our local switches to a core switch so we could share any of our NVX transmitters with any of the receivers. While more complicated, it wasn’t that difficult to configure. During the final hours of the last day, we chatted about programming for the NVX and how and why you may want to consider DigitalMedia XiO Director, a virtual switching appliance Crestron offers to simplify programming for more complex NVX setups. We had another test, and the class was over.

A few takeaways:

  • NVX, or AV over IP, is here to stay and AV groups should get comfortable with the future
  • While AV folks don’t need to throw away their old skills, networking is a core part of the future of AV
  • Start befriending your networking folks… today
  • AV over IP has a range of network security concerns, so you should also befriend your network security folks
  • The future is exciting and complicated. Lean into the new way of doing things (or at least understand them)