This past Thursday, Jack D’Ardenne provided the Duke Digital Media Community (DDMC) with an overview of Duke’s Internet Protocol Television (IPTV) offering, CampusVision. The platform features approximately 135 DirecTV channels along with several internal channels from Duke Athletics and Duke Chapel. While IPTV is CampusVision’s primary purpose, the platform can also handle a range of signage and AV-related tasks. Specifically, the more expensive of the two CampusVision players can act as a rudimentary AV switcher, which could come in handy in locations where you may want to watch the next basketball game yet don’t want to install an expensive or complicated AV system to manage the area. CampusVision is also capable of emergency notification, so in theory you could switch your displays over when an alert goes out. Visit the CampusVision page to request additional information on the platform.
One of the best aspects of being a Duke University Digital Media Engineer for the Office of Information Technology is that I can regularly attend manufacturer-sponsored AV training sessions related to projects where I may not be directly involved. Learning about new platforms is an exciting opportunity to compare and contrast our existing offerings while exploring the new or unique features a platform brings. Duke is no stranger to BrightSign hardware: we’ve been deploying rebadged BrightSign decoders and encoders for CampusVision, Duke’s Internet Protocol Television (IPTV) offering, for years. But we’ve never used BrightSign’s own hardware and software on a project, until now.
First and foremost, BrightSign makes hardware media players. As of this writing, they offer eight different players in a variety of configurations (some display 1080p video, others play 4K, and audio capabilities further differentiate the models). Some of their players have HDMI encoders, which can come in handy in a wide range of environments. Most people like BrightSign hardware because it’s an alternative to installing a computer, where you need to maintain the operating system, applications, and so on. The players perform a simple, yet expanding, set of functions, and they do it well.
For the project in question, Duke has installed an 18-display video wall in a 6 x 3 configuration. Currently, it can display the output from either a Windows or a Linux computer in a “left nine screens, right nine screens” configuration, but more flexibility (and fewer computers) is the desired outcome. The training BrightSign provided covered setting up the boxes and adding them to the BrightSign Network (a cloud service BrightSign offers). Overall, the setup was easy, and we’re looking forward to the next training, where we’ll go over uploading content and controlling the devices. Stay tuned!
For one of our online courses, we wanted to include video testimonials in which former students discuss how the class prepared them for the real world. The only problem was that some of the former students we wished to talk to lived in California, which isn’t particularly conducive to a quick recording session in our studio on campus. Instead, we used the video conferencing tool Zoom to facilitate the call, and I used Camtasia to make a screen recording of the interview. While the concept is simple, I found some tips that can make the execution feel a bit more professional.
First, the basics of remote video recording still apply. The subject sat at a desk that faced a window which provided a lot of natural light. It was also around 7am in his time zone so it was pretty quiet as well.
In some scenarios, to get the best possible video quality, I’ll ask the subject to record themselves with an application like QuickTime and then send me the video file. While this bypasses the compression of streaming video and screen capture, it comes with a couple of drawbacks. First, as the video producer, I don’t have direct control over the actual recording process, which is a risk. Second, subjects are usually doing you a favor just by agreeing to the interview, and the less you ask of them, the better.
Ruling this option out, there are two other choices: Zoom’s built-in recording tool, or a third-party screen capture tool like Camtasia. Each has its pluses and minuses. Zoom’s built-in tool lets the user simply hit record within the interface and save the file either to their local computer or to the cloud. This generates both a video file and an audio-only file. However, if the meeting unexpectedly shuts down or the conversion process is interrupted, the recording files can become corrupted and unrecoverable. With Camtasia, the recording is isolated from the conferencing tool, so I can better trust that it will record successfully, even if the call drops.
Recording with Camtasia does present another problem: anything that shows up on my screen, be it an email notification or my mouse moving and activating the Zoom room tools, is recorded as well. Zoom’s local recording tool captures just the video feed.
For the purposes of this video, I would be showing only the subject and would edit out the interviewer’s questions. For this reason, I wanted to make sure Zoom gave me only the subject’s video feed and did not automatically switch feeds based on who was talking, which it does by default as part of the Active Speaker layout. By using the Pin function, I can pin the subject’s video feed to my interface so that I see only the subject’s video, whether I record by screen capture or by local recording. This won’t affect other participants’ views, but it’s also important to note that it won’t affect the cloud recording view either.
While facilitating the interview, I muted my microphone to ensure no accidental sounds might come from my end. And because we would be editing out the interviewer’s questions, we coached the subject to rephrase each question in his answer. For example, if we asked “Why is programming important to you?” the subject might start their response with “Programming is important to me because…”
Ultimately, it was just a simple matter of starting the video conference, pinning the subject’s video, and hitting record on Camtasia. From there I could just sit back while the interviewer and subject spoke. Like a lot of video production, proper planning and research will make your job a lot easier when it’s actually time to turn the camera on.
One portable field encoder that looks like a powerful way to deliver a live broadcast is the LiveU Solo. The LiveU has options to interface directly with Facebook Live as well as a number of other destinations. It supports several connection types, including Ethernet and WiFi, and has two slots for 3G/4G cellular modems. Any of these connections can be bonded together, so you essentially get an aggregate of all the connections the device can manage, capped at a bit rate of 5.5 Mbps. This makes the LiveU ideal for any situation in which you would otherwise be relying on a single connection point that you were worried might not operate reliably on its own.
A version with SDI retails for about $1,500.00, and there is an HDMI-only version for $995.00.
We had an opportunity to test the Meeting Owl from Owl Labs this past week and wanted to share our thoughts on this unique conference room technology. The $799 webcam, mic, and speaker all-in-one unit is intended to sit at the center of the conference room table. What makes the Meeting Owl worth nearly $800? If I were reviewing the device simply on the speaker and mic array, I’d say it isn’t all that exciting an offering; there are plenty of sub-$200 mic/speaker combos that would perform as well or better. But it’s the Meeting Owl’s unique 360 camera at the top that makes the unit stand out from its peers.
When sharing video, the device segments the camera feed into zones. At the top, there is a side-to-side 360-degree view of the room, and below is either one, two, or three “active speaker” zones intelligently selected by the Meeting Owl. So, when two people in the room start talking, the camera segments the lower area of the feed to accommodate the conversation. Overall, we found the camera’s intelligence to be rather good. Infrequently, it would pause a bit too long on a speaker who had stopped talking, or incorrectly divide up the lower section, prioritizing the wrong person… but considering the alternative is physically moving the camera, it’s a nice feature that livens up the meeting experience.
Pros:
- Incredibly easy to set up and configure (under 10 minutes)
- 360 camera works as advertised
- Good quality internal mics
- Platform agnostic (works with Skype, WebEx, Zoom, Meetings, etc.)
Cons:
- The image quality isn’t great (it’s a 720p sensor, so the sections are only standard definition, or worse, and it shows)
- Split screen can be distracting when in overdrive (sometimes it moves too slowly, other times it seems to move too quickly… this may be improved with a firmware update)
- At $799, Owl Labs is in the Logitech MeetUp zone. While the products are rather different, each has its pros and cons depending upon the expectations of the user.
Overall, we enjoyed the product and can see it being deployed in a range of spaces. It also signals a new era of intelligent conferencing technologies. The local group at Duke that purchased the device also plans to deploy it in a classroom where Zoom will be used for hybrid teaching sessions (some students local, others remote). It will be interesting to see how the far side reacts to the automated pan/tilt of the camera and whether it can keep up with some of our most active faculty. My primary complaint about the device is that the image is too blurry. Also, the 360 lens tends to center faces in the lower image area; ideally, it would crop to a few inches above the top of the active speaker(s)’ heads. Perhaps we’ll see an HD or 4K version in the future that addresses a few of these shortcomings.
This year, I had the opportunity to represent Duke at the Streaming Media West Conference by participating in the panel “Best Practices for Education & Training Video.” Having seen the growth and development of our online course production over the past six years, it was fascinating to see the approaches that other institutions were pursuing.
The University of Southern California has been streaming interactive lectures over Facebook Live. The approach utilizes a blend of green-screen lectures, interviews and discussions, and instantaneous feedback from viewers. A sample can be viewed here: https://vimeo.com/scctsi/review/300228463/c6fc3030f5. Gary San Angel, the Distance Education Specialist at USC’s Keck School of Medicine, noted that the live and interactive format of the lecture significantly increased the viewer engagement compared to their typical video output. Students watched more of the video and had better retention.
USC Price’s Director of Video Productions and Operations Services, Jonathan Schwartz, largely focused on his team’s live-streaming workflow. They use a mix of encoders, content delivery networks, and publishing platforms, and their commitment to production quality and value had me considering how live streaming could be incorporated into Duke’s online course development.
While Duke has a great lecture capture system in DukeCapture, the focus of our online courses is in offline production where we have instructors set aside time to record standalone lectures in the studio for an online audience. This ensures that each video is focused on a specific learning objective and conveys that in a short amount of time. Live classroom recordings don’t usually lend themselves well to this priority, but both schools at USC have found ways to work around that limitation.
By working with instructors to design their classroom material with an online audience in mind, and by outfitting those classrooms’ lecture capture infrastructure with live-tracking and live-switching abilities, we could create a workflow that eases the bottleneck of the persistently busy professor while meeting the ever-growing demand for high-quality educational video.
One of the most overlooked technical aspects of in-office or at-home online teaching is audio capture. AV folks are quick to recommend $100-$200 webcams to significantly improve the video quality and flexibility of the teaching environment. But when it comes to audio, many seem content delegating the sound capture to the built-in microphone of the webcam… or worse, the built-in microphone of the laptop or desktop (shiver!). The reality is, in most online teaching environments, the audio is as important as the video, if not more so. Consider this: if you are watching a do-it-yourself YouTube video and the video is “OK-ish” (good enough to follow along) but the audio is good, you are still likely to follow along and learn from the recording. But if the video is amazing and the audio is poor, it doesn’t take long before you move on to the next offering. The same is true for online teaching.
If you ARE looking to enhance your audio (psssst, your students will thank you), Blue now offers the Blue Yeti Nano. The Nano is a stylish desktop USB microphone designed for those who want high-quality (24-bit/48kHz) audio for quasi-professional recording or for streaming podcasts, vlogs, Skype interviews, and online teaching (via WebEx, Zoom, etc.). At 75% of the size of the original Yeti and Yeti Pro, the Yeti Nano is a bit more “backpack friendly.”
How will this improve my online teaching?
The Blue Nano has a few key features that will significantly improve your audio. First, the Nano uses a large-diaphragm condenser microphone rather than the tiny built-in mic you’ll find in your laptop or webcam. Without going into too much technical detail, the condenser capsule in the Nano is more complex, more sensitive, and produces a more natural sound. Needless to say, it will blow your laptop’s built-in mic away.
Second, your built-in mic is most likely omnidirectional (it picks up sound from every direction). The Nano CAN be set to omnidirectional (ideal for a conversation with 3+ people around a table), but it also offers a cardioid polar pattern. This means that when you are in front of the mic, you sound amazing, while sounds that aren’t in front of the mic are less prominent (ideal for teaching).
Third, the Blue Nano has a built-in mute button on the front of the mic. This may seem rather basic, but fumbling around for a virtual mute button when you have a PowerPoint, chat screen, etc. etc. open can be a pain. One quick tap of the green circle button on the front and the mic mutes.
At $99, the Blue Nano is a bit of an investment, and one whose benefit you may not notice yourself, but the people on the other side of the conversation will thank you.
Let’s face it… wirelessly sharing content can be a pain. Even in a very basic AV setup such as an Apple TV, you need to know the name of the Apple TV (which is sometimes named very differently from the room you are in), join it, and then enter a passcode. Sure, it takes only a minute or so, assuming you’re familiar with the setup, but it’s time spent not focusing on your presentation.
“One touch” connection solutions are available from a number of vendors, but they usually require downloading special software to enable that feature, or that the “one touch” is only one touch after an extensive setup. Why can’t it just be built into the core application? Why does it need to be so difficult?
Surprise: Zoom Rooms has that feature built in. You enter the physical room, open the Zoom app, press “Share screen,” and presto, you’re connected to the Zoom Room and sharing content in seconds. It’s that easy. No need to enter a ten-digit code or find the Zoom Room in AirPlay… it just works.
This is really intended for when you simply want to share your screen, as you lose some of Zoom’s more advanced functionality in “quick connect” mode. For example, when you use “Share screen,” you lose the ability to annotate, pause the share, or share only specific applications. That said, when additional functionality was needed (say, adding a remote participant or using more advanced features), I could simply invite myself to the Zoom Room, and Zoom handled the transition gracefully.
It’s the little things that will drive user adoption, and this is a nice feature.
We’ve recently been exploring the potential of 360 video production and how it can best be utilized for our future projects. To view the 360 video, we’ve been using an Oculus Go, a wireless VR headset that requires no computer or phone. Ideally, we could just hand the Go to a viewer and they could immediately watch one of our videos. One challenge we found is that the Go does not currently offer a way for those outside the headset to see what the viewer sees (though apparently this feature is in development). With a bit of googling and trial and error, we successfully mirrored the display on a computer.
A quick proof of concept can be viewed here: https://warpwire.duke.edu/w/lD8CAA/
I mostly worked from this guide from Pixvana, but to quickly summarize:
- I downloaded the Android Debug Bridge (adb) and saved the folder in my user folder on my Mac Pro.
- I made sure my copy of VLC Media Player was up to date.
- I put the Oculus Go in Developer mode (which requires setting up an organization account with Oculus).
- I made sure the Go and my computer were on the same WiFi network.
- With the Go plugged into my computer via USB, I obtained the Go’s IP address by typing into the terminal “adb shell ip route”.
- I entered the command “adb tcpip 5555”.
- I unplugged the Oculus Go.
- I entered the command “adb connect IPADDRESS”, with IPADDRESS being the address found in the earlier “adb shell ip route” step.
- I entered the command
./adb exec-out "while true; do screenrecord --bit-rate=2m --output-format=h264 --time-limit 180 -; done" | "/Applications/VLC.app/Contents/MacOS/VLC" --demux h264 --h264-fps=60 --clock-jitter=0 -
From there, VLC displayed the streaming video output from the Oculus Go. There was noticeable lag (3 seconds or more), but otherwise it worked pretty seamlessly. The only trouble is that it’s tough to view the mirrored stream on the desktop if you still have the headset on!
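For convenience, the terminal steps above can be collected into a small helper script that prints the commands in order with your headset’s IP address filled in, ready to paste into a terminal. This is just a sketch of the guide’s workflow: the IP address shown is hypothetical, and the adb location (./adb in your working folder) and the standard macOS VLC path are assumptions you’d adjust for your own setup.

```shell
#!/bin/sh
# Prints the Oculus Go mirroring commands from the steps above, with a
# headset IP substituted in, so they can be pasted into a terminal in order.
# Assumptions: adb sits in the current directory (./adb) and VLC is at the
# standard macOS path, as in the guide.
print_mirror_commands() {
  go_ip="$1"   # found with "adb shell ip route" while the Go is on USB
  vlc="/Applications/VLC.app/Contents/MacOS/VLC"
  echo "./adb tcpip 5555"
  echo "./adb connect $go_ip"
  echo "./adb exec-out \"while true; do screenrecord --bit-rate=2m --output-format=h264 --time-limit 180 -; done\" | \"$vlc\" --demux h264 --h264-fps=60 --clock-jitter=0 -"
}

# Example with a hypothetical address:
print_mirror_commands "192.168.1.50"
```

Run the first two printed commands while the Go is plugged in over USB, unplug it, and then run the last command to start the VLC stream.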
I also tested an app called Vysor. Vysor largely eliminates the terminal commands and is easier to use, but it plays an ad every 30 minutes. I did notice, though, that the lag is significantly less noticeable. A paid upgrade also allows higher-quality mirroring and a shareable link for people to view the stream remotely.
Wirecast recently announced a new cloud-based service that supports live captions based on ASR (automatic speech recognition) and an RTMP re-streaming service. Both work in conjunction with Wirecast 10. This means that if you are using Wirecast 10, you can automatically caption your videos and simultaneously push them to another provider like YouTube or Facebook Live. This is an interesting development, because we are seeing the entrance of new ASR platforms like IBM Watson that claim much greater accuracy than earlier-generation ASR technologies. I’m not sure which platform Wirecast is leveraging, but we’d love to hear from anyone at Duke using Wirecast 10 who is willing to give the 100-minute free trial a go.
It’s a subscription-based service, with fees starting at $25.00/month for re-streaming and $60.00/month for live captions. Detailed information and a link to set up an account and get started can be found here: