This past Thursday, Jack D’Ardenne provided the Duke Digital Media Community (DDMC) with an overview of Duke’s Internet Protocol Television (IPTV) offering, called CampusVision. The platform features approximately 135 DirecTV channels and several internal channels from Duke Athletics and Duke Chapel. While IPTV is the primary purpose of CampusVision, it’s also capable of a range of signage- and AV-related tasks. Specifically, the more expensive of the two CampusVision players can act as a rudimentary AV switcher, which could come in handy in locations where you may want to watch the next basketball game… yet don’t want to install an expensive or complicated AV system to manage the area. CampusVision is also capable of emergency notification, so in theory, you could switch over your displays when an alert goes out. Visit the CampusVision page to request additional information on the platform.
While we’ve worked with the Insta 360 Pro fairly extensively in the past, we hadn’t yet tested its capability for livestreaming. In particular, I was curious about viewing the livestream from within our VR headset, the Oculus Go.
Though there are a few ways you could set up the livestream, I found the following to be the most reliable. You can follow along with this video capture of setting up the stream. After connecting the camera to my local Wi-Fi, I updated the Wi-Fi settings on the camera to be in Access Point (AP) mode. I then connected the camera via cable to an Ethernet port, which generated a new IP address for the camera. I plugged that IP address into the camera control app on my laptop, which was on the same local Wi-Fi network, and got connected to the camera. I could theoretically have streamed over Wi-Fi alone without plugging into Ethernet, but I found the connection wasn’t strong enough when I later actually went to livestream. I could also have used the camera control app on an iPad or other mobile device, but using a laptop to set up the livestream was much easier since I could access both the camera application and the livestream host on the same device at the same time.
With the camera control app on the laptop connected to the camera, I then went over to YouTube to set up the livestream host. YouTube makes this really easy – there’s an icon right on the homepage that allows you to “Go Live.” From here, I set up the stream: I named it and made sure it was unlisted so that only I knew where to access it. YouTube provided me with a URL and stream key to plug into my camera control app. Back in the camera control app, I made sure it was set to Custom RTMP server and plugged in the stream URL and key from YouTube. I ran the video feed at 4K, 30 FPS, and a 15 Mbps bitrate. I then hit the “Live” button to send the signal to YouTube. After a few moments, the feed came through, I toggled on the 360 video option, and I could then go live from YouTube to take the stream public. From real life to the live feed, I estimated about a 10-15 second lag.
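For planning purposes, those stream settings imply a fair amount of upload data. A quick back-of-the-envelope calculation (a sketch using only the 15 Mbps figure from the settings above):

```python
# Rough data-usage estimate for the livestream settings above:
# 4K @ 30 FPS, encoded at 15 Mbps.
BITRATE_MBPS = 15  # megabits per second, from the camera control app settings


def stream_gigabytes(minutes, bitrate_mbps=BITRATE_MBPS):
    """Approximate upload volume in gigabytes for a stream of the given length."""
    megabits = bitrate_mbps * minutes * 60
    return megabits / 8 / 1000  # megabits -> megabytes -> gigabytes


# A one-hour stream at these settings pushes roughly 6.75 GB upstream.
print(round(stream_gigabytes(60), 2))  # 6.75
```

That sustained upstream rate is also a reasonable sanity check on why the Wi-Fi-only connection struggled while the Ethernet uplink held up.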
Accessing the stream from inside the Oculus Go is, like most things in a VR headset, straightforward if not exactly seamless. Within the headset, I opened the YouTube app, searched for my channel, and accessed the stream from my videos there. I could alternatively have input the URL manually into the browser, but that process is a bit tedious when wearing the headset. Watching a 15-second-old version of myself from within a VR headset is probably the closest thing I’ve ever had to an out-of-body experience.
As long as I’ve been working with 360 video, one element has always been out of reach: interactivity. Particularly when viewed through a headset, the immersive nature of 360 video lends itself well to exploration and curiosity. The challenge has always been how to add that interactivity. Neither working with an external vendor nor developing an in-house solution seemed worthwhile for our needs. However, the tool Thinglink now offers an intuitive way not only to augment media with interactive annotations, but to link various media to each other.
Thinglink, as described previously, is a web platform that allows the user to add interactive pop-up graphics onto photos and videos, in 2D or in 360. Duke is piloting the technology, so I took the opportunity to test both the creation and publishing of 360 video through Thinglink.
The creation part couldn’t have been simpler (and in its pursuit of simplicity also feels a bit light on features). I was able to upload a custom 360 video without trouble, and immediately start adding annotated tags. You can see my test video here. There are four primary forms of tags:
- Labels add a simple text box and are best used for… labeling things. This would be useful in a language learning context where you might want to add, say, the Spanish word for “tree” near a tree visible in the video.
- Text/Media tags are fancier versions of labels that include room for a title, description, photo, or external link. This is for cases where you might want to add a little more context to what you are tagging.
- Embeds allow you to insert embed codes. This would typically be a video (from either YouTube or Duke’s own Warpwire) but could include surveys or any other platform that provides you an HTML embed code to add to your website.
- Tour Links allow you to connect individual tagged videos/photos together. If I wanted to provide a tour of the first floor of the Technology Engagement Center, for example, I could start with a video from the main lobby. For the various rooms and hallways visible from the lobby, I could then add an icon that, when clicked, moves the viewer to a new video shot from the perspective of the icon they clicked.
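Under the hood, the tour-link concept is essentially a directed graph of scenes. A minimal sketch of the idea (the class and scene names here are hypothetical illustrations, not Thinglink’s actual API):

```python
# Hypothetical model of a Thinglink-style tour: each 360 scene is a node,
# and each tour-link icon is a directed edge to another scene.
from collections import defaultdict


class Tour:
    def __init__(self):
        self.links = defaultdict(list)  # scene name -> scenes reachable from it

    def add_link(self, from_scene, to_scene):
        """Place an icon in from_scene that jumps the viewer to to_scene."""
        self.links[from_scene].append(to_scene)

    def destinations(self, scene):
        return self.links[scene]


tour = Tour()
tour.add_link("Main Lobby", "Hallway A")
tour.add_link("Main Lobby", "Conference Room")
tour.add_link("Hallway A", "Main Lobby")  # links can point back the other way

print(tour.destinations("Main Lobby"))  # ['Hallway A', 'Conference Room']
```

Thinking of the tour this way makes it easy to plan a shoot: every edge in the graph is a doorway or sightline you need a matching icon (and destination video) for.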
Adding all of these is as simple as clicking within the video, selecting what kind of tag you want, and then filling in the blanks. My only real gripe here is a lack of customization. You can’t change the size of the icons, though you can design and upload your own if you like. The overall design options are also extremely limited. You can’t change text fonts, sizes, etc. There is a global color scheme, which just comes down to a background color, text color, button background color, and button text color. In the “Advanced” settings, you can reset the initial POV that the 360 video starts in, and you can also toggle “Architectural mode,” which eliminates the fish-eye POV at the expense of less overall visibility.
All in all, it’s incredibly easy to set up and use. Sharing is also pretty straightforward, provided you don’t intend to view the video in an actual VR headset. You can generate a shareable link that is public, unlisted, or only visible to your organization. You can even generate an embed code to place the Thinglink viewer within a website. What I was most curious about, however, was if I could properly view a Thinglink 360 video with our Oculus Go headset. In this regard, there’s a lot of room for improvement.
In principle, this use case is perfectly functional. I was able to access one of Thinglink’s demo 360 videos from within the Oculus Go headset and view and interact with the video with no trouble. The headset recognized the Thinglink video as a 360 video and automatically switched to that mode. A reticle in the center of my field of vision worked as a mouse, in that if I hovered directly over a tag icon, it would “click” and activate the icon, negating the need for an external controller. The only issue was that the window activated when I “clicked” on an icon would sometimes be behind me, and I had no idea anything had happened.
When I tried to view and access my own video, however, I had a lot of trouble. From a simple logistics standpoint, the shareable Thinglink URLs are fairly long and are tedious to input when in a VR headset (I made mine into a TinyURL which slightly helped). When I was finally able to access the video, it worked fine in 2D mode but when I clicked on the goggles icon to put the video into VR Headset mode I was met with a simple black screen. The same went for trying to view the video in this mode on my phone or on desktop. I found that after several minutes of waiting, an image from the video would eventually come up. Even when I was able to see something other than darkness, I discovered that the embedded videos were not functional at all in VR mode.
While the functionality is potentially there to create an interactive 360 video tour in Thinglink and view it within a VR Headset, it’s simply not practical at this point. It’s a niche use case, sure, but one that seems within grasp. If the developers can work out the kinks, this platform would really be a gamechanger. For now, interactive 360 video will have to stay on the flat screen for me.
Some of Duke’s Communications staff have been experimenting lately with Otter.ai, a new transcription service that offers 600 free minutes per month, and seem to be enjoying it. Otter, which was started by an ex-Google engineer in early 2019, is an interesting move forward in the captioning and ASR space. Its focus seems to be less on captioning, the specialty of services like Rev.com (widely used at Duke), and more on live recording of meetings via your browser and making searchable transcripts available in a collaborative, team-based environment. I had some problems using Otter to produce a caption file, but it does seem like Otter could be useful for simple transcription workflows, and the idea of using something like Otter to record all your meetings poses some interesting possibilities and questions.
Below is a summary of what I found in my initial testing:
- High accuracy, comparable to other vendors we’ve tested recently utilizing the newest ASR engines
- Interesting collaboration feature set
- Can record your meeting right from within the browser
- Nice free allotment—600 free mins/month (6,000/month for the pro plan; education pricing $5.00/month)
- Includes speaker identification
- If your goal is captions and not just transcriptions, Otter is more limited—it only seems to support export of captions in .srt format (not .vtt, which some of our users, including the Duke Libraries, prefer)
- The .srt I exported in my test was grouped by paragraph, not by line, so it wouldn’t be possible to use it with one of our video publishing systems like Warpwire or Panopto without extensive editing to chunk the file up by line.
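To give a sense of the re-chunking that paragraph-grouped export would need, here’s a rough sketch that splits one long cue into display-length lines and divides the cue’s time span evenly across them. (This is illustrative only—a real tool would split on actual speech timing, and the 40-character limit is my own assumption, not a Warpwire or Panopto requirement.)

```python
# Split one paragraph-length caption cue into shorter cues of at most
# max_chars characters, distributing the cue's duration evenly.
def split_cue(start_s, end_s, text, max_chars=40):
    words, lines, current = text.split(), [], ""
    for w in words:
        if current and len(current) + 1 + len(w) > max_chars:
            lines.append(current)
            current = w
        else:
            current = f"{current} {w}".strip()
    if current:
        lines.append(current)
    # Spread the original time span evenly across the new lines.
    step = (end_s - start_s) / len(lines)
    return [(start_s + i * step, start_s + (i + 1) * step, line)
            for i, line in enumerate(lines)]


long_caption = ("This paragraph-length caption would be far too long "
                "to display as a single subtitle line.")
for cue in split_cue(0.0, 6.0, long_caption):
    print(cue)
```

Even a crude pass like this shows why the export format matters: the text survives fine, but the per-line timing has to be reconstructed.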
Last month, Kontek visited the Technology Engagement Center (TEC) on Duke’s campus to provide an updated overview of their services, introduce new faces at the company, and detail its updated organizational structure. It was also a bittersweet opportunity to say farewell to Billy Morris, a longtime Senior Account Manager for Kontek, on his journey to retirement. Wes Newman kicked off the conversation, discussing how the organization has changed and grown over the years and how he has empowered his team to make Kontek the best it can be. Marques Manning, Director of UX Design & Technology, then spoke about the specific changes that have taken place, with the introduction of more robust commissioning standards, improved internal and external communications, and raised standards when it comes to the user experience for customers.
With the release of a new audio plugin on FX Factory called De-Clipper, we decided to compare our previous audio cleanup tools and see what came out on top.
De-Clipper ($59) by Accusonus is billed as “the world’s first entirely automatic De-Clipper.”
SoundSoap 5 ($150) by Antares is a suite of audio cleaning tools that includes denoise, declicking, and hum removal in addition to the declipper feature.
RX Elements ($129) by iZotope is a basic suite of audio cleaning tools, featuring de-click, de-clip, de-hum and de-noise.
We ran each tool over a couple samples of clipped audio in Final Cut Pro X, the results of which you can find here:
Overall, we found the RX Elements tool to be the most effective by a large margin. Further, within their interfaces, Accusonus’s De-Clipper and SoundSoap offer rather binary options. In SoundSoap, this is literally an on/off switch for the effect. RX Elements, meanwhile, offers more in the way of fine-tuning the various thresholds and gains. This flexibility is perhaps best demonstrated by comparing the interfaces of the various tools:
An important consideration is that while Accusonus’s De-Clipper is a standalone tool, both SoundSoap and RX Elements include a suite of other features and are priced accordingly. In particular, we’ve found the noise reduction abilities in both plug-ins to be quite effective and well worth the additional cost.
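For anyone curious what these tools are actually working against: clipping happens when samples get pinned at (or very near) full scale, flattening the waveform’s peaks. A minimal detection sketch—the 0.99 threshold is an illustrative choice of mine, not a value from any of these plug-ins:

```python
# Flag samples at or beyond a clipping threshold (audio normalized to
# the -1.0..1.0 range). Runs of flagged indices are the flattened peaks
# a declipper must reconstruct.
def clipped_indices(samples, threshold=0.99):
    """Return indices of samples at or beyond the clipping threshold."""
    return [i for i, s in enumerate(samples) if abs(s) >= threshold]


# A sine-like wave whose peak got flattened at full scale:
wave = [0.0, 0.5, 0.9, 1.0, 1.0, 1.0, 0.9, 0.5, 0.0]
print(clipped_indices(wave))  # [3, 4, 5]
```

The hard part—and where the three tools differ—is not finding these regions but interpolating plausible audio back into them.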
Today our Video Working Group meeting was led by Steve Toback and he discussed our review tool, lookat.io.
Lookat is something we all use now in Academic Media Production. It allows us to collaborate more efficiently and clearly with staff on videos. It works by leaving comments directly onto the video so that people can provide feedback easily and the editors can see the exact point in a video an edit needs to be made.
Steve went over how you can upload different versions of one video to a file on Lookat. We also discussed sped-up playback, replying to comments, 360 video support, and drawing on the video to indicate edits. I also didn’t realize that people outside of Duke could leave comments on Lookat, so that was good to learn.
Our discussion also turned to review in general, and we mentioned other tools that provide similar features to Lookat, like Motion rays and Vimeo. You can contact Steve Toback if you’re interested in using Lookat for your projects.
So I got one of those annoying ads in my Facebook feed about this amazing wireless microscope that’s regularly a buhzillion dollars but can be yours today for only $79.99. I actually thought that was a pretty neat idea so I checked out Amazon and found this baby for $43.
To say I’m impressed with this thing is an understatement. Did I mention it only cost $43? I connected it to my Mac, QuickTime X immediately recognized the microscope, and I was recording.
I then stuck a penny under the scope and connected it to a Zoom meeting:
Did I mention this only cost $43?
While it can only capture 720p via USB, it can capture 1080p via wireless. Wireless? Yes. It has its own (unsecured) hotspot. Connect your phone (or multiple phones) to the hotspot, launch the app, and you can capture stills or video to your phone or tablet (Android or iOS).
It comes with a built-in LED light, a cable, a stand, and a cover. And it costs… well, you know.
Possible use cases (beyond primary and secondary school) could include something like DukeEngage wanting to bring a 1000x portable microscope out into the field to capture images and video.
In March, Apple announced the 2nd generation of AirPods. At $159, I was a bit hesitant to purchase them as I didn’t really need yet another thing to keep charged. Sure, the addition of “hey Siri” control, 50% more talk time on a charge, and wireless charging are a nice bump… but those were really “nice to have” features. Also, my $20 set of Bluetooth headphones were working just fine and allowed me to say “Yah, but mine were $20!”
Fast forward six months: my cheaper headphones had gone missing, and I had become somewhat frustrated with the way they connected to my iPhone, Apple laptop, and Apple TV (it was just clunky and time-consuming to switch between devices). With $145 burning a hole in my pocket, I splurged on a new toy. I assumed I’d have immediate buyer’s remorse.
It’s only been 24 hours with the AirPods, but my initial reaction has been incredibly positive. The AirPods act as an extension of my iPhone, MacBook Pro, and Apple TV, with seamless transitions between the devices. The human interface (removing an AirPod pauses the current song or movie) is simply brilliant, and it’s clear Apple has painstakingly woven the AirPods into their entire hardware ecosystem. So far, my few minor “complaints” would be that the mics aren’t as good as my standard iPhone mic. The AirPod mics aren’t bad… I had an hour-long conversation with a family member, and they indicated that the audio was good. They just aren’t great, which was a little surprising… but not really when you realize the mic is basically next to my ear. I guess my other complaint is that there is a $145 hole in my wallet where money once lived.
Overall, I’m excited to see how I’m able to use the AirPods for WebEx, Zoom, Skype, etc. meetings and how they hold up over the coming weeks, months, and hopefully years.
“Senses of Venice” launches today at the Chappell Family Gallery, located at the intersection of the Perkins and Rubenstein Libraries on Duke’s West Campus. This is my team’s second major collaboration with the brilliant folks at Duke’s Art, Art History & Visual Studies department, joined on this project by the Duke University Libraries, our Trinity School’s AV Engineering Team, and a very special collaborator from my past trade: Brad Lewis, the producer of “Ratatouille” and “How To Train Your Dragon 3.”
The centerpiece of the exhibit is a “dual” display consisting of an interactive screen synchronized with a projector on the wall behind it, creating an incredibly immersive experience that can be shared both by the guest using the display and by those gathered around. This was developed in partnership with folks at the University of Padua as well as CamerAnebbia in Italy.
Another wonderful touch added by CamerAnebbia is the use of a “multi-plane” effect in the exhibit’s other interactive screen. The effect was originally used by Walt Disney in his early films, achieved by painting animation on different layers of glass and moving them independently.
The exhibit will be in place through December so please stop by and experience this unique look at this vibrant and historically important city!