ThingLink Pilot at Duke Has Potential for 360 Video, Images

Duke Learning Innovation recently launched a new pilot of a tool called ThingLink. ThingLink offers the ability to annotate images and videos using other images, videos, and text to create visually compelling, interactive experiences. One core use case for ThingLink is to start with a graphic (such as a map) or a photograph as a base and place buttons in strategic places that users can click to expose more information. ThingLinks can also link to other ThingLinks to create structured learning experiences.

ThingLink Example

The screenshot above is from an example project on ThingLink’s “Featured” page by Encounter Edu. In this example, viewers can click on the “+” signs to reveal more information about each portion of the carbon cycle.

While creation of learning objects like these could have wide value for education, one aspect of ThingLink we think DDMC-ers might find intriguing is its AR/VR authoring capabilities. A challenge for 360 video, even with professionally produced material, is that viewers can feel lost clicking around trying to figure out what to look at next. With a tool like ThingLink’s VR editor, you can curate the experience by creating guideposts, and in doing so provide your users with a potentially more rewarding experience as they engage with 360 videos and images.

The OIT Media Technologies production team will be reviewing ThingLink’s VR/AR capabilities and posting their findings to the blog.

If you or others on your team would like to test ThingLink out, you can apply to be part of the pilot here:



2019 Lecture Capture Survey

We’re excited to announce that our 2019 Lecture Capture Survey is complete. We had a chance to take a bird’s-eye view of ten of the leading lecture capture tools and make some observations about general trends in this rapidly evolving product space.

We hope this information will be useful to you. Please feel free to reach out with any questions or comments to

A publicly accessible PDF version of the complete survey can be found here:

-OIT Media Technologies Team

Quick AV Signal Flow with Lucidchart

When collaborating on the design of classroom AV systems, the ability to rapidly sketch, modify, iterate on, and share a signal flow diagram is invaluable for avoiding expensive mistakes before install. But creating signal flow diagrams has traditionally been a challenge for AV technicians: the software is either expensive, overly complicated, or locks the technician in as the single point of modification for all time.

First, what is a signal flow diagram, and why do you need one? A signal flow diagram shows the signal path (audio, video, network, control, etc.) from inputs to outputs for the entire AV system. It’s essentially a blueprint for the system… and would you buy a house that didn’t have a blueprint? With a signal flow diagram, most entry-level technicians should be able to diagnose an AV issue down to the cabling or hardware level. Without one, troubleshooting is difficult for small systems and nearly impossible for larger ones.
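To make the idea concrete: a signal flow can be modeled as a simple directed graph, and troubleshooting amounts to tracing the path from an input to an output, connection by connection. The sketch below is purely illustrative (the device names are hypothetical, and this is not how Lucidchart represents diagrams):

```python
# Illustrative sketch: model an AV signal flow as a directed graph and
# trace the hop-by-hop path from an input to an output, as a technician
# would when troubleshooting. Device names are hypothetical.

AV_SIGNAL_FLOW = {
    "Laptop HDMI": ["HDMI Switcher In 1"],
    "Document Camera": ["HDMI Switcher In 2"],
    "HDMI Switcher In 1": ["Video Matrix"],
    "HDMI Switcher In 2": ["Video Matrix"],
    "Video Matrix": ["Projector", "Confidence Monitor"],
}

def trace_signal(source, destination, flow=AV_SIGNAL_FLOW, path=None):
    """Return the list of hops from source to destination, or None."""
    path = (path or []) + [source]
    if source == destination:
        return path
    for next_device in flow.get(source, []):
        result = trace_signal(next_device, destination, flow, path)
        if result:
            return result
    return None

print(" -> ".join(trace_signal("Laptop HDMI", "Projector")))
# prints "Laptop HDMI -> HDMI Switcher In 1 -> Video Matrix -> Projector"
```

Each hop in the printed path is a cable or device a technician can check in turn, which is exactly the value a signal flow diagram provides on paper.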

Over the past few weeks, we’ve been testing Lucidchart to see if it can eliminate some of the frustrations of other software-based signal flow products. First, Lucidchart is web-based, so it’s not a piece of software you need to download and manage. If you have a web browser, whether on Windows, Mac, or Linux, you can work on your project from the office, at home, or on vacation… because we all love working during our vacation.

The platform is easy enough for a novice user to pick up after watching a few five- to ten-minute videos. But the true power comes in the ability to share the design. By pressing the Share button at the top, you can share your design with clients in read-only mode, so they can see, but not modify, the design. You can also share the design with collaborators to speed up the process. And because this keeps everyone up to date on the design, you aren’t sending around PDFs of the drawings. If you’ve ever attempted to incorporate change requests against the initial release of a drawing when you’re already three or four versions ahead… you’ll understand the appeal of real-time environments.

The only negative we see is that we’re required to design our own AV hardware blocks. While this is somewhat time-consuming, once you create a block, you never need to re-create it.

Check out a quick design we created!

Wirecast 10 Adds Live Captions

Wirecast recently announced a new cloud-based service that supports live captions based on ASR (automatic speech recognition) and an RTMP re-streaming service. Both work in conjunction with Wirecast 10. This means that if you are using Wirecast 10, you can automatically caption your videos and simultaneously push them to another provider like YouTube or Facebook Live. This is an interesting development because we are seeing the entrance of new ASR platforms like IBM Watson that claim to offer much greater accuracy than was possible with earlier-generation ASR technologies. I’m not sure which platform Wirecast is leveraging, but we’d love to hear from anyone at Duke using Wirecast 10 who is willing to give the 100-minute free trial a go.

New Wirecast Cloud Services

These are subscription-based services, with monthly fees starting at $25.00/month for re-streaming and $60.00/month for live captions. Detailed information and a link to set up an account and get started can be found here:


Kaptivo: Digitizing the Whiteboard

Let’s face it… humans like articulating concepts by drawing on a wall. This behavior dates back over 64,000 years to some of the first cave paintings. While we’ve improved on the concept over the years, transitioning to clay tablets and eventually to blackboards and whiteboards, the basic idea has remained the same. Why do people like chalkboards and whiteboards? Simple: it’s a system you don’t need to learn (or you learned as a child), you can quickly add, adjust, and erase content, it’s multi-user, it doesn’t require power, it never needs a firmware or operating system update, and it lasts for years. While I’ll avoid the grand “chalkboard vs. whiteboard” debate, we can all agree that the two communication systems are nearly identical and very effective in teaching environments.

But as classrooms transition from traditional learning environments (one professor teaching a small to medium number of students in a single classroom) to distance education and active learning environments, compounded by our rapid transition to digital platforms… the whiteboard has had a difficult time making the leap. There have been many (failed) attempts at digitizing the whiteboard; just check eBay. Most failed for a few key reasons: they were expensive, they required the user to learn a new system, they didn’t interface well with other technologies… oh, and did I mention that they were expensive?

Enter Kaptivo, a “short throw” webcam-based platform for capturing and sharing whiteboard content. During our testing (Panopto sample), we found that the device was capable of capturing the whiteboard image, cleaning up the image with a bit of Kaptivo processing magic, and converting the content into an HDMI-friendly format. The power of Kaptivo is in its simplicity. From a faculty/staff/student perspective, you don’t need to learn anything new… just write on the wall. But that image can now be shared with our lecture capture system or any AV system you can think of (WebEx, Skype, Facebook, YouTube, etc.). It’s also worth noting that Kaptivo can share the above content through its own Kaptivo software. While we didn’t specifically test this product, it looked to be an elegant solution for organizations with limited resources.

The gotchas: every new or interesting technology has a few. First, Kaptivo currently works only with whiteboards (sorry, chalkboard fans). Also, there isn’t any way to daisy-chain or “stitch” multiple Kaptivo units together for longer whiteboards (not to mention how you would share such content). Finally, the maximum whiteboard size is currently 6′ x 4′, which isn’t all that big in a classroom environment.

At the end of the day, I could see this unit working well in a number of small collaborative learning environments, flipped classrooms, and active learning spaces. We received a pre-production unit, so I’m anxious to see what the final product looks like and whether some of the above-mentioned limitations can be overcome. Overall, it’s a very slick device.

Epson Demonstrates Pro L1300U

This past week, Epson provided an overview of their Pro L1300U projector at the Technology Engagement Center. The projector is an impressive 8,000-lumen beast, specifically designed for medium to large environments where image and color accuracy matters.

The laser light engine is designed to provide 20,000 hours of near maintenance-free service. If you’ve ever seen an AV technician’s eyes light up when they talk about laser projectors… it’s because they wouldn’t need to service the projector for over a decade under normal usage scenarios. For example, if a projector is used six hours a day, five days a week, for 50 weeks a year, that’s 1,500 hours a year. Divide the expected 20,000-hour life of the laser engine by 1,500, and we’re looking at a bit over 13 years! Now if we could only get the faculty, staff, and students to turn off the projectors (half kidding).
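The back-of-the-envelope math is easy to sanity-check:

```python
# Estimated laser engine lifespan under the usage pattern described above:
# six hours a day, five days a week, 50 weeks a year.
hours_per_year = 6 * 5 * 50          # 1,500 hours of use per year
rated_life_hours = 20_000            # rated life of the laser light engine
years_of_service = rated_life_hours / hours_per_year
print(f"{hours_per_year} hours/year -> {years_of_service:.1f} years of service")
# prints "1500 hours/year -> 13.3 years of service"
```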

Key Features:
Image Quality: While the projector has a native WUXGA resolution of 1920×1200, it also has a “4K enhancement feature.” Wait… don’t close your browser just yet. I’m generally suspicious of such marketing-ese, but it actually seemed to work as advertised. The image seemed to land somewhere between 1920×1200 and true 4K in terms of quality, so chalk me up as impressed.

Service: Epson offers a good service plan for high use cases. If something should fail with the projector while it’s under warranty, you can get a replacement drop-shipped overnight. That’s music to my AV technician’s ears and sets Epson apart from some of the low-end projector manufacturers.

Lens Options: Simply put, Epson has an impressive array of unique lens options for their projectors. Access to the right lens can make or break an AV install in a unique space.

Chameleon Mode: Wouldn’t it be nice if you could swap out your non-Epson projector with a new Epson, and not need to reprogram the AV system? Yes, this is a feature of Epson’s current generation of projectors. You can set the projector to respond to commands from a number of other projector manufacturers. Considering the cost of having an AV system reprogrammed, this could be a great cost-saving measure if you aren’t happy with your current projector or want to test an Epson in your space before purchasing.

As the price of laser projectors falls, Epson continues to lead the pack in many ways, and their “sneak peek” roadmap seemed to reinforce that impression. We look forward to seeing their new offerings soon.

Warpwire Workflows and Guides

Many of you by now are familiar with Warpwire’s support website since we feature their collection of video tutorials, called Guides, in the Help section of our service landing page. Warpwire recently added a new section to their support site, called Workflows. These Workflows show how to use Warpwire from the standpoint of particular use cases, such as when an instructor wants to provide feedback to students via video, or when an instructor in a language course would like to review video or audio clips of her students practicing speaking skills.

Below are some of the new Warpwire Workflows we think you might find helpful. If there are other use cases you would like Warpwire to consider adding, please feel free to reach out and let us know your ideas so that we can share them with the company. And as always, if there are particular features you would like to see in Warpwire that don’t currently exist, we want to hear about those too:

For those of you who aren’t yet familiar with Warpwire’s video Guides, below is a selection of some of the tutorials we think users at Duke might find most useful, especially when they are starting out:


Warpwire 2.2.3 Now Offers Downloading

One of the oft-cited feature requests we’ve received for Warpwire since we began running it at Duke in 2014 is the ability for asset owners to download their media files from the system. With Warpwire 2.2.3, which we launched on January 3rd, we now have that ability. Since Warpwire’s main purpose is to function as a secure streaming platform, this feature is only available to Media Library Administrators, asset owners, or Warpwire System Administrators. Any users with these permissions will see the download option by default for the assets to which they have rights, as shown in the image below:
Warpwire download link
You can also manually grant the ability to download files to any particular users you give access to your recordings via the Share menu in Warpwire, as shown in the image below:
It should be mentioned that Warpwire doesn’t save your original files, so when you select the download option you’ll be presented with a list of the three encoded formats Warpwire created from your originals, as shown in the image below. If you need a file-sharing service for keeping your originals, we recommend
Warpwire download options

New Features at Rev

Rev, currently the most widely utilized caption service provider at Duke, just announced some new features we wanted to let you know about. All are included at no extra charge in their standard $1.00-per-minute service. For more information about getting started with Rev or another caption provider, you can visit. You may also be interested in attending A Hands-on Guide to Captioning at Duke, a Learn IT@Lunch session scheduled for Wednesday, January 31, 2018, in which OIT’s Joel Crawford Smith and Todd Stabley will discuss video captioning at Duke and help you set up an account with Rev and get started captioning your videos.

Browser-based Caption Editor: lets you make minor fixes and convert formats and frame rates. You can access it on any Order Detail page by clicking “edit,” or give it a test run here:

Rev's new browser-based caption editor

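For context on what “converting formats” involves, here is a minimal sketch (our own illustration, not Rev’s implementation) that converts SRT captions to WebVTT. The two formats differ mainly in the “WEBVTT” header and in using a period rather than a comma as the milliseconds separator:

```python
# Minimal sketch (not Rev's implementation): convert SRT caption text
# to WebVTT. The regex rewrites only timestamp lines such as
# "00:00:01,000 --> 00:00:04,000", leaving caption text untouched.
import re

def srt_to_vtt(srt_text):
    """Return a WebVTT version of the given SRT caption text."""
    vtt_body = re.sub(
        r"(\d{2}:\d{2}:\d{2}),(\d{3})",  # HH:MM:SS,mmm timestamps
        r"\1.\2",                        # becomes HH:MM:SS.mmm
        srt_text,
    )
    return "WEBVTT\n\n" + vtt_body

sample = "1\n00:00:01,000 --> 00:00:04,000\nHello, Duke!\n"
print(srt_to_vtt(sample))
```

Real caption files have more edge cases (cue settings, styling, multi-line cues), which is exactly why a purpose-built editor like Rev’s is handy.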

Browser-based Transcript Editor: allows changes like formatting, speaker labels, etc. If you order timestamps, Rev gives you a transcript with word-by-word timestamps that play along with your file. You can test it out here:

Turnaround: Rev reduced transcription turnaround by 25% and caption turnaround by 50% over the last 12 months.

Revver network: Rev crossed 14,000 monthly active Revvers (freelancers who transcribe and caption), 90% of whom are based in the US and Canada. This allows them to turn around large volumes with high quality. Rumor has it that a Duke staffer who thought they were quite qualified applied to be a Rev captions editor and was rejected. Say it isn’t so!

Support coverage: Rev expanded their support to include weekends, so coverage is now available 24/7.

Additional improvements: custom timestamp offset for transcripts, PDF and TXT transcript outputs, and improved Rev API support.

If any of our peer universities are interested in speaking with Rev, feel free to reach out to us and we’ll connect you.

Wolfvision Cynap

First announced at InfoComm 2015, the Wolfvision Cynap continues to add and enhance core features to adapt to the changing wireless connectivity landscape. To categorize the Cynap as merely a wireless presentation and collaboration device is a disservice to the robust capabilities of what Wolfvision has created. The Cynap can also act as a media player, provide web conferencing via Skype for Business, offer app-free, dongle-free mirroring, stream mixed content to services like YouTube and Facebook, and record with robust capabilities. It also has basic whiteboard and annotation functionality. Finally, the Cynap can receive content from two HDMI inputs, or you can stream content to the device as additional inputs (think digital signage)… and that’s just the tip of the iceberg.

It would take five DDMC posts to cover the core features of the Cynap. Unfortunately, that brings me to the core “gotcha” of the system: with such an advanced piece of hardware come complexity (aka feature fatigue) and cost. The device is outside the budget of a small- to medium-sized huddle room upgrade. It would also need to live in an environment where the user base is willing to self-train on the Cynap’s functionality, or where an on-site trainer can train users and evangelize the product. That said, if you found the right group of users who could take advantage of the Cynap’s vast capabilities, it could be an incredibly powerful tool.