Big advances are taking place at the intersection of video and AI (Artificial Intelligence). I ran across an interesting article in Streaming Media Magazine called The State of Video and AI 2018 that takes stock of some of these changes, and I wanted to share it with you as we look toward what’s ahead for Duke.
We’ve been following trends in this area from a number of directions, including video captioning. As many of you are aware, the need to caption videos we produce at Duke is growing, but the costs of captioning services, most of which rely on intensive manual labor, are high. However, new tools like IBM’s Watson, which offers more than 60 AI services, among them machine captioning (with accuracy advertised as a whopping 96%), seem poised to shift the balance and make it possible for us to caption videos on a wider scale. We demoed Watson recently and will continue to monitor it, as well as other tools in this space.
In this context, I also wanted to point out that we recently began offering ASR (Automatic Speech Recognition) for Panopto, Duke’s lecture capture service. We are excited about the opportunities this new functionality will offer students and other viewers who want to jump directly to the points in a video where specific terms are spoken. This capability adds to Panopto’s already healthy set of in-video search features, which include OCR (Optical Character Recognition) for slide content and user-created time-stamped notes and bookmarks.