
Panopto Automated Tagging Pilot

By: Todd Stabley

One of the reasons Duke chose Panopto in 2010, after a project comparing leading enterprise capture tools, was Panopto’s extensive search capabilities. These include the ability to search any text used in a PowerPoint or Keynote presentation, personal notes that viewers type in and store with their lectures for later review, recording metadata such as title and description, and full-text transcriptions imported into recordings as caption tracks.

Viewers can search for a word in any of these categories either within a particular recording or across all the recordings in their node (e.g., trinity.capture.duke.edu or law.capture.duke.edu). The results are indexed, so clicking on any instance of the word takes you right to that spot in the recording and begins playing from there.
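To give a sense of how this kind of indexed search works, here is a minimal sketch in Python of the underlying idea: each word from a caption cue or slide is mapped to the time offsets where it occurs, so a click on a search hit can seek the player straight to that moment. The data structures and function names are hypothetical illustrations, not Panopto’s actual API.

# Hypothetical sketch: map each caption word to the times it is spoken.
from collections import defaultdict

def build_index(captions):
    """captions: list of (start_seconds, text) cues for one recording."""
    index = defaultdict(list)
    for start, text in captions:
        for word in text.lower().split():
            index[word].append(start)
    return index

captions = [
    (12.0, "Today we introduce mitochondria"),
    (95.5, "The mitochondria produce ATP"),
]
index = build_index(captions)
print(index["mitochondria"])  # [12.0, 95.5] -> seek the player to these offsets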

Beginning this semester and continuing into 2013, senior developers on OIT’s Systems team and the Interactive Technology Services group that manages DukeCapture are partnering on a unique automated tagging pilot. Normally caption tracks are generated by humans (an expensive proposition, and one that Panopto does support when needed for Section 508/504 compliance). This project lets us export recordings captured in Panopto to a Cisco device that converts the audio to text and automatically imports that text back into Panopto as caption tracks. As with all computer-based speech-to-text technologies, the accuracy of the transcriptions is by no means perfect, so our short-term goal is not verbatim transcripts but text tags that help you locate the parts of a video you want to watch. Currently the system handles complex words better than simple ones, which makes it well suited to this kind of tagging and search. Longer term, we plan to keep improving accuracy and to expand the number of recordings using this technology. Imagine having access to the entire body of lectures captured at Duke and being able to find every place a term you were interested in occurred.
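To make the import half of that workflow concrete, below is a minimal sketch, again in Python, of turning timed speech-to-text output into a standard SRT caption file of the kind a capture system can ingest as a caption track. The segment data, file name, and function names are hypothetical; the actual Cisco-to-Panopto hand-off in the pilot is handled by the teams named above.

# Hypothetical sketch: write recognizer output as an SRT caption file.
def to_timestamp(seconds):
    ms = int((seconds - int(seconds)) * 1000)
    s = int(seconds)
    return "%02d:%02d:%02d,%03d" % (s // 3600, (s % 3600) // 60, s % 60, ms)

def write_srt(segments, path):
    """segments: list of (start_seconds, end_seconds, text) from the recognizer."""
    with open(path, "w") as f:
        for i, (start, end, text) in enumerate(segments, 1):
            f.write("%d\n%s --> %s\n%s\n\n"
                    % (i, to_timestamp(start), to_timestamp(end), text))

segments = [(12.0, 15.2, "Today we introduce mitochondria")]
write_srt(segments, "lecture_captions.srt")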

An example of an automatic text track for a Panopto recording

Several fall 2012 courses are signed up for this pilot, and we are looking for additional faculty volunteers for 2013 who are interested in helping us explore this technology. If you are a faculty member and would like to have your lectures captured or to participate in our caption pilot, please contact your DukeCapture Site Administrator to get set up.

If you’re a DukeCapture Site Administrator, please spread the word to the faculty members and other speakers you support!

 
