Panasonic PressIT Wireless Presentation System


In today’s world, wireless presentation systems are becoming increasingly popular. They offer several advantages over traditional wired presentation systems, including ease of use, flexibility, and reliability.

One of the biggest advantages of wireless presentation systems is their ease of use. With a wireless presentation system, there is no need to connect any cables between your computer and the display. This makes it much easier to set up and use your presentation, especially in large or crowded rooms.

Wireless presentation systems also offer a great deal of flexibility. With a wireless presentation system, you can present from anywhere in the room. This is ideal for presentations where you need to move around and interact with your audience.

Finally, wireless presentation systems are very reliable. They are less likely to experience interference or signal loss than traditional wired presentation systems. This means that you can be confident that your presentation will go off without a hitch.

If you are looking for a wireless presentation system that is easy to use, flexible, and reliable, then the Panasonic PressIT Wireless Presentation System is a great option. It is a high-quality system that offers several features that make it ideal for both business and personal use.

Features of the Panasonic PressIT Wireless Presentation System

The Panasonic PressIT Wireless Presentation System offers several features that make it a great choice for both business and personal use. These features include:

  • Easy to use: The PressIT system is very easy to use. Simply connect the transmitter to your computer and press the button to start presenting.
  • Flexible: The PressIT system is very flexible. You can present from anywhere in the room, and you can connect up to four devices at the same time.
  • Reliable: The PressIT system is very reliable. It is less likely to experience interference or signal loss than traditional wired presentation systems.

There are many benefits to using the Panasonic PressIT Wireless Presentation System. These benefits include:

  • Increased productivity: The PressIT system can help you to increase your productivity by making it easier to set up and use your presentations.
  • Improved collaboration: The PressIT system can help you to improve collaboration by making it easy for multiple people to share content during a presentation.
  • Enhanced engagement: The PressIT system can help you to enhance engagement by making your presentations more interactive and engaging.

Conclusion

The Panasonic PressIT Wireless Presentation System is a great choice for anyone who needs a reliable and easy-to-use wireless presentation system. It offers several features that make it ideal for both business and personal use. If you are looking for a wireless presentation system that can help you to increase your productivity, improve collaboration, and enhance engagement, then the Panasonic PressIT Wireless Presentation System is a great option.

360 Video in 2020

Insta360 One R

We’ve been experimenting in the 360 video / VR headset space for a couple years now and it’s been fascinating to follow the trend in real time. In particular, we’ve been working with the Insta360 Pro and the Oculus Go headset to explore academic use cases for these immersive video projects. As we start a new year, recent announcements from both Insta360 and Oculus point towards a diminishing interest in this use case and for 360 video in general.

As mentioned in a recent blog post, Insta360’s new camera is the One R. It encourages you to “adapt to the action” with two ways to shoot: as a 360 cam or as a 4K 60fps wide-angle lens. It features an AI-driven tracking algorithm to automatically follow moving subjects in your shots. The Auto Frame algorithm automatically detects which portions of a 360 shot would work best within a 16:9 frame. In almost every marketed feature, there’s a subtext of using the 360 camera as a powerful tool for outputting 16:9 video. Coming from one of the leaders in the 360 camera space, this focus isn’t particularly encouraging for the long-term consumption of 360 video.

The viewing of 360 video was always at its most immersive in a headset, which has proved to be one of the biggest barriers to wider adoption, since most viewers are unlikely to even have a headset, let alone find it and put it on just to watch a video. As such, the standalone $200 Oculus Go seemed a natural solution for businesses that could produce their own 360 content and simply hand over an Oculus Go headset to their client. Recently, however, Oculus dropped the Go from its Oculus for Business platform, suggesting that its Oculus Quest is the best solution for most business VR needs. This development sees Oculus leaning more towards support for full virtual reality, and less towards immersive 360 video playback.

While certainly not gone from the conversation, excitement about and applications for 360 video seem to have waned compared with a couple of years ago. We’ll continue to search for use cases and projects that show the potential of this technology, so please reach out to the DDMC if you find any exciting possibilities.

New Insta360 ONE R

Insta360 just launched their latest 360 camera, the ONE R. It’s actually a modular system and not a single, self-contained camera. Only time will tell, but it seems like the ONE R could be an innovative approach to solving the problem of how to pack the burgeoning features we are seeing in the action and 360 camera spaces into a workable form factor. Certainly Insta360 seems to have doubled down on using 360 as coverage for standard 16:9 action shots.

The ONE R starts with a battery base and a touch screen that sits on top (it can be installed facing forwards or backwards depending on the use case), next to an empty space that can hold one of the following:

  • A 5.7K 360 camera
  • A 4K action camera that records at 60fps for 4K and 200fps for 1080p
  • A 5.3K wide-angle (14.4mm equivalent) mod co-developed with camera company Leica that has a 1-inch sensor (30fps for 5.3K, 60fps for 4K, and 120fps for 1080p)


Key features include:

  • Insta360’s FlowState stabilization is a key part of all three modules.
  • Waterproof to 16 feet, despite the modular design
  • Aerial mod that makes it possible to hide your drone from footage
  • External mic support
  • Various remote control options, including Apple Watch, voice, and a GPS enabled smart remote
  • Selfie stick
  • Motion tracking to lock in on subjects
  • Tons of software/post-production options like bullet time, time lapse, slo-mo, etc.

We’re not seeing a ton of immediate academic use cases for features such as the above, but will certainly keep the ONE R in mind if the right project arises.


Using Thinglink to Create an Interactive 360 Video Experience

As long as I’ve been working with 360 video, one element has always been out of reach: interactivity. Particularly when viewed through a headset, the immersive nature of 360 video lends itself well to exploration and curiosity. The challenge has always been how to add that interactivity. Neither working with an external vendor nor developing an in-house solution seemed worthwhile for our needs. However, the tool Thinglink now offers an intuitive way not only to augment media with interactive annotations, but also to link the various media to each other.

Thinglink, as described previously, is a web platform that allows the user to add interactive pop-up graphics onto photos and videos, in 2D or in 360. Duke is piloting the technology, so I took the opportunity to test both the creation and publishing of 360 video through Thinglink.

The creation part couldn’t have been simpler (and in its pursuit of simplicity also feels a bit light on features). I was able to upload a custom 360 video without trouble, and immediately start adding annotated tags. You can see my test video here. There are four primary forms of tags:

  • Labels add a simple text box that is best used for… labeling things. This would be useful in a language learning context where you might want to add, say, the Spanish word for “tree” near a tree visible in the video.
  • Text/Media tags are fancier versions of labels, with room for a title, description, photo, or external link. These are for cases where you want to add a little more context to what you are tagging.
  • Embeds allow you to insert embed codes. This would typically be a video (from either YouTube or Duke’s own Warpwire) but could include surveys or any other platform that provides you an HTML embed code to add to your website.
  • Tour Links allow you to connect individual tagged videos/photos together. If I wanted to provide a tour of the first floor of the Technology Engagement Center, for example, I could start with a video from the main lobby. For the various rooms and hallways visible from the lobby, I could then add an icon that, when clicked, moves the viewer to a new video shot from the perspective of that location.

Adding all of these is as simple as clicking within the video, selecting what kind of tag you want, and then filling in the blanks. My only real gripe here is a lack of customization. You can’t change the size of the icons, though you can design and upload your own if you like. The overall design options are also extremely limited. You can’t change text fonts, sizes, etc. There is a global color scheme, which just comes down to a background color, text color, button background color, and button text color. In the “Advanced” settings, you can reset the initial POV that the 360 video starts in, and you can also toggle “Architectural mode,” which eliminates the fish-eye POV at the expense of overall visibility.

All in all, it’s incredibly easy to set up and use. Sharing is also pretty straightforward, provided you don’t intend to view the video in an actual VR headset. You can generate a shareable link that is public, unlisted, or only visible to your organization. You can even generate an embed code to place the Thinglink viewer within a website. What I was most curious about, however, was if I could properly view a Thinglink 360 video with our Oculus Go headset. In this regard, there’s a lot of room for improvement.
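
As a quick aside on that embed option, below is a minimal sketch of what dropping the Thinglink viewer into your own page might look like. The scene URL, container ID, and dimensions are placeholders for illustration, not the actual embed code ThingLink generates for you.

```typescript
// Hypothetical sketch: placing a ThingLink scene in a web page via an iframe.
// The scene URL, dimensions, and container ID below are placeholders, not the
// real embed code that ThingLink's share dialog produces.
const iframe = document.createElement("iframe");
iframe.src = "https://www.thinglink.com/YOUR_SCENE_ID"; // placeholder scene URL
iframe.width = "960";
iframe.height = "540";
iframe.allowFullscreen = true; // handy for viewing 360 content full screen

// Attach the viewer to a container element assumed to exist on the page.
document.getElementById("tour-container")?.appendChild(iframe);
```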

In principle, this use case is perfectly functional. I was able to access one of Thinglink’s demo 360 videos from within the Oculus Go headset and view and interact with the video with no trouble. The headset recognized that the Thinglink video was a 360 video and automatically switched to that mode. A reticle in the center of my field of vision worked as a mouse, in that if I hovered directly over a tag icon, it would “click” and activate the icon, negating the need for an external controller. The only issue was that the window activated when I “clicked” on an icon would sometimes be behind me and I had no idea anything had happened.

When I tried to view and access my own video, however, I had a lot of trouble. From a simple logistics standpoint, the shareable Thinglink URLs are fairly long and are tedious to input when in a VR headset (I made mine into a TinyURL which slightly helped). When I was finally able to access the video, it worked fine in 2D mode but when I clicked on the goggles icon to put the video into VR Headset mode I was met with a simple black screen. The same went for trying to view the video in this mode on my phone or on desktop. I found that after several minutes of waiting, an image from the video would eventually come up. Even when I was able to see something other than darkness, I discovered that the embedded videos were not functional at all in VR mode.

While the functionality is potentially there to create an interactive 360 video tour in Thinglink and view it within a VR headset, it’s simply not practical at this point. It’s a niche use case, sure, but one that seems within reach. If the developers can work out the kinks, this platform would really be a game changer. For now, interactive 360 video will have to stay on the flat screen for me.

ThingLink Pilot at Duke Has Potential for 360 Video, Images

Duke Learning Innovation recently launched a new pilot of a tool called ThingLink. ThingLink offers the ability to annotate images and videos using other images, videos, and text to create visually compelling, interactive experiences. One core use case for ThingLink is to start with a graphic (such as a map) or a photograph as a base and place buttons in strategic places that users can click to expose more information. ThingLinks can also link to other ThingLinks to create structured learning experiences.

ThingLink Example

The screenshot above is from an example project on ThingLink’s “Featured” page by Encounter Edu. In this example, viewers can click on the “+” signs to reveal more information about each portion of the carbon cycle.

While creation of learning objects like these could have wide value for education, one aspect of ThingLink we think DDMC-ers might find intriguing is its AR/VR authoring capabilities. A challenge for 360 video, even with professionally produced material, can be that viewers sometimes feel lost clicking around trying to figure out what to look at next. With a tool like ThingLink’s VR editor, you can curate the experience by creating guideposts, and in doing so provide your users with a potentially more rewarding experience as they engage with 360 videos and images.

The OIT Media Technologies production team will be reviewing ThingLink’s VR/AR capabilities and posting its findings to the blog.

If you or others on your team would like to test ThingLink out, you can apply to be a part of the pilot here: https://duke.qualtrics.com/jfe/form/SV_6R07iAqB2jeXYGh


Meeting Owl Review

We had an opportunity to test the Meeting Owl from OwlLabs this past week and wanted to share our thoughts on this unique conference room technology. The $799 webcam, mic, and speaker all-in-one unit is intended to sit at the center of the conference room table. What makes the Meeting Owl worth nearly $800? If I were reviewing the device simply on the speaker and mic array, I’d say this isn’t all that exciting of an offering. There are plenty of <$200 mic/speaker combos that would perform as well or better. But it’s the Meeting Owl’s unique 360 camera at the top that makes the unit stand out from its peers.

When sharing video, the device segments the camera feed into zones. At the top, there is a side-to-side 360-degree view of the room, and below is either one, two, or three “active speaker” zones intelligently selected by the Meeting Owl. So, when two people in the room start talking, the camera divides the lower area of the feed to accommodate the conversation. Overall, we found the intelligence of the camera to be rather good. Infrequently, it would pause a bit too long on a speaker who had stopped talking, or incorrectly divide up the lower section, prioritizing the wrong person… but considering the alternative is physically moving the camera… it’s a nice feature that livens up the meeting experience.

Pros:

  • Incredibly easy to set up and configure (under 10 minutes)
  • 360 camera works as advertised
  • Good quality internal mics
  • Platform agnostic (works with Skype, WebEx, Zoom, Meetings, etc.)

Cons:

  • The image quality isn’t great (it’s a 720p sensor, so the sections are only standard definition, or worse, and it shows)
  • Split screen can be distracting when in overdrive (sometimes it moves too slowly, other times it seems to move too quickly… this may be improved with a firmware update)
  • At $799, OwlLabs is in the Logitech Meetup zone. While the products are rather different, each has its pros and cons depending upon the expectations of the user.

Closing Thoughts:

Overall, we enjoyed the product and can see it being deployed in a range of spaces. It also signals a new era in intelligent conferencing technologies. The local group at Duke that purchased the device also has plans to deploy it in a classroom where Zoom will be used for hybrid teaching sessions (some students local, others remote). It will be interesting to see how the far side reacts to the automated pan/tilt of the camera and whether it can keep up with some of our most active faculty. My primary complaint about the device is that the image is too blurry. Also, the 360 lens tends to have the faces centered in the lower image area; ideally, it would crop to a few inches above the top of the head of the active speaker(s). Perhaps we’ll see an HD or 4K version in the future that addresses a few of these shortcomings.

VR/360 Video at Streaming Media West

While attending the Streaming Media West conference this year, I had the opportunity to check out a panel on the state of 360 video and VR. The panel featured representatives from different parts of the video production industry: journalism, education, marketing, etc. What stood out to me most was the diversity of applications and use cases they shared, and how those applications worked around some of the common challenges native to the platform.

Raj Moorjani, Product Manager at Disney-ABC, discussed how they’ve been using 360 video in their news department as a way to bring viewers deeper into a story. While it’s not a fit for all the content they produce, Moorjani found that it was sometimes most effective to share the nearly raw, unedited video, simply to give viewers the sense of really being where the story was happening. The quick turnaround helped them keep up with the fast pace of the news.

For more highly produced content, it can be difficult to justify the effort and cost while VR headsets are still not widely adopted. Scott Squires, Creative Director and Co-Founder of production studio Pixvana, pointed out that there was a growing market for enterprise training, where you have more control over whether the end user has the hardware. Having produced training videos in 360 for waiters on a cruise ship, Squires found that the retention rate for the material was much better than with traditional video. He noted that Wal-Mart is even deploying 17,000 headsets to its stores for employee training.

In the consumer space, adoption of the technology has been slow, but the panelists see that speeding up with recent improvements to the hardware. The Oculus Go, a standalone VR headset released this year, received praise for its accessibility and value. The previously arduous stitching and editing workflows have largely been smoothed out as well. However, even with technical advancements, there is still a lack of compelling content for most consumers. Squires predicts that as the tools become even easier to use, amateur production and home movies could be a huge selling point.

Having only experimented with 360 video over the past year here at Duke, I found it validating that even those who are producing it professionally were grappling with the same challenges. Though we’re still far from widespread adoption, I’ve found there’s a growing enthusiasm for its potential as we learn more about how to best work with this technology. For more, check out the full panel here.

October 2018 Adobe Creative Cloud Update Part 1: Adobe Premiere Pro

It’s fall, pumpkin spice is in the air, the holidays are approaching, Christmas decorations are going up, and software giant Adobe has just released updates to its entire Creative Cloud suite of applications. Because the updates are so extensive, I’ve decided to do a multi-part series of DDMC entries that focuses on the new changes in detail for Premiere Pro, After Effects, Photoshop/Lightroom, and a new app, Premiere Rush. I just downloaded Rush to my phone today to put it through its paces, so I’m saving that application for last, but my first rundown of Premiere Pro’s new features is ready to go!

END TO END VR 180

Premiere Pro now supports full native video editing for 180 VR content, with the addition of a virtual screening room for collaboration. Specific focal points can be tagged and identified in the same way you would in your boring 2D content. Before, you had to remove your headset to do any tagging, but now you can keep your HMD (head-mounted display) on and keep cutting. I’m just getting my feet wet with VR, but I can see how this could revolutionize the workflow for production houses integrating VR into their projects. Combined with the robust networking features in Premiere Pro and the symbiotic nature of the Adobe suite of applications, this seems like a nice way to work on VR projects with a larger collaborative scope.

DISPLAY COLOR MANAGEMENT

Adobe has integrated a smart new feature that takes some of the guesswork out of setting your editing station’s color space. Premiere Pro can now detect the color space of your particular monitor and adjust itself accordingly to compensate for color irregularities across the suite. Red stays red whether it’s displayed in Premiere Pro, After Effects, or Photoshop!

INTELLIGENT AUDIO CLEANUP

Premiere Pro can now scan your audio and clean it up using two new sliders in the Essential Sound panel. DeNoise and DeReverb allow you to remove background noise and reverb from your sound, respectively. Is it a replacement for quality sound capture on site? No. But it does add an extra level of simplicity that I’ve only experienced in Final Cut Pro, so I’m happy about this feature.

PERFORMANCE IMPROVEMENTS

Premiere Pro is faster all around, but if you’re cutting on a Mac you should experience a notable boost due to the new hardware-based encoding and decoding for the H.264 and HEVC codecs. Less rendering time is better rendering time.

SELECTIVE COLOR GRADING

Lumetri Color tools and grades are becoming more fine-tuned. This is a welcome addition, as Adobe discontinued SpeedGrade and folded it into Premiere Pro a while ago. All your favorite Lumetri looks still remain, but video can now be adjusted to match the color space of any still photo or swatch you like. Colors can also be isolated and targeted for adjustment, which is cool if you want to change a jacket, eye, or sky color.

EXPANDED FORMAT SUPPORT

Adobe Premiere Pro now supports ARRI Alexa LF, Sony Venice V2, and the HEIF (HEIC) capture format used by the iPhone 8 and iPhone X.

DATA DRIVEN INFOGRAPHICS

Because of the nature of my work as a videographer for an institution of higher education, this feature actually has me the most excited. Instructional designers are constantly looking for ways to “jazz up” their boring tables into something visually engaging. Now there is a whole slew of visual options with data-driven infographics. All you have to provide is the data in spreadsheet form, then you can drag and drop it onto one of the many elegant templates to build lower thirds, animated pie charts, and more. It’s a really cool feature I plan to put through its paces on a few projects in place of floating prefabricated pie charts.
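
To make that a little more concrete, here is a rough sketch of the sort of simple spreadsheet data you might hand to one of these templates. The column names, values, and file name are made up for illustration; the actual fields depend on the template you drop the data onto.

```typescript
// Hypothetical sketch: building a small CSV of the kind a data-driven Motion
// Graphics template might consume. Column names and values are placeholders,
// not a format required by Premiere Pro.
import { writeFileSync } from "fs";

const rows: string[][] = [
  ["Category", "Count"], // header row
  ["Undergraduate", "6500"],
  ["Graduate", "9000"],
  ["Professional", "1200"],
];

// Join each row with commas, one line per row, and save next to the project.
writeFileSync("chart-data.csv", rows.map((r) => r.join(",")).join("\n") + "\n");
```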

All these new additions make Adobe Premiere Pro a solid one-stop editing platform, but combined with the rest of the Adobe suite, one can easily see the endless pool of creative options that make it an industry standard!

Stay tuned for Part II: Premiere Rush!

Oculus Announces New Educational Pilot Program and VR Experiences

In August 2018, Oculus announced a new education program that would distribute some of its Rift and Go headsets to a select group of educational institutions in Taiwan, Seattle, and Japan. In addition to providing access to the technology, the program is also focused on training both students and teachers on how to develop for the platform and use it in the classroom.

Most interestingly, the Japan program is focusing on using VR for distance learning and increasing student access to coursework and other educational materials. While there seems to be huge potential for innovation in this space, it’s not clear from the announcement exactly how the headsets would improve access to coursework as described. The Oculus Go doesn’t seem equipped for navigating a learning management system, and the Oculus Rift already requires a PC that would presumably be sufficient on its own. While the benefit of a headset here seems nebulous, I’m eager to see practical applications of this program.

In addition to these programs, Oculus also published a few new educational apps for its headsets. TitanicVR and Hoover Dam: Industrial VR are both immersive experiences that allow you to tour the respective structures and learn about their history and operation.

On the Oculus Go, I was able to try out Breaking Boundaries in Science, a new app that explores the scientific contributions of Jane Goodall, Marie Curie and Grace Hopper. Available for free, the app places you in cartoon recreations of their workspaces, be it a camp site in Gombe or Curie’s lab in Paris. Using a teleportation system to move around, you can examine different objects in the room and listen to audio clips about their significance. While the use of VR is a bit superfluous to the educational impact, the novelty and production value of the app seems like a great way to get kids interested in the history of these women.

Insta360 ONE X

Insta360 steps into the world of action cameras with a big upgrade to their flagship Insta360 ONE.

Here are some highlights of the features that together are generating a lot of buzz for this device:

  • Retains $399 price
  • Insta360’s trademark FlowState stabilization seems exceptional based on the sample videos shown on the company’s website
  • Optional 5-meter and 30-meter clear housing for diving/ watersports
  • Optional disappearing selfie stick
  • 5.7K 30fps, 4K 50fps, 3K 100fps
  • Optional airplane-shaped “drifter” you can insert the device in and toss for dynamic action shots

Full details: https://www.insta360.com/product/insta360-onex/