Video production Archives – Epiphan Video

NDI® and NDI|HX for network video production
https://www.epiphan.com/blog/ndi-ndihx-network-video-production/ – Fri, 24 Feb 2023

Learn how to get more video inputs over Gigabit Ethernet with NDI, at a lower cost than HDMI or SDI, starting right now with your existing gear.

The post NDI® and NDI|HX for network video production appeared first on Epiphan Video.


If you’ve heard the acronym NDI and wondered how it can help your live video productions, you’ve come to the right place.

We’re here to answer the most frequently asked questions about NDI, NDI|HX, bandwidth speeds, and how the protocol gives you more options to create video.


    What is NDI video?

    Network Device Interface, or NDI®, lets you transmit and receive broadcast-quality, low-latency video over existing LANs using cost-effective CAT5/6 cables, and the quality is virtually lossless.

    NDI gives you the flexibility to choose from a wider variety of video input sources. You can access multiple cameras, software on computers, mobile devices, and more across the same network using a single LAN port. No expensive multi-port HDMI switches or SDI routers are needed.

    The number of NDI sources you can add to your network is practically limitless. To access NDI devices across subnets, you can manually enter the IP address of devices on those other subnets using NDI Access Manager.

    With NDI, taking your video production to a different location is a lot simpler without a whole mess of HDMI and SDI cables to manage. All your video sources are readily accessible from anywhere over the network. CAT5/6 cables are also a lot cheaper than traditional SDI or HDMI cables.

     

    NDI and network bandwidth

    High Bandwidth NDI, as the name suggests, uses a lot of local bandwidth to share audio and video. This is because the protocol transmits high-quality, low-latency video streams with only very light compression, so the result is virtually lossless. Nearly everything captured by the camera's sensor is preserved – resulting in a lot of data.

    Transmitting High Bandwidth NDI, especially at higher resolutions, requires at the very least a Gigabit network, though the ideal transfer speeds ultimately depend on the number of concurrent High Bandwidth NDI streams.

    A single 1920×1080@30 fps NDI stream needs approximately 100 Mbps of dedicated bandwidth.

    The bandwidth required will change, however, depending on the resolution, framerate, and how much motion is being captured by the camera. For example, a 1920×1080@60 fps NDI stream requires 150 Mbps. Meanwhile, a 4K@30 fps NDI stream is roughly 250 Mbps.

    It’s recommended to leave approximately 25 percent of the network’s capacity as headroom so that the signal remains smooth in case any unexpected network traffic pops up during production.

    So, if you have three PTZ cameras sending their video to an Epiphan Pearl-2 at 1920×1080@30 fps, that’ll cost you approximately 300 Mbps of bandwidth. If the Pearl-2 is also outputting an NDI stream in 4K, that puts your bandwidth at 550 Mbps. Reserving 250 Mbps for the suggested headroom, three sources and one output already total 800 Mbps – just enough for a Gigabit network to handle.
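As a sanity check, the arithmetic above can be sketched in a few lines of Python. The per-stream rates are the approximate figures quoted in this article, so treat the result as a rough estimate rather than a guarantee.

```python
# Rough NDI bandwidth budget for the example production above.
# Per-stream rates are the approximate figures from this article;
# real usage varies with motion and encoder settings.

LINK_MBPS = 1000                 # Gigabit Ethernet capacity
HEADROOM = 0.25 * LINK_MBPS      # reserve 25% for unexpected traffic

inputs = 3 * [100]               # three 1920x1080@30 High Bandwidth NDI cameras
outputs = [250]                  # one 4K@30 High Bandwidth NDI program output

base = sum(inputs) + sum(outputs)
total = base + HEADROOM
fits = total <= LINK_MBPS

print(f"{base} Mbps used, {total:.0f} Mbps with headroom, fits: {fits}")
# → 550 Mbps used, 800 Mbps with headroom, fits: True
```

Swapping in different per-stream rates (for example, the NDI|HX figures from the table below) shows immediately whether a planned setup still fits on the link.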

    Resolution / Framerate   High Bandwidth NDI (Mbps)   NDI|HX (Mbps)
    1920×1080p@30            100                         8
    1920×1080p@60            150                         10.5
    3840×2160@30             200                         18
    3840×2160@60             250                         30

    High Bandwidth NDI vs. NDI|HX

    There are two variations of NDI to work with: High Bandwidth NDI and NDI|HX.

    Both are great options for transmitting video and embedded audio signals across a local network, the main difference between them being compression and latency.

    High Bandwidth NDI uses only very light, visually lossless compression and requires a lot of bandwidth to transmit. NDI|HX is the high-efficiency alternative, functioning almost exactly the same way but requiring significantly less bandwidth thanks to its more advanced compression – making it an ideal choice for any environment where bandwidth is limited. The higher efficiency of NDI|HX will add some latency compared to the High Bandwidth version, but at most, it’s only a frame or two. Audio is uncompressed in both High Bandwidth NDI and NDI|HX.

    Here’s how much dedicated bandwidth is recommended for NDI and NDI|HX:

    • A single 1920×1080@30 fps High Bandwidth NDI stream needs at least 100 Mbps of dedicated bandwidth
    • A single 1920×1080@30 fps NDI|HX stream needs 8 Mbps of dedicated bandwidth

    It’s worth noting that these figures are estimates and the actual bandwidth required will vary based on resolution, framerate, and motion.

    NDI|HX allows creators to take advantage of almost identical, exceptional quality without worrying so much about bandwidth availability. Of course, the bandwidth available should always be something to consider, but NDI|HX was designed to work over low-bandwidth networks.

    Available NDI tools

    More and more software applications like Zoom, Microsoft Teams, Skype, NewBlueFX, EasyWorship and others offer native NDI support.

    To help spur adoption, there are free NDI plugins for popular applications like VLC and Adobe Creative Cloud. For everything else, NDI offers simple-to-install free tools. For example, NDI Screen Capture lets you turn a computer into multiple NDI inputs that any other NDI device on the network can access. You can define live video sources like the computer’s webcam, the full-screen display, and different windows or combinations of windows. Even different software applications running on that computer can be converted into NDI inputs.

    A couple more options include the NewTek NDI Connect and the NDI|HX Camera app (available for a nominal fee). NewTek Connect makes any camera or device connected to a computer’s capture card (or the local webcam) available to other NDI systems on the LAN. And NDI|HX Camera converts the output from iOS devices and Android phones into NDI video inputs. If you want to take your content further and out onto the WAN, consider using SIENNA Cloud for NDI.

    The many free tools and flexible options available are definitely helping to drive adoption.

    NDI adoption

    Adoption of NDI has been spreading since its introduction to the public in early 2016. Designing NDI into your AV system is getting easier as more hardware and software options become available. There are also more NDI-enabled cameras to choose from, made by companies like PTZOptics and Panasonic, as well as full-featured video production systems like the Epiphan Pearl-2 that support NDI and NDI|HX. Additionally, Pearl Mini can support up to two NDI|HX inputs, making it a perfect choice for building a small studio.

    If you already have existing cameras and AV equipment, NDI is still within your reach. You don’t have to make a huge commitment and switch over all your gear. NDI converters are readily available. These converters connect existing NDI-unaware HDMI or SDI video sources to the network – and convert NDI into HDMI signals for your average monitor or video projector. NDI converters range in price and features. For example, the BirdDog Mini offers 1080p60 NDI encoding, 1080p60 NDI decoding, tally, and Power over Ethernet.


    Practical NDI applications

    Here are some practical ways you can start using NDI.

    NDI for live events

    Using NDI can offer several advantages when capturing live events such as lectures, conferences, or performances.

    NDI eliminates the need for dedicated video cabling, which can be challenging to set up and manage. By connecting cameras and other devices to an all-in-one production system like the Pearl-2 or Pearl Mini over the network, everything you need to stream or record the live event is acquired in less time and in a much more organized fashion.

    If the live event uses presentation software like Microsoft PowerPoint, this can easily be added to the LAN using the free NDI Screen Capture or NDI Screen Capture HX. With this app, the presentation computer becomes another NDI input for your video production system to ingest. It can be added to Pearl-2 or Pearl Mini like any other NDI or NDI|HX source, recorded, streamed, and switched between as needed for the production.

    Certain venues won’t have the network bandwidth required for an effective NDI production. It is possible to work around this, however. With NDI Bridge, you can securely connect two entirely different NDI networks from anywhere in the world. This is particularly useful for remote production workflows or transmitting signals to other locations.

    Adding remote guests to broadcasts with NDI

    Adding a remote guest to a live production was once a tricky proposition. However, with the addition of NDI-out to all major video conferencing apps such as Microsoft Teams, Zoom, and Skype, it’s possible to bring the remote guest’s audio and video to your production tool or system of choice.

    When enabling NDI-out on Microsoft Teams, Zoom, and Skype, the feeds can be ingested by Pearl-2, allowing you to switch, record, and stream video with these remote sources as well as any physical input available. Create interesting layouts with NDI inputs using the built-in custom layout editor and add text, custom backgrounds, and transparent images.

    Using NDI for meetings and conferences

    The top video conferencing platforms all feature an NDI-out function, but they also accept NDI Webcam Input. Any NDI-enabled camera, including mobile devices running the NDI|HX Camera app, can be used in place of a laptop’s built-in webcam, letting meeting participants appear in far greater quality.

    Taking this a step further, adding NDI-enabled PTZ cameras to any conference room can elevate the quality of town halls, conferences, webinars or trainings being broadcast internally or to the public. Connect the cameras to the network, acquire the signals in a video production system, and create a dynamic, switchable program that’s either being sent to a content delivery network or directly into the computer hosting the video conferencing app.

    If your video production system outputs NDI, like Pearl-2 does, ingesting that NDI becomes an easy and inexpensive process. Install the free NDI Virtual Input software on the computer that’s running your conference software. Then, your switched NDI program appears like any other connected webcam source that you can select in the conference software.

    Graphic overlays with alpha channel

    With NDI, you don’t need to use the HDMI ports on your production gear to connect an effects computer. All your video sources, including your computer-generated effects, connect through the LAN port. This also lets you offload chroma keying from your production equipment for better performance, so you can worry less about chroma effects.

    Overflow rooms made easy

    Stream NDI output to overflow rooms to accommodate larger audiences without a lot of planning ahead. Quickly and easily connect video from one room to another using the existing LAN connections in the rooms. An inexpensive NDI converter is all you need to convert the NDI video to HDMI that you can feed into a common monitor or room projector.

    NDI and NDI|HX FAQ

    Is NDI free?

    Yes! NDI is free to use. Costs only come from acquiring the hardware or software licenses. But the ability to access NDI is free.

    Should I use a hardware or software encoder with NDI?

    While software encoders have advantages, the reliability and performance of hardware encoders make a huge difference when working with NDI.

    Software encoding needs computers with state-of-the-art GPUs and ample memory to operate efficiently in an NDI workflow. If the computer doesn’t meet these high standards, it may crash or freeze while trying to process the high-bandwidth video. Software encoding has also been known to add latency to NDI feeds.

    Hardware NDI encoders, like the Pearl-2 or Pearl Mini, were designed explicitly to encode video, capturing high-quality video and audio with no additional latency and no competing programs.

    How do I find my NDI sources?

    There are a few ways to find your NDI sources. It depends on the devices being used.

    If you’re using an NDI camera, start by logging in to the camera’s interface and making sure NDI is enabled. Once enabled, you can check the stream using NDI Studio Monitor.

    The NDI discovery server, available in the NDI Software Developer Kit, can help you find your NDI sources as well. The NDI discovery server automatically discovers NDI sources on the network and provides a list of available sources that can be accessed by authorized users without having to manually search for their exact network location. To make sure each user has the appropriate access to NDI devices on the network, the NDI Access Manager app allows network administrators to set permissions and control access.

    How do I connect my NDI sources?

    If you’re connecting an NDI source to a Pearl, access the device’s admin panel.

    In the Inputs menu, click Add input. The Add input page will open with all available NDI resources listed.

    If your source is listed, press Add and it will be connected to Pearl. If you don’t see it, enter either the “Group name” (if it’s been assigned a group) or the IP address of the NDI device in the “Extra source IP addresses” field to discover and receive the signal.

    How does NDI transport audio?

    Although NDI is commonly associated with video sharing, it can transport both video and audio over networks.

    NDI can transport up to 16 channels of uncompressed, high-quality, low latency, 48 kHz, 24-bit audio data in the same stream as video in three ways: embedded audio, analog audio, or digital audio.

    The most common way of transporting audio via NDI is by using embedded audio. With embedded audio, the audio data is carried within the video signal, making it easier to keep the audio and video in sync and allowing multiple audio channels to be carried within a single NDI stream.

    To transport analog audio via NDI, the audio source must be connected to a physical, NDI-enabled device, like a mixing console, that converts the analog audio to digital.

    Transporting digital audio via NDI can be accomplished with the Audio Direct tool. This set of Virtual Studio Technology (VST) audio plugins allows virtually any audio software to select, receive, and generate multichannel audio.

    Wrap up

    NDI opens up new opportunities to make your live video production workflow easier and more flexible. You can save money on cables and infrastructure by using the existing Ethernet network and gain easy access to a lot more video sources at broadcast quality. There are free plugins and tools to help you start incorporating NDI into your current AV system right away.

    If you’re considering a new AV system and want to start taking advantage of the benefits of NDI, an all-in-one video production system like Pearl-2 makes a lot of sense. Pearl-2 accepts multicast and unicast NDI sources, as well as high-efficiency NDI|HX. Pearl Mini also accepts NDI|HX, allowing you to work with any NDI|HX source in low-bandwidth settings.

    With Pearl-2 or Pearl Mini, you can bring in video and audio from a multitude of different NDI and NDI|HX sources, such as:

    • Remote NDI-enabled PTZ cameras
    • Any NDI-unaware HDMI/SDI source using a converter or the free NDI Virtual Input software
    • Webcams connected to remote computers using the free NDI Connect application
    • Output from iOS and Android mobile devices using the NDI Camera app
    • Direct input from NDI-compatible software (Zoom, Microsoft Teams, Skype, EasyWorship, etc.)

    NDI output from Pearl-2 is recognized as a webcam, making it compatible with NDI applications and devices that accept webcams as an input, like Skype and many more.

    Capture broadcast-quality, low-latency video with more freedom thanks to NDI and Pearl

    Pearl-2 and Pearl Mini support NDI, providing users with more options to acquire high-quality video signals from networked cameras.

    Unlock your NDI workflows

    How to get the most out of an NDI encoder
    https://www.epiphan.com/blog/how-to-get-the-most-out-of-an-ndi-encoder/ – Thu, 23 Feb 2023

    To ensure that all your NDI devices work together seamlessly, here are some things you need to know to get the most out of an NDI encoder.

    The post How to get the most out of an NDI encoder appeared first on Epiphan Video.


    Once video producers started transporting their signals across local networks with Network Device Interface (NDI), they found a more elegant, scalable way to work. In the eight years since its release, NDI has been adopted by heavyweight manufacturers like Canon, Sony, and Panasonic, giving creators even more flexibility.

    An NDI encoder lets you capture video and audio signals from NDI-native cameras, mixers, displays, and more. To ensure that all your NDI devices work together seamlessly, here are some things you need to know to get the most out of an NDI encoder.


      Network bandwidth

      The first step when using NDI is to assess the available bandwidth on your local network. Regardless of the NDI encoder’s make, model, or features, the encoding can only be performed to its full potential if the local network can transport the signals.

      At a minimum, the workflow should be running on a Gigabit network – throughput speeds, full-duplex ports, and upstream and downstream data speeds must all be capable of transferring the data. Insufficient bandwidth can negatively affect the final product – dropped frames, frozen video, audio glitches, and sudden disconnections.

      The general rule of thumb is to reserve 25 percent of bandwidth for headroom so that the signal transmission remains smooth should there be any unaccounted for traffic while your production is underway. So, if a Gigabit network isn’t doing the trick, consider upgrading to a 10 Gigabit network.

      A typical High Bandwidth NDI stream at 1920×1080p@30 fps needs approximately 100 Mbps. But the bandwidth required to use NDI effectively will vary depending on the number of video sources, the resolutions, and the frame rates. Ultimately, the required bandwidth is unique to the video production at hand, making it wise to test the network if possible.
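Once you have measured the spare throughput on the LAN, a small helper can turn that number into a capacity plan. This is an illustrative sketch: the per-stream rates are the approximate figures quoted in this article, the 25 percent headroom follows the rule of thumb mentioned earlier, and the function name is made up.

```python
# Hypothetical capacity check: how many 1080p30 streams of each NDI
# flavor fit into the spare bandwidth you measured, after keeping the
# recommended 25 percent headroom? Rates are approximations.

HB_NDI_1080P30 = 100   # Mbps, High Bandwidth NDI (approximate)
NDI_HX_1080P30 = 8     # Mbps, NDI|HX (approximate)

def max_streams(spare_mbps, per_stream_mbps, headroom=0.25):
    """Whole streams that fit in the usable share of the link."""
    usable = spare_mbps * (1 - headroom)
    return int(usable // per_stream_mbps)

print(max_streams(1000, HB_NDI_1080P30))   # 7 on a Gigabit link
print(max_streams(1000, NDI_HX_1080P30))   # 93 on the same link
```

The gap between the two results shows why NDI|HX is the practical choice on congested or low-bandwidth networks.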

      In 2016, one year after High Bandwidth NDI was released, a high-efficiency version of the IP-based solution called NDI|HX was released. It was specifically designed to work on low-bandwidth networks.

      One 1920×1080p@30 fps NDI|HX stream needs approximately 8 Mbps, making it highly versatile. However, the efficient compression used by NDI|HX does add some latency to the stream. The added latency varies depending on the device, frame rate, and resolution, but it’s typically no more than one or two frames of delay – hardly noticeable to the eye or ear.

      NDI and NDI|HX bandwidth recommendations

      Resolution / Framerate   High Bandwidth NDI (Mbps)   NDI|HX (Mbps)
      1920×1080p@30            100                         8
      1920×1080p@60            150                         10.5
      3840×2160@30             200                         18
      3840×2160@60             250                         30

      Hardware over software

      When choosing any encoder – and NDI encoders are no different – we always face the same question: hardware or software?

      Software encoders have their benefits in certain situations, but the performance and reliability hardware encoders offer are essential for an NDI production. Software encoding can be resource-intensive, requiring very powerful computers with state-of-the-art GPUs and plenty of memory to run effectively in an NDI workflow.

      Unless one or more computers already meet the requirements for processing NDI’s high-quality, low-latency video, encoding software can crash mid-recording or mid-stream. At best, the software encoder will add latency to the NDI feeds, undermining one of the biggest benefits of the solution. Determining the system requirements to run an NDI software encoder effectively can be challenging, as they vary depending on the content being produced.

      A hardware NDI encoder is an investment in performance and reliability. Optimized from the ground up to encode video, it acquires high-quality video and audio inputs with no added latency and no competing programs – making hardware a wise choice.

      Multi-encoding support

      Multi-encoding is the process of encoding one or more inputs multiple times with different settings. As a result, you have a wealth of redundancy by capturing all your assets simultaneously at different resolutions, bitrates, and frame rates. For example, you can send a 720p@30 fps live stream to a media server while saving a 1080p@30 fps recording to the device’s internal storage.

      Depending on the encoder, multi-encoding support can offer far greater depth than just encoding at different settings. Hardware encoders that feature multiple programs or channels can encode assets independently of the content being shared. For example, if your hardware encoder allows content to be mixed and switched within the device, you could have several layouts streaming at 720p@30 fps to a media server in one channel, a 1080p@30 fps recording saved to internal storage in another, and ISOs of each video asset recorded in separate channels as additional backups.
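A multi-encoding plan like the one just described can be written down as data, which also makes it easy to estimate how much storage the recordings will consume. The channel names, bitrates, and two-hour duration below are made-up examples for illustration, not a real Pearl configuration.

```python
# Illustrative multi-encoding plan: one event encoded several ways at
# once. All names, bitrates, and destinations are hypothetical.

channels = [
    {"name": "live-stream", "res": "1280x720@30",  "mbps": 4, "dest": "media server"},
    {"name": "program-rec", "res": "1920x1080@30", "mbps": 8, "dest": "internal storage"},
    {"name": "cam1-iso",    "res": "1920x1080@30", "mbps": 8, "dest": "internal storage"},
    {"name": "cam2-iso",    "res": "1920x1080@30", "mbps": 8, "dest": "internal storage"},
]

def storage_gb(mbps, hours):
    """Megabits per second -> gigabytes: Mb/s * 3600 s/h / 8 b/B / 1000 MB/GB."""
    return mbps * 3600 * hours / 8 / 1000

recordings = [c for c in channels if c["dest"] == "internal storage"]
total_gb = sum(storage_gb(c["mbps"], hours=2) for c in recordings)
print(f"{total_gb:.1f} GB of storage for a two-hour event")  # 21.6 GB
```

Sizing storage up front this way helps ensure the backup recordings promised by multi-encoding don't fill the device mid-event.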

      This is particularly useful any time you are streaming live events where there are no second chances. Should the stream stutter or fail, you have the recorded backup with its layouts, but you can also work off individual camera feeds to create a produced broadcast.

      Multi-encoding: What it is and when it's useful

      Multi-encoding is all about simultaneously encoding multiple video assets for streaming and recording. Read this article to dive deeper into what multi-encoding is and when to use it.

      Learn more

      Input versatility

      While NDI has many advantages, keeping your options open and having plan B’s readily available is always a good idea in video production. When searching for an NDI encoder, a device that can accept networked signals and physical cables ensures you can create the best possible video in any circumstance.

      When an encoder can accept HDMI, SDI, and NDI or NDI|HX equally, you can stream and record your content regardless of the network’s condition. Plus, you eliminate the need to add converters for any incompatible hardware. But the most significant benefit is that physically connecting certain devices can help keep the network clear to acquire the essential NDI signals with greater reliability. On the other hand, putting every device on the same network could cause network congestion.

      Be prepared for any situation with an NDI encoder that works with everything.


      Remote device management

      An NDI encoder sits at the heart of your workflow. As such, if it’s interrupted in any way, it can grind your video production to a halt.

      Whether you’re dealing with one NDI encoder or several, it’s essential to be able to access them from anywhere and mitigate any downtime. Remote device management is an extraordinary asset for any NDI encoder.

      By accessing the device through the cloud or receiving customized alerts from 24/7 monitoring, you can take corrective action immediately and begin troubleshooting from anywhere.

      When a device malfunctions, it’s stressful for all stakeholders. Giving yourself the convenience and flexibility to manage the device remotely in these high-pressure situations is insurance against production interruptions.

      Be there when you can’t with Epiphan Edge

      Epiphan Edge gives you a window into your Pearl-powered productions wherever you are with full remote control, 24/7 device monitoring, and more.

      Discover Epiphan Edge

      Interoperability

      On top of delivering high-quality video with greater convenience, NDI’s rise to prominence can also be attributed to its interoperability. As an open standard protocol, it’s publicly available to anyone, allowing any hardware manufacturer to implement it.

      When choosing an NDI encoder, selecting a device that shares NDI’s commitment to compatibility will make it a seamless fit in almost any situation.

      Integrations with content management systems like Kaltura or automation controls like Crestron or Q-SYS can make your productions vastly more efficient and easier to use. Devices with an open API allow owners to customize the hardware to suit their exact needs, further optimizing their workflow.

      Investing in an NDI encoder that can’t complement the systems already in place, either with built-in integrations or through the open API, can lead to frustration, downtime, and even the complete overhaul of video tech stacks.

      To take full advantage of the quality NDI offers, a device that works with virtually everything, like NDI itself, is a huge boon for integrators.

      The best NDI encoders: Epiphan Pearl-2 and Pearl Mini

      The award-winning Epiphan Pearl production systems, known for their reliability and versatility, can be featured at the center of your NDI workflows.

      Pearl Mini accepts two NDI|HX inputs, allowing you to connect high-quality, low-latency video more efficiently over your local networks.

      Pearl-2 allows users to receive up to six NDI|HX inputs or up to three High Bandwidth NDI inputs, and to output either a 1080p@30 fps or a 4K@30 fps High Bandwidth NDI stream.

      Discover what Pearl can do for your NDI workflows.


      How to add Microsoft Teams to vMix via Epiphan Connect
      https://www.epiphan.com/blog/how-to-add-microsoft-teams-to-vmix-via-epiphan-connect/ – Tue, 20 Dec 2022

      Take advantage of a new, simpler, more reliable method of adding Microsoft Teams guests to vMix projects with Epiphan Connect.

      The post How to add Microsoft Teams to vMix via Epiphan Connect appeared first on Epiphan Video.


      Microsoft Teams and vMix are both powerfully versatile applications. And when creators add Microsoft Teams to vMix, the possibilities for remote production are only limited by the user’s own creativity.

      Up until recently, when vMix users wanted to add Microsoft Teams participants to their vMix productions, they had to rely on NDI. And while Teams’ NDI-out feature does provide broadcast-quality video signals, few users have the network bandwidth to run it reliably.

      Thankfully, it’s now possible to add Microsoft Teams guests and screen share content to a vMix production on any network with Epiphan Connect. Not only is Epiphan Connect a cloud-based solution that’s more practical for users, but there are also additional benefits that will elevate your vMix projects’ production value.

      The best way to produce live events with Microsoft Teams or Zoom

      Compatible with a range of video encoding solutions, Epiphan Connect™ can extract up to nine Microsoft Teams or Zoom participant feeds and transport them into your productions in 1080p and with isolated audio.

      Discover Epiphan Connect


        Built to adapt to any network conditions

        Unlike NDI, which relies on high-speed local networks to achieve low-latency video, the Secure Reliable Transport (SRT) protocol is designed to deliver broadcast-quality content under any network conditions.

        Created with resilient error correction to compensate for packet loss, bandwidth fluctuations, and other connectivity issues, SRT is better suited to sharing content over the Internet. That’s why Epiphan Connect extracts isolated audio and video feeds from any Microsoft Teams meeting and makes them shareable via SRT.

        By using SRT, a protocol that was built to adapt to the unpredictability of public Internet, Connect removes a huge barrier of Microsoft Teams’ NDI-out feature. Once the feeds are isolated, simply place them into any SRT-enabled production tool like vMix and you’re free to start creating. No need to redo your local network infrastructure. Connect is ready to work as soon as you create an account.

        Learn more about SRT

        If you are looking to send low-latency video over public Internet, consider Secure Reliable Transport (SRT). SRT is a streaming protocol developed specifically to make low-latency, high-quality over-the-network video transmission possible. Learn more about the differences between SRT vs NDI.

        Powered by cloud

        When using Epiphan Connect to get video and audio from Microsoft Teams to vMix, local hardware and networks aren’t responsible for the extraction process. That all happens in the cloud.

        The isolated video and audio feeds are securely received directly from Microsoft servers and made available via SRT in your Epiphan Edge account. Because all the heavy lifting is done by servers built for scalability, the resolution of the extracted video is much higher than a standard call: up to 1080p.

        Rather than make your computer do all the work of generating, acquiring, and streaming, you can rest easy and let powerful servers shoulder the heavy load. This frees up your system for better performance during the actual production.


        Isolated audio with a single click

        Though Microsoft Teams NDI may deliver broadcast-quality audio, the feeds themselves leave a lot to be desired. Whether you select the primary speaker stream or individual users, the incoming audio is always mixed together. A single track for multiple speakers is an audio engineer’s worst nightmare, as it’s impossible to mix.

        When you use Epiphan Connect to add Microsoft Teams to vMix, you have the option of isolating each speaker’s audio. Simply select the “Isolated Audio” option when the extraction begins, and then you’re free to start, stop, and mix each participant’s audio inside vMix.

        Virtual confidence monitoring

        If you have ever joined someone else’s vMix project as a remote guest, you’ve no doubt felt a bit disconnected from the show. Despite a producer’s best efforts to keep you in the loop, nothing quite compares to seeing the finished product right in front of you.

        With Epiphan Connect, you can help remote guests stay involved in the action with the virtual confidence monitor. By enabling the return feed in Connect’s admin panel, a live look at the broadcast is added directly to the Microsoft Teams meeting. Guests can easily see the layouts, switches, and everything else the audience is seeing, keeping the broadcast flowing seamlessly.


        A more reliable way to join

        vMix users already know that the software has a built-in feature for getting guests into a project, known as vMix Call. This feature, however, is only available with the HD, 4K, and Pro vMix licenses, which cost $350, $700, and $1,200 USD respectively.

        Some users may prefer the native Call feature to live stream with guests, but this method is not without its challenges. For example, vMix Call doesn’t allow participants to share their screens, limiting how guests can contribute to the show. But the biggest issue is actually getting guests to join.

        It’s very common for firewalls to block vMix Call. This can be solved with extensive testing and, where needed, help from the IT department. But that kind of time with guests and the IT team isn’t always available.

        Rather than gamble your project’s success on the IT department’s availability, it’s far safer to lean on Microsoft Teams. It’s an app available on every device and trusted by IT professionals around the world.

        How Microsoft Corp leveraged Epiphan Connect

        Learn how Microsoft used our tool to elevate its all-hands event into an engaging video experience.

        Watch the video

        Adding Microsoft Teams to vMix with Epiphan Connect

        To take advantage of the stream stability, improved video quality, isolated audio, virtual confidence monitor, and streamlined guest experience that Epiphan Connect has to offer, follow these steps:

        Step 1: Log in to Epiphan Edge, select Epiphan Connect, and add the Microsoft Teams meeting URL.

        Step 2: In vMix, select “Add Input” and select Stream / SRT.

        vMix SRT stream

        Step 3: In vMix, select “SRT caller.” In Epiphan Connect, select “SRT listener.”

        Step 4: Copy the SRT URL from Epiphan Connect and add it to the “Hostname” field, without the srt:// prefix or the port number.

        The hostname should look like this: extractorbot-97bffb91-d698-47fe-a532-171a64a3224f.connect.epiphan.cloud

        Step 5: Add the port number from Epiphan Connect into the “Port Number” field in vMix.

        Step 6: Add the full SRT URL from Connect to the “Stream” field in vMix.

        Step 7: Record or live stream with remote guests joining the vMix production from Microsoft Teams!
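        If you'd rather not split the URL by eye, the hostname, port, and stream fields vMix asks for can all be derived from the full SRT URL. A minimal Python sketch, using a hypothetical URL in the same shape as the one above:

```python
from urllib.parse import urlparse

def split_srt_url(srt_url: str) -> dict:
    """Split a full SRT URL into the three pieces vMix asks for:
    the bare hostname, the port number, and the full URL."""
    parsed = urlparse(srt_url)
    if parsed.scheme != "srt":
        raise ValueError("Expected a URL starting with srt://")
    return {
        "hostname": parsed.hostname,  # goes in vMix's "Hostname" field
        "port": parsed.port,          # goes in vMix's "Port Number" field
        "stream": srt_url,            # goes in vMix's "Stream" field
    }

# Hypothetical example URL in the same shape as Epiphan Connect's
fields = split_srt_url("srt://example-extractor.connect.epiphan.cloud:7001")
print(fields["hostname"], fields["port"])
```

The same split works for each participant's URL if you're bringing in multiple feeds.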

        Epiphan Connect: Produce broadcast-quality content with Microsoft Teams or Zoom

        Featuring Epiphan Connect™ for Microsoft Teams and Zoom

        Epiphan Connect bridges the gap between the convenience of video conferencing and the quality of broadcast.

        How to troubleshoot Microsoft Teams NDI https://www.epiphan.com/blog/troubleshooting-microsoft-teams-ndi/ https://www.epiphan.com/blog/troubleshooting-microsoft-teams-ndi/#respond Fri, 16 Dec 2022 09:14:54 +0000 https://www.epiphan.com/?p=166002 If you’re struggling to get Microsoft Teams NDI up and running, we’re here to help you diagnose the issue and work around the problem so that you can produce exceptional remote content.


        Microsoft Teams NDI has been a tantalizing addition to the premier collaboration app. It has enabled creators to receive high-quality, low-latency video signals for their remote and hybrid productions from an app used by 270 million people. However, NDI is not without its challenges.

        Creators frequently run into issues when working with NDI – not just on MS Teams, but any NDI stream. If you’ve been struggling to get Microsoft Teams NDI up and running, we’re here to help. Read on to diagnose your NDI issues and find out more about fixes or alternative methods you can use to produce great content with Microsoft Teams.

        The best way to produce live events with Microsoft Teams or Zoom

        Compatible with a range of video encoding solutions, Epiphan Connect™ can extract up to nine Microsoft Teams or Zoom participant feeds and transport them into your productions in 1080p and with isolated audio.

        Discover Epiphan Connect

        Contents

          Has NDI been enabled in Teams?

          It may seem obvious, but the most common issue creators have with Microsoft Teams NDI is that it hasn’t been properly enabled. If you can’t find the purple “Broadcast over NDI” button in Teams, this is likely your issue.

          To enable NDI in Microsoft Teams, you – or whoever serves as your organization’s Microsoft administrator – must first allow it through the Microsoft Teams admin center.

          Start by entering the Microsoft Teams admin center. Select “Meeting policies” under the “Meetings” tab on the lefthand menu.

          In the “Meeting policies” menu, click “+ Add” and scroll down to the “Audio & video” section. Find the “Allow NDI streaming” button and toggle it on. Save the policy.

          Allow NDI streaming in Microsoft Teams

          Once the policy has been saved by the Microsoft admin, you can now go into Teams, select the three dots next to your profile picture and choose Settings. In the Settings menu, select the Permissions tab and toggle on Network Device Interface (NDI).

          enable Microsoft Teams NDI

          Start a Microsoft Teams meeting, select More actions and click the purple “Broadcast over NDI” button.

          If the option to broadcast over NDI still isn’t available, the issue may be your operating system. The NDI function is not currently supported on M1 Macs.

          Are you meeting the hardware requirements?

          If you notice frames dropping or your system keeps crashing, it’s possible that your hardware is not powerful enough to run Microsoft Teams NDI. NDI can be extremely taxing for certain systems’ CPU usage. Microsoft support even recommends “…limiting the number of NDI®-out video streams to two or three on any single computer” for maximum performance.

          With the right hardware, the NDI broadcast should only take up about 50% of your CPU. However, some creators report that running NDI pushes their CPU to 90% usage. When the processor is maxed out like that, overheating, freezing, and crashes are commonplace. To ensure your recording or stream can be completed, keep an eye on how your computer is managing its resources.

          To check this, start a Teams NDI broadcast. On a Windows PC, open Task Manager and check Microsoft Teams’ CPU usage under the “Processes” tab. If it’s hovering above 50%, your hardware could use an upgrade.

          Microsoft Teams NDI issues

          There are a couple of ways you can approach a hardware upgrade. You could shop for a new computer: a PC with an Intel i5 Sandy Bridge CPU or better, a discrete NVIDIA GPU with 2 GB of video memory or more, and 8 GB of system memory ought to do the trick.

          Alternatively, you could rely on a hardware production system that supports NDI – like the Pearl-2 – to do the heavy lifting. A hardware production system is more reliable than the average computer because it was built specifically to encode and decode all manner of video signals, whereas even the most powerful computer is built for a variety of tasks, from processing video to keeping dozens of browser tabs open.

          Either way, if your CPU is approaching 90% usage when broadcasting NDI from Microsoft Teams, your current setup can’t handle the labor-intensive requirements. If upgrading your hardware is out of the question, you can always look into an SRT-based alternative.

          Are you meeting the bandwidth requirements?

          If your PC’s CPU usage looks fine and you’re still dealing with dropped frames, freezing, and all-out crashes, then there’s just one culprit left: your local network.

          NDI video consumes a lot of local network bandwidth. A single 1920×1080 NDI stream running at 30 fps needs at least 125 Mbps of dedicated bandwidth. And that’s just a single example. Higher framerates, higher resolution, and more streams will require more bandwidth.

          Ideally, you should use a Gigabit Ethernet connection when working with any NDI stream. Public networks just don’t have the stability to handle transporting low-latency video. So, if it’s a bandwidth issue, you may need to upgrade the existing infrastructure: look into wiring, switches, and anything else you’ll need to reach the required transfer speeds.
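          To put those numbers in perspective, here is a back-of-the-envelope budget check. It assumes the ~125 Mbps per 1080p30 figure cited above, linear scaling with stream count, and a 25% safety margin on a Gigabit link – all rough assumptions, not hard limits:

```python
# Rough NDI bandwidth estimate (assumed figures, not measured values)
GIGABIT_CAPACITY_MBPS = 1000
PER_STREAM_MBPS = 125  # ~1920x1080 @ 30 fps, full NDI, per the figure above

def fits_on_gigabit(stream_count: int, headroom: float = 0.75) -> bool:
    """True if the streams fit within a safety margin of a 1 GbE link."""
    return stream_count * PER_STREAM_MBPS <= GIGABIT_CAPACITY_MBPS * headroom

for n in (2, 4, 6, 8):
    print(n, "streams:", "OK" if fits_on_gigabit(n) else "over budget")
```

With these assumptions, the budget runs out around six full-bandwidth 1080p30 streams per Gigabit link; higher framerates or resolutions eat into that much faster.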

          Pro Tip

          If you are looking to send low-latency video over public Internet, consider Secure Reliable Transport (SRT). SRT is a streaming protocol developed specifically to make low-latency, high-quality over-the-network video transmission possible. Learn more about the differences between SRT vs NDI.

          If you have the fastest network available, it could be that other people are clogging the network with their own activity. If that’s the case, it may be as simple as telling the other people on the network to cease their uploads and downloads until after the recording or stream concludes.

          An alternative to Microsoft Teams NDI: Epiphan Connect

          Microsoft Teams’ NDI feature is just one method of getting high-quality, low-latency video signals from the communication app. Another way to get the video you need from MS Teams – which doesn’t require a computer or network upgrade – is Epiphan Connect.

          Epiphan Connect is an accessible alternative to Microsoft Teams NDI because the onus is not on your local network or system. The extraction happens in the cloud, freeing up your hardware and bandwidth to do the actual recording or streaming.

          Once Epiphan Connect has been added to your Microsoft Tenant by the administrator, you can add it to any Microsoft Teams meeting. Just log in to Epiphan Cloud, select the Epiphan Connect button on the left-hand side and click on the green “Connect to a meeting” button. Enter the Microsoft Teams meeting URL, select your audio preferences, and then hit “Connect & Join.”

          Epiphan Connect

          producing video with Microsoft Teams

          After Epiphan Connect joins the Teams meeting, each participant’s video and audio will be available as SRT streams. SRT was specifically designed to deliver media across unpredictable public networks. It’s becoming increasingly popular with broadcasters because it’s a cheaper and logistically simpler alternative to satellite trucks and private networks.

          The SRT streams can be added to any production tool that supports the protocol – which almost all do. Because the extraction is performed by powerful servers, the video resolution will be higher than a typical Teams call: up to 1080p. You can control and customize the resolution as desired, as well as choose between mixed and isolated audio.

          NDI is no longer the be-all and end-all of getting exceptional video signals from Microsoft Teams. Thanks to Epiphan Connect, there’s now a much more feasible way of getting great video and audio from the app – one without such strict hardware and network requirements.

          Epiphan Connect: Produce broadcast-quality content with Microsoft Teams

          Start your free Epiphan Connect trial today

          Sign up now and enjoy your first month free!

          Elevate the way you bring remote contributors into your projects with Microsoft Teams and Epiphan Connect.

          How to use OBS with Microsoft Teams https://www.epiphan.com/blog/how-to-use-obs-with-microsoft-teams/ https://www.epiphan.com/blog/how-to-use-obs-with-microsoft-teams/#respond Mon, 12 Dec 2022 20:31:57 +0000 https://www.epiphan.com/?p=165791 Looking for an easier, more reliable way to add remote guests to OBS? Look no further as we break down how to use OBS with Microsoft Teams.


          Microsoft Teams and OBS Studio are two heavyweights in their respective fields. The former streamlines how we collaborate remotely, while the latter allows budding and experienced creators to easily share their content. But these two hugely popular apps don’t have to exist in separate worlds. There is an easy way to combine their uses to create exceptional content. Let us walk you through how to use OBS with Microsoft Teams.

          By following this guide, you’ll be able to add remote participants from any Microsoft Teams meeting to your OBS Studio project with Epiphan Connect. Improving the quality of remote productions, Epiphan Connect can open a world of possibilities for OBS and Microsoft Teams users.

          The best way to produce live events with Microsoft Teams and Zoom

          Compatible with a range of video encoding solutions, Epiphan Connect™ can extract up to nine Microsoft Teams or Zoom participant feeds and transport them into your productions in 1080p and with isolated audio.

          Discover Epiphan Connect

          Contents

            An easier way to connect Microsoft Teams and OBS

            Before Epiphan Connect, using OBS to capture, record, or stream a Microsoft Teams meeting could be a tricky proposition. There were a few different methods available, such as installing virtual camera plugins or ingesting Microsoft Teams’ NDI output. But these methods had their risks, given the strain they put on operating systems and networks.

            Thankfully, Epiphan Connect doesn’t rely on local hardware or monopolize network bandwidth to provide OBS users with high-quality, stable signals for their production. All the heavy lifting is done by the cloud, enabling OBS users to focus on the actual content of their recording or stream rather than mitigating technical difficulties.

            But alleviating the load on networks and hardware is just the beginning. Here are three more reasons why Connect is the perfect link between OBS and Microsoft Teams:

            Improved video quality

            A common complaint about using OBS’s virtual camera to capture Microsoft Teams content and participants is the low video quality. There are typically two explanations for why the virtual camera looks blurry or laggy:

            • Your PC lacks the horsepower to run multiple virtual cameras at 1080p or 30 fps along with all the other programs and media supporting the production, causing it to drop frames and compromise quality.
            • An issue within your network connection, such as low bandwidth, may be causing packet loss in the incoming video feed, resulting in dropped frames, jitter, or low resolution.

            Virtual Camera

            Epiphan Connect

            And while you may be able to resolve these issues by adjusting the virtual camera settings, buying a new computer, or calling your Internet service provider about a plan upgrade, Epiphan Connect bypasses these common issues by doing the hard work of extracting a video signal in cloud servers.

            Epiphan Connect isolates participant and screen share video feeds in Microsoft Teams and sends them to the cloud. The feeds are received directly from Microsoft servers so at no point in the process are you relying on local hardware or network bandwidth for the signal. This allows Connect to capture a clean image in a much higher resolution than a standard call: up to 1080p. And because the isolated video feeds are sent in Secure Reliable Transport (SRT), you don’t need to worry about packet loss or jitter. Learn more about what makes SRT protocol for streaming a reliable way to send video.

            Isolated audio

            Clean audio is nearly impossible to achieve when relying on OBS’s Virtual Camera feature to capture Microsoft Teams content. Even with the Virtual Audio Cable plugins available for OBS, it’s only possible to receive a single, mixed audio track featuring all participants. There’s no real way of controlling individual audio feeds, leaving the door open for a crowded sonic experience.

            Even if you are working with experienced guests who have developed a keen sense of timing and tact, there’s no guarantee you’ll get a good balance of their audio. You may find yourself mixing the audio live during the stream to compensate for guests who are too loud or too quiet, which can be risky.

            Rather than rely on live sound mixing or guests’ audio competency, take control of the audio with Epiphan Connect. It allows you to capture isolated audio feeds from each participant. You can take full control over who the audience hears, balancing their levels well in advance, in separate tracks. Anyone who works in video will tell you that sound is just as important as video – if not more so – and Connect makes capturing participant audio easy.

            obs isolated audio connect

            Virtual confidence monitor

            Getting guests to join your OBS project via Microsoft Teams is incredibly convenient. It’s free, familiar, and available on all devices. But when using NDI or OBS’s Virtual Camera to get their Teams feeds into the production, they can sometimes feel disconnected from what’s happening in the show. This can lead to guests missing their cues, not realizing they are live, or skipping over important content.

            Epiphan Connect can help remote guests stay involved in the action with the virtual confidence monitor. By enabling the return feed in Connect’s admin panel, a live look at the broadcast is added directly to the Teams meeting. Guests can easily see the layouts, switches, and everything else the audience is seeing to keep the broadcast flowing seamlessly.

            connect virtual confidence monitor

            Enjoy the freedom and flexibility Epiphan Connect provides while elevating the production value of any OBS project that features remote guests.

            Add MS Teams to OBS in 3 steps with Epiphan Connect

            Sign up for an Epiphan Cloud account and register for Epiphan Connect.

            Add Epiphan Connect to your Microsoft Tenant. If you are not the Microsoft administrator, get in touch with the person who is and ask them to add Epiphan Connect.

            Once Epiphan Connect has been added, follow these steps to use Microsoft Teams with OBS.

            Step 1: Add Epiphan Connect to the Teams meeting and copy the SRT URLs.

            Step 2: In OBS, click the + icon in the Sources panel and select “Media Source” – make sure the “Local File” box is unchecked.

            Step 3: In OBS, add the SRT URLs from Epiphan Connect into the “Input” field and watch as the remote guests from Microsoft Teams connect to your OBS project.

            Step 4: Stream or record with confidence and convenience!
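            When you’re wiring up several participant feeds, it helps to keep source names and SRT URLs paired up before pasting them into OBS. A small illustrative Python helper – the function name, source-naming scheme, and URLs are all hypothetical:

```python
def build_obs_sources(participants, srt_urls):
    """Pair each Teams participant with their SRT URL, mirroring what you
    enter in OBS's Media Source "Input" field (with "Local File" unchecked)."""
    if len(srt_urls) != len(participants):
        raise ValueError("One SRT URL per participant is expected")
    return [
        {"name": f"Teams - {who}", "input": url, "is_local_file": False}
        for who, url in zip(participants, srt_urls)
    ]

# Hypothetical placeholder URLs in the same shape as Epiphan Connect's
sources = build_obs_sources(
    ["Host", "Guest 1"],
    ["srt://a.connect.epiphan.cloud:7001", "srt://a.connect.epiphan.cloud:7002"],
)
for s in sources:
    print(s["name"], "->", s["input"])
```

A checklist like this makes it easy to verify every guest has a matching Media Source before going live.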

            Epiphan Connect: Produce broadcast-quality content with Microsoft Teams

            Start your free Epiphan Connect trial today

            Sign up now and enjoy your first month free!

            Elevate the way you bring remote contribution into your OBS projects with Microsoft Teams and Epiphan Connect.

            How to host a virtual product launch event https://www.epiphan.com/blog/how-to-host-a-virtual-product-launch-event/ https://www.epiphan.com/blog/how-to-host-a-virtual-product-launch-event/#respond Thu, 01 Dec 2022 04:37:00 +0000 https://www.epiphan.com/?p=164660 Reach and engage your customers around the world with a virtual product launch event! Follow this guide to find out how to assemble the perfect virtual event to celebrate your hard work.


            Your team has spent months – sometimes years – brainstorming, testing, and perfecting a new product that’s about to turn your industry on its head. Now with the release on the horizon, you owe it to your team, your investors, your audience, and yourself to mark the occasion and celebrate all the hard work with a product launch event.

            A blog post, a press release, and social media posts are all necessary assets to help get the word out about your brand’s next big step. But these simply don’t generate the same buzz as an event.

            When done successfully, a launch event generates hype and gives customers, as well as the press, opportunities to interact with the creators. The anticipation and interactivity often help form positive impressions and demand-generation momentum.

            So let’s find out how you can drum up some excitement ahead of your brand’s next big release and delight customers around the world with a virtual product launch event.

            Jump to

              Why your next launch event should be virtual

              As exciting as it can be to browse around for theaters, convention halls, or rooftop terraces, it may be more advantageous for your brand to host the event virtually.

              Increased reach

              Only a handful of brands have a following dedicated enough to fly across the country to attend the latest launch. When all the audience has to do is click a link, attendance goes way up. You can now spread awareness to customers wherever they are.

              Flexibility for guest speakers

              Even with the incentive of an all-expenses-paid trip to your city, some guests simply won’t be able to fit your event into their busy schedules. By making the event virtual, their three-day commitment becomes a 15-minute appearance, which is far more feasible and appealing to those high-powered, in-demand influencers that can bring extra attention to your event.

              Seamless engagement

              Chats, polls, and other interactive elements allow the audience to actively participate in the action without disrupting the show’s flow. Their real-time reactions are an invaluable resource. It’s a finger on the pulse allowing you to get to know your audience better for future ventures.

              Cost-effectiveness

              Renting a venue is expensive. The space itself is always costly, but there are additional, sometimes hidden costs to consider, like flying guests in, catering staff, AV support staff, stage design, printed materials, or gift bags. Virtual launches may still require some professional assistance, but the labor and associated costs will be significantly lower.

              Benefits of live streaming: Product announcements
              Screenshot from the Pearl Nano live product launch video. Live product launches are a great way to generate hype for your company’s latest and greatest innovations.

              Hosting a virtual product launch event

              If you’re ready to engage customers worldwide and create some big buzz around your upcoming release, follow these steps to create an unforgettable virtual event.

              Workshop the story you want to tell

              How do you get the audience excited? How do you make this event memorable? What would you like reporters to say after the event? It all starts with figuring out what you want to say.

              Thankfully, by now you have unique value propositions for the latest release. The key now is to distill those value props down to a digestible, compelling message.

              Virtual event production at Microsoft

              Learn how Microsoft leveraged Epiphan Connect to elevate its global all-hands meeting into an engaging video experience.

              Watch the video

              Tip: A/B test this with friends. Tell them about your new product in different ways and gauge their interest, excitement, and understanding. Make tweaks to your pitch based on their reactions or follow-up questions.

              Organize a production team

              Because of its lightweight, flexible nature, a virtual event can be run by a three-person team. These three essential roles are:

              • Tech producer. Responsible for making sure the video, sound, and live stream are working correctly, and for creating and switching between branded video layouts.
              • Talent and content producer. Responsible for booking guests, organizing the run-of-show, managing content, and coaching speakers and guests.
              • Live moderator. Responsible for answering questions in the chat, asking questions, publishing polls, and managing other interactivity.

              It is possible to spread these responsibilities across a larger team. But as long as you have these three key roles filled, you’ll have everything you need to run the event smoothly.

              Set presenters and guests up for success

              If you have the budget available, it’s worth shipping capture cards to presenters so they can connect professional cameras to their computers. The average, built-in webcam just can’t compare to the image quality of even an old DSLR. The same goes for sound. If you can send an $80 condenser mic to speakers, it’ll be worth it because the quality will be exponentially better than the built-in microphone. These upgrades add production value, which will help communicate your message more clearly.

              If there’s no budget or time to ship out these upgrades, don’t panic. There is a way to get high-quality video and audio with everyday tools that’s easy for speakers, guests, and producers. Epiphan Connect can turn any ordinary Microsoft Teams meeting into a virtual studio. When Epiphan Connect is added to a Microsoft Teams call, the meeting participants’ audio and video – as well as the screen share – are isolated into stable streams that can be ingested by any production tool.

              Not only do producers get high-quality, isolated video and audio to work with, but they can also rest easy knowing that speakers and guests won’t have to do any setup. They join the meeting just like they’ve done millions of times before. And once they’re in, they’re ready to go live.

              Epiphan Connect: Produce broadcast-quality content with Microsoft Teams or Zoom

              Featuring Epiphan Connect™ for Microsoft Teams and Zoom

              Epiphan Connect bridges the gap between the convenience of video conferencing and the quality of broadcast.

              Find your audience

              YouTube or Twitch? Facebook Live or any of the myriad, virtual event platforms? These may seem like tough decisions, but there’s typically an obvious answer.

              Take a look at your social media accounts. If you have a big Facebook audience, then stream to Facebook Live. Simple as that. In the event there’s no clear-cut platform choice, it is possible to multistream to multiple sites.

              how to multistream
              Multistreaming: How to live stream to multiple destinations
              Read more

              Create the content

              Now the event is starting to take shape – you have your story, your crew has been assembled, you have your tools, and know which platform you’ll be streaming to. The key now is to see how all the pieces fit together.

              Create every asset you think you’ll need to dazzle the audience. This should include:

              • Graphics and lower thirds to make the final show look professional
              • Layouts of the different speaker views
              • Supporting media like intro videos, slides, diagrams, and anything else that may help tell your story
              • A timer to add a sense of urgency
              • Giveaways and prizes
              • Polls, questions, and other interactive elements

              As you’re creating this content, be mindful of how long it’ll be once you go live. Overstuffing a launch event is a critical mistake. The punchier your event is, the stickier it’ll be in the minds and memories of the audience.

              Rehearse, rehearse, rehearse

              The difference between a good virtual launch event and a great one is largely determined by how much rehearsal time you’ve budgeted.

              It is essential to take as much time as possible to iron out any technical issues on the production side, make sure speakers are comfortable and confident with their points and setups, and that the overall show isn’t too long.

              Be prepared to rework or cut segments, change the order, and walk through technical information you may take for granted. It can be a difficult process with tough decisions, but it’s better to wrestle with it now while no one’s watching.

              Sample run-of-show table

              Promote, promote, promote

              While you’re finalizing the show in rehearsal, give yourself plenty of time to promote the event wherever possible. Create LinkedIn or Facebook ads, tweet about it, and reach out to media and influencers. Tease giveaways and prizes, create a sense of FOMO. Whatever you can to spread the word, do it.

              No matter what your promotional strategy looks like, make sure it’s easy for customers to RSVP. For example, if you send an email announcement to customers, provide them with a one-click signup to reserve their spot. Fewer hoops to jump through means more RSVPs and better engagement on the big day.

              Showtime


              Follow-ups and VoD

              There’s a certain “you had to be there” quality to an in-person launch. Even if it’s recorded, when it’s over, it’s over. Virtual events, on the other hand, are a little more permanent thanks to their accessible nature. Anyone with a link can feel like part of the action even months after you went live.

              While the live show is great for those who can attend, the video-on-demand of the event’s stream might get 10 times the views with the right promotion, keywords, and tags. Send out follow-up emails, post links to social media, and leave no stone unturned to make sure you’re taking advantage of the longer shelf-life this format affords.

              Unlock the full potential of virtual events

              Product launches are special occasions that don’t come around too often. However, that doesn’t mean you can’t apply the strategies and tips discussed above to other kinds of virtual events. They may not require the same level of planning or promotion, but any virtual event can help galvanize customers into a real community.

              All it takes is your creativity. And with Epiphan Connect and Microsoft Teams, you’ll be able to execute your vision from anywhere with trust in the stability and ease these tools provide for virtual events.

              How to produce live events with MS Teams https://www.epiphan.com/blog/how-to-produce-live-events-with-ms-teams/ https://www.epiphan.com/blog/how-to-produce-live-events-with-ms-teams/#respond Wed, 16 Nov 2022 05:14:26 +0000 https://www.epiphan.com/?p=164873 With 270 million users and growing, Microsoft Teams has become a staple of the modern workplace, streamlining how we communicate and collaborate. But it’s more than just a way for us to work and learn together.

              The post How to produce live events with MS Teams appeared first on Epiphan Video.


              With 270 million users and growing, MS Teams has become a staple of the modern workplace, streamlining how we communicate and collaborate with our colleagues and classmates no matter where they are. 

              But it’s more than just a way for us to work and learn together. 

              More and more, we’re seeing virtual and hybrid event producers lean on Microsoft Teams to host their productions. This is because most guests already use the app daily, creating a familiar environment that boosts their confidence before going live. 

              Let’s look at some of the ways you can produce live events with MS Teams to elevate your hybrid and virtual events. We’ll cover three ways to extract live content: capturing isolated participant and screen-share feeds, NDI broadcasting, and Epiphan Connect.

              Jump to:

                Quick-look comparison

                                                Isolated speaker     Microsoft Teams     Epiphan
                                                screen capture       NDI-out             Connect

                Number of participants          Unlimited*           3**                 9***
                Max resolution                  720p                 720p                1080p
                Remote access                   No                   No                  Yes
                Requires additional hardware    Yes                  Yes                 No

                * Requires a dedicated computer for every participant
                ** Possible to do more than 3, but the quality drops as you add more
                *** Per Connect instance; you can add more participants by adding more Connect instances

                The best way to produce live events with Microsoft Teams and Zoom

                Compatible with a range of video encoding solutions, Epiphan Connect™ can extract up to nine Microsoft Teams or Zoom participant feeds and transport them into your productions in 1080p and with isolated audio.

                Discover Epiphan Connect

                Isolated speaker screen-capture

                Effort level: Triathlon

                Feasibility: How much gear you got?

                Results: Decent, with a high risk of failure

                Setting up isolated full-screen captures:

                • Start your Microsoft Teams call
                • On separate computers, pin and fullscreen the individual speakers, creating the equivalent of an independent video signal
                • Capture the separate speakers’ feeds using a hardware encoder or streaming software
                • Crop, mix, and switch between sources, add lower thirds or graphics in your production tool of choice
                [Diagram: isolated speaker screen capture with MS Teams]

                Setting up an isolated speaker screen capture is a perfectly functional (but clumsy) way to produce live events with MS Teams.

                For isolated speaker screen capture to work, your producer will need a separate device for each individual speaker and another for screen share content. If you plan on featuring five guests and a slide deck, you’ll need six laptops all acting as sources, feeding into hardware or software encoders. The event’s producer will also need to join the call from the six different computers, creating a crowded virtual studio. 

                It’s doable. And some of the most experienced producers trust this method because it’s what they’ve always had to do to get isolated, switchable video from Microsoft Teams meetings. 

                But events with more than two featured speakers tend to be problematic with this method: the setup takes up a lot of room, it’s tough to monitor, and each device is susceptible to its own crashes and freezes. Things get especially difficult when guests go off script and decide to share their screen, throwing every layout out of whack. 

                Again, it’s doable, but we recommend reserving this method for the experienced, steady hands already comfortable with and equipped for this method. 

                MS Teams NDI

                Effort level: A steep hike

                Feasibility: Very doable with admin access and available hardware

                Results: High quality in controlled environments  

                How to enable NDI in Microsoft Teams:

                • Start your Microsoft Teams call
                • Check the settings to make sure NDI broadcasting is enabled. Contact your Microsoft Teams administrator if you don’t see it available
                • Click the “Broadcast with NDI” button
                • Make sure your hardware or software encoder is connected to the local network
                • Add the speakers’ NDI feeds to the encoder
                • Crop, mix, and switch between sources, add lower thirds or graphics in your production tool of choice
                [Diagram: producing a live event with MS Teams NDI]

                Enabling NDI-out broadcasting on Microsoft Teams is a far more elegant solution compared to isolated speaker screen capture. It eliminates the need for multiple devices, which is a huge plus, connecting the meeting directly to the streaming software or hardware encoder digitally over the local network.

                NDI-out gives producers a clean, UI-free video and audio feed to work with, affording them a lot of options when designing layouts. However, for the best results, NDI should be used in controlled environments as NDI outputs can be a strain on network and hardware performance. For example, a single 1080p NDI stream typically requires up to 100Mbps for consistent, maximum quality. If there are three 1080p NDI streams running concurrently, that’s a load of 300Mbps. 

                This may not be a problem if you’re set up in your studio, hardwired to a network that transmits data at speeds more than 1 Gigabit per second. But it can cause issues if you’re streaming from a client’s conference room and someone else on the network decides to upload video files to the cloud. Depending on the network’s speed and bandwidth available, even someone opening their browser on the same network can cause interruptions and packet losses.
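To gauge whether a venue’s network can carry your NDI sources, a quick back-of-the-envelope check helps. The sketch below uses the rough per-stream figure quoted above (about 100 Mbps for a full-bandwidth 1080p NDI stream); actual NDI bitrates vary with resolution, frame rate, and content, so treat these numbers as ballpark assumptions rather than guarantees.

```python
# Ballpark NDI bandwidth check. The 100 Mbps per-stream figure is an
# assumption taken from the estimate above, not a measured value.
FULL_NDI_1080P_MBPS = 100

def ndi_headroom(num_streams: int, link_capacity_mbps: float,
                 per_stream_mbps: float = FULL_NDI_1080P_MBPS) -> float:
    """Return the capacity (in Mbps) left over after carrying the NDI streams."""
    return link_capacity_mbps - num_streams * per_stream_mbps

# Three full 1080p NDI streams on a gigabit link:
print(ndi_headroom(num_streams=3, link_capacity_mbps=1000))  # 700
```

If the result is close to zero (or negative), consider NDI|HX, a dedicated network segment for video, or a hardware encoder before going live.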

                Understanding the network’s limitations is essential, but it’s not the only impediment to watch out for. When using NDI, it’s also essential to know your hardware’s limits. Depending on your setup, streaming via NDI can push a CPU up to as much as 90% usage. If your operating system decides to refresh a background application mid-stream, it could send the whole production tumbling down. We strongly recommend using a hardware encoder to stream NDI-out because it shoulders the encoding load that would otherwise fall on your computer’s CPU.

                Microsoft Teams’ NDI-out function can create an incredible end product, but it’s ideal for controlled environments with dedicated hardware. Any fluctuations in network bandwidth or CPU usage can derail the stream’s performance. 

                NDI broadcasting isn’t enabled in Microsoft Teams by default. Your Microsoft Teams administrator will have to enable NDI in Microsoft Teams through the admin center. Next, you’ll need to enable NDI at the user level. To find out how to do that, see Microsoft’s instructions for broadcasting audio and video from Teams with NDI.

                Epiphan Connect and MS Teams

                Effort level: A walk in the park

                Feasibility: Cloud efficiency for greater flexibility

                Results: Highest quality and vastly simplified setup

                How to use Epiphan Connect:

                • Create an Epiphan Connect account and pair it to your Microsoft Teams account
                • Paste the MS Teams meeting URL in Epiphan Connect
                • Epiphan Connect bot joins the meeting
                • Connect isolates each participant and screen share into separate feeds
                • Pull the participants’ isolated feeds into your production tool of choice (check out Epiphan Unify if you want to produce from the cloud)
                • Crop, mix, and switch between sources, add lower thirds or graphics in your production tool of choice
                [Diagram: producing live events with MS Teams and Epiphan Connect]

                Epiphan Connect simplifies the production workflow, eliminating the need for local hardware and creating a true, cloud-based virtual event. Producers, like the participants, can be anywhere in the world. Once the participants are isolated into separate feeds, they can be added to any production, and the output is unparalleled: full HD video, isolated audio, and the freedom to go wherever your creativity takes you.

                This tool is the long-awaited marriage between convenience and power. It makes producing live events with MS Teams easier for all producers. No more complex webs of wires or worrying about who else might be connected to the network. Start a meeting and let the cloud handle the packet transfers with full, real-time diagnostics. 

                It’s been amazing to see the leaps and bounds we’ve made in such a short period of time. Not long ago, the only option to produce professional content through Microsoft Teams was isolated speaker screen capture. And while complicated for producers, it has always delighted guests with its simplicity. Now, with the emergence of Epiphan Connect, we’ve already found a way to convenience both speaker and producer. 

                To see it in action, watch how Microsoft Corp was able to leverage Epiphan Connect to elevate its global all-hands event into an engaging video experience without the costs and complexity of typical event productions.


                The familiarity of Microsoft Teams and the power of Pearl

                Together, Epiphan Connect and Pearl production encoders allow you to create live event content that is sure to impress and engage your audience.

                Discover Epiphan Connect

                https://www.epiphan.com/blog/how-to-produce-live-events-with-ms-teams/feed/ 0
                SRT protocol for streaming explained https://www.epiphan.com/blog/srt-protocol/ https://www.epiphan.com/blog/srt-protocol/#respond Wed, 13 Jul 2022 09:00:00 +0000 https://www.epiphan.com/?p=162177 The Secure Reliable Transport (SRT) protocol is a huge asset for any streamer or company looking to produce live video remotely. Read on to find out what makes SRT special and how you can take advantage.

                The post SRT protocol for streaming explained appeared first on Epiphan Video.


                It’s a common challenge with remote productions: you want to bring in a guest who can contribute a lot to your content, but their feed comes in grainy and full of stutters and stops. The culprit: your guest’s less-than-stellar network conditions. Do you cut them from the program, or live with the drop in quality?

                With the Secure Reliable Transport (SRT) protocol, the answer is easy: count them in, and don’t worry about it hurting your quality. SRT will compensate for the shortcomings of their network.

                So what is the SRT protocol, exactly? And how can you take advantage of SRT’s game-changing capabilities for streaming and remote video production? Read on for answers to these and other questions about this powerful protocol.

                Contents

                  The best SRT encoder for remote contribution

                  The portable and powerful Pearl Nano is the perfect SRT hardware encoder. Compact and cost-effective to ship, easy to set up, and simple to use, this award-winning device will meet all your SRT needs.

                  What is the SRT protocol?

                  SRT is an open-source video transport protocol developed by Haivision. SRT stands for “Secure Reliable Transport,” which reflects two of its major benefits for video and audio streaming: security and reliability.

                  The goal of SRT is to deliver high-quality, low-latency video streaming over unpredictable public networks, which it does by:

                  • Offering unparalleled control over video and audio transmission, including the ability to adjust latency, buffering, and other key parameters to suit network conditions
                  • Incorporating a unique, bi-directional User Datagram Protocol (UDP) stream that continuously sends and receives control data during streaming
                  • Compensating for packet loss, jitter, and other threats to quality based on UDP stream data

                  SRT is shaking up not just the realm of Internet streaming but the broadcast world as well. That’s because SRT technology can replace costly (and logistically problematic) satellite trucks and private networks for many remote video applications. Think reports from the field and contributions from guests in another city, country, or continent.

                  Is SRT better than RTMP?

                  From a technological standpoint, yes, SRT is superior to the Real-Time Messaging Protocol (RTMP). A lot of it comes down to the fact that SRT is a more modern protocol. Ready for a little history lesson?

                  RTMP was created way back in the early 2000s primarily as a way to stream video, audio, and data from servers to Macromedia’s Flash player. When Adobe acquired Macromedia (and RTMP with it) in the mid-2000s, the company repurposed the protocol for broader streaming. Ultimately, RTMP served its purpose. But it’s just not cut out for many modern streaming contexts.

                  So, what sets SRT apart from RTMP? Built into SRT is a unique, bi-directional UDP stream that continuously sends and receives control data during streaming. Through this, SRT can adapt to fluctuating network conditions to minimize packet loss, jitter, and other threats to quality. This makes the protocol a viable solution for sending video over the worst networks, or unpredictable networks such as the public Internet.

                  [Diagram: How SRT works]

                  Compare this to RTMP or RTMPS, which send source data to the target streaming server or content delivery network (CDN) without regard for any data that may get lost along the way. Often the result is a suboptimal viewing experience for your audience at the other end.

                  All that said, these protocols can co-exist. For example, you could use SRT to bring remote guest feeds into your production encoder but stream out your program via RTMPS, because that’s what your preferred CDN supports. In any case, your remote guest feeds will look and sound much better with SRT doing the heavy lifting on the contribution side.

                  Is SRT secure?

                  SRT is a highly secure streaming protocol. It’s what puts the “S” in “SRT,” after all.

                  The SRT protocol offers up to 256-bit Advanced Encryption Standard (AES) encryption, safeguarding data from contribution to distribution. And the protocol plays nice with firewalls. Multiple handshaking methods and flexible network address translation (NAT) traversal mean there’s rarely a need to ask a network admin to make policy exceptions, or engage one in the first place.
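In practice, encryption is usually switched on through a few parameters in the SRT connection URL. As a hypothetical illustration, the sketch below assembles such a URL. The parameter names (`mode`, `passphrase`, `pbkeylen`, `latency`) follow the URI conventions of common SRT tooling such as srt-live-transmit; exact names and units vary by tool (ffmpeg, for instance, takes latency in microseconds), so check your encoder’s documentation.

```python
from urllib.parse import urlencode

def srt_caller_url(host: str, port: int, passphrase: str,
                   key_length: int = 32, latency_ms: int = 120) -> str:
    """Build an SRT caller URL with AES encryption enabled.

    pbkeylen selects the key length: 16 -> AES-128, 24 -> AES-192, 32 -> AES-256.
    The passphrase must be 10-79 characters long per the SRT spec.
    """
    params = urlencode({
        "mode": "caller",
        "passphrase": passphrase,
        "pbkeylen": key_length,
        "latency": latency_ms,
    })
    return f"srt://{host}:{port}?{params}"

print(srt_caller_url("encoder.example.com", 9000, "correct-horse-battery"))
# srt://encoder.example.com:9000?mode=caller&passphrase=correct-horse-battery&pbkeylen=32&latency=120
```

The receiving end must be configured with the same passphrase; mismatched keys simply produce no decodable stream rather than an insecure one.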

                  Who uses SRT protocol?

                  Since its invention by Haivision in the early 2010s, SRT has quickly become the go-to streaming protocol for remote contribution. For example, NASA uses SRT to distribute live video across control rooms for low-latency, real-time monitoring. One of the largest demonstrations of SRT’s potential was the 2020 virtual NFL draft, where producers used SRT to deliver more than 600 live feeds.

                  Beyond high-profile examples like these, there are the countless content creators that have made SRT a key part of their video production workflows. That includes Epiphan: the SRT protocol is involved in the lion’s share of live show episodes, webinars, and other live productions we do.

                  What do you need to use SRT?

                  You’ll need solutions that can encode and/or decode the SRT protocol, depending on your application. Any remote guests will need an SRT encoder on their end to send an SRT signal over the Internet to the production encoder, which can then decode the signal and work with it for production.

                  One remote guest

                  [Diagram: SRT with one remote guest]

                  SRT multiple remote guests

                  [Diagram: SRT with multiple remote guests]

                  Finding SRT-capable solutions

                  With SRT being an open-source, royalty-free technology that solves long-standing challenges, companies have been quick to adapt the protocol for use in their own products. These companies make up the SRT Alliance, whose members develop, manufacture, and operate SRT video encoders and decoders, content delivery networks, media gateways, capture cards, cloud infrastructure, and cameras.

                  This growing ecosystem is one of the biggest benefits of buying in. It means that, if you’re looking to use SRT yourself, there are plenty of options to choose from. For example, you could use dedicated hardware that supports this open-source protocol, like a camera or hardware encoder, or streaming software such as OBS Studio.

                  As well as being a frequent user of SRT, Epiphan is a proud member of the SRT Alliance. We’ve brought SRT encoding and decoding to our award-winning Pearl video production systems. That includes Pearl Nano, the best SRT encoder for remote contribution. Our cloud-based video production platform, Epiphan Unify, can work with SRT signals from anywhere.

                  Epiphan Unify: Build a better hybrid workflow

                  Streamline your remote production workflow – in the cloud

                  Remotely stream, record, switch, and mix broadcast-quality content and live events from anywhere with the cloud-powered Epiphan Unify. Compatible with any encoder or camera that supports SRT.

                  Establishing a backchannel for real-time communication

                  Producers need some way to communicate with guests and contributors in real time to ensure they’re set to go live and cue them up for their appearance. A backchannel for communication is also essential when your production involves interaction between remote participants.

                  Your backchannel for real-time interaction over SRT can be a separate SRT stream or a video conferencing platform. What option makes the most sense depends on the performance of the networks involved as well as the physical distance between sources and the destination (i.e., the round trip time). With high enough network bandwidth and low enough round trip times, it’s possible to use a parallel SRT stream as a communication backchannel. Otherwise, a video conferencing platform can do the trick.

                  [Diagram: SRT backchannel communication]

                  How is SRT latency calculated?

                  There’s a bit of a learning curve with the SRT protocol, specifically when it comes to the latency tuning mechanism. If you’re wrestling with this side of it, no worries. Here, we’ll break it down step by step.

                  SRT: Key terms and concepts
                  • Latency: The maximum of encoder and decoder configured latency. This value specifies how long the decoder will buffer received packets before decoding.
                  • Round trip time (RTT): How long it takes (in milliseconds) for a single packet to travel from a source to your destination and back. This measure is important when it comes to setting your latency.
                  • Receive rate: The data upload speed (in megabits per second) from the input’s network.
                  • Buffer: The number of SRT packets received and waiting to be decoded.
                  • Packet loss: The percentage of packets lost as reported by the SRT decoder in the last measurement interval.
                  • Re-sent packets: How many lost packets were re-sent to the destination after being flagged through SRT’s bi-directional UDP stream.
                  • Total packets received/lost: A comparison of the number of packets received versus those lost.

                  How to calculate SRT latency

                  For each contributor using the SRT protocol, follow this process:

                  1. Measure your RTT. Your SRT decoder should report this.
                  2. Measure the packet loss rate. Your SRT decoder should report this.
                  3. Measure your network bandwidth. (If you’re using a Pearl system, you can find a network test tool under Configuration > Network > Network diagnostics.)
                  4. Determine your RTT multiplier using the table below.

                     Worst-case loss rate (%)   RTT multiplier   Bandwidth overhead (%)   Minimum SRT latency (for RTT <= 20 ms)
                     <= 1                       3                33                       60
                     <= 3                       4                25                       80
                     <= 7                       5                20                       100
                     <= 10                      6                17                       120

                  5. Determine your SRT latency using the formula SRT latency = RTT * RTT multiplier.
                  6. Ensure you have enough uplink bandwidth to send the video. For example, if the available uplink bandwidth is 4 Mbps and the stream bitrate is set to 6 Mbps, there’ll be significant packet loss that SRT can’t compensate for.
                  7. Confirm that the send buffer is less than or equal to your SRT latency. When the send buffer approaches the SRT latency, packet drops will occur that SRT can’t recover from.
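The lookup and formula above can be sketched in a few lines of Python. This mirrors the RTT multiplier table exactly; loss rates above 10% fall outside the table, so the sketch raises an error there rather than guessing.

```python
# RTT multiplier table from above:
# (max worst-case loss rate %, RTT multiplier, minimum SRT latency in ms for RTT <= 20 ms)
RTT_TABLE = [(1, 3, 60), (3, 4, 80), (7, 5, 100), (10, 6, 120)]

def srt_latency_ms(rtt_ms: float, worst_loss_pct: float) -> float:
    """SRT latency = RTT * RTT multiplier, floored at the table's minimum value."""
    for max_loss, multiplier, minimum in RTT_TABLE:
        if worst_loss_pct <= max_loss:
            return max(rtt_ms * multiplier, minimum)
    raise ValueError("Loss rate above 10% is outside the table's range")

print(srt_latency_ms(rtt_ms=80, worst_loss_pct=2))  # 320 (80 ms * multiplier of 4)
print(srt_latency_ms(rtt_ms=15, worst_loss_pct=2))  # 80 (the table minimum applies)
```

Note that the `max()` call handles both table columns at once: for round trip times above 20 ms, RTT times the multiplier always exceeds the listed minimum, so the floor only kicks in on short, low-RTT links.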
                  Is low latency a must?

                  If the minimum SRT latency value you calculated isn’t essential for your application, we recommend adding extra latency to account for RTT or packet loss growth during streaming.

                  Master SRT with award-winning solutions

                  Epiphan Pearl hardware encoders fully support the SRT protocol. Pearl systems feature multiple built-in inputs for video and professional audio, simplifying setup by letting you directly connect advanced video and audio gear for the highest-quality SRT streams. Plus, end-to-end control through Epiphan Cloud makes it possible to configure and test contribution encoders located anywhere in the world. This reduces the chance of errors and simplifies production for everyone involved.

                  Learn more about SRT protocol support on Pearl systems on our website. And be sure to stay up to date on Epiphan Unify, a new cloud-powered video production platform that will work with any SRT-capable encoder or camera.

                  [Video: SRT streaming on YouTube]

                  https://www.epiphan.com/blog/srt-protocol/feed/ 0
                  How to look good on camera: 6 easy steps https://www.epiphan.com/blog/how-to-look-good-on-camera/ https://www.epiphan.com/blog/how-to-look-good-on-camera/#respond Wed, 29 Jun 2022 12:06:00 +0000 https://www.epiphan.com/?p=162132 Looking to make a great impression on video? Check out our six-step guide to looking better on camera to learn how.

                  The post How to look good on camera: 6 easy steps appeared first on Epiphan Video.


                  To make a good impression over video, looking your best is important – and not only for the reasons you might think. When you look good, you feel good, which boosts your confidence. That’ll affect your performance in a big way, and, in turn, impact your audience’s perception of you and your message.

                  Whether you’re representing yourself or your brand on video, taking the time to set up a beautiful, high-quality image is worth your while. Not sure how to make this happen? No problem. Let this blog be your instruction manual for how to look good on camera.

                  Contents

                    1. Get that background sorted out

                    Think about what your viewers will see in the shot besides you. You want your background to be simple yet aesthetically pleasing. Things like large bookshelves, plants, and brick walls always look great. Avoid busy backgrounds, and eliminate distracting elements.

                    Make sure the background is nice and tidy – especially if it’s a shared-use space or you’re working from home. The last thing you want is to discover a rogue pair of socks in the frame after you’ve finished filming. Be sure to check all surfaces visible in the shot.

                    If your background looks too bland or flat, liven it up with a few decorations. For example, you can hang something on the wall (e.g., a painting, a guitar, string lights) or arrange a few of your favorite knickknacks on a shelf. Dim accent lights like salt lamps or LEDs can add some vibrance to the environment.

                    An alternative to using a practical background is to set up a vinyl, paper, or cloth backdrop. These can be solid colors, prints (e.g., brick), or even a green screen that you can replace with any image you want.

                    Get inspiration for your background

                    Learn by example: for some visual inspiration for your at-home setup, check out Room Rater on Twitter. This sassy account rates screenshots from video conferencing apps.

                    2. Use a good camera

                    If you want to look your absolute best, a USB webcam won’t cut it. The best option is to get a DSLR or mirrorless camera with a clean HDMI out signal. Some affordable options include models like the Canon M200, Sony a6400, and Panasonic Lumix GH5.

                    For example, in the video below, Dan is using a Canon M200, and George is using a Sony a6400 digital camera:

                    [Embedded video]

                    In the market for a camera? Be sure to check out our blog all about which cameras are suitable for live streaming and video calls.

                    Note that, to bring the video signal from your camera into your computer, you’ll need an HDMI to USB adapter, also known as a capture card.

                    AV.io HD: USB capture card

                    Go beyond webcam quality

                    Epiphan AV.io capture cards make capturing video from HDMI cameras on Windows, Mac, or Linux as easy as using a plug-and-play webcam.

                    Discover AV.io capture cards

                    3. Position your camera properly

                    Position your camera at eye level or slightly above. The ideal distance between you and the camera will depend on the camera’s lens and how much space you have to work with.

                    The more distance you have, the better. By allowing enough space between the subject and the camera (at least 5 feet) and the subject and the background (another 5 feet), you’re creating depth of field and adding dimension to the shot. Plus, with the right camera lens, the background will have that neat blur effect (called bokeh) you’ve probably seen in lots of professional productions.

                    4. Light it right

                    Good lighting makes a huge difference, so investing in a few video lights is worth it. Using natural light as a source is possible, but it’s tricky since that lighting is apt to change with the weather and as the day rolls on. Artificial lights are best.

                    A set of LED lights will serve you well. These are easy to work with and won’t run hot. A tripod is the simplest solution for mounting these, though clamp mounts can also work.

                    Three-point lighting

                    The standard, most basic way of setting up lighting is known as three-point lighting. It goes like this:

                    • Place one light about four to five feet in front of you. This will be your main (or key) light.
                    • Set a second light at the same distance, about four to five feet in front of you, spaced about three feet from the first light. This second (or fill) light will counterbalance any harsh shadows on your face cast by the key light.
                    • Position a third light behind you to create a nice rim of light around your silhouette, helping separate you from the background.

                    For a more in-depth look at three-point lighting, check out our clip on that subject:

                    [Video: Three-point lighting]

                    5. Get yourself ready

                    The clothing you wear and how you do your hair and makeup are important to consider beforehand. Reason being, not everything looks great on camera. Here are a few guidelines.

                    Clothing

                    Solid shirt and dress colors work best. Avoid:

                    • Prints with overly saturated colors, small prints, or stripes – they might create a jittery effect on video
                    • Anything green if you’re planning to shoot in front of a green screen
                    • Clothing that’s pure white or black – it creates too much contrast.

                    Hair and makeup

                    • Check for flyaway hair (a bit of hairspray will help if you find any)
                    • Control perspiration. It’s only natural to sweat more when you’re before bright lights and are under pressure to perform. Use blotting paper and translucent powder to help prevent shine.
                    • Choose natural lip, blush, and eye shadow colors if you’re wearing makeup. This is known as “no makeup” makeup, which is what looks best on camera.

                    6. Action!

                    Your on-camera performance matters just as much as your appearance. If you’re avoiding eye contact with the camera while mumbling under your breath, not even the best camera in the world could save you.

                    Remember to sit up straight and elongate your neck. This will improve your posture and help avoid a double-chin situation. Don’t forget to smile!

                    Often, feeling comfortable in front of the camera makes all the difference. Take a moment to loosen up before shooting the video. It may also help to record yourself and watch it back. If you’re recording, there’s no harm in doing a few takes before finding that keeper.

                    Go above and beyond with a branded layout

                    Are you joining a virtual event, a webinar, or an interview as a remote participant? Or perhaps you’re recording an elevator pitch for potential investors? Take your video presentation to the next level by adding branding elements like titling, a logo, or brand colors. Imagine the great impression you can make on those watching your video content.

                    Epiphan Unify can help you achieve that professional look. Combine, crop, and scale video sources to create dynamic, professional-quality layouts to fit any application. With the Epiphan Connect tool, you can even bring Microsoft Teams participant feeds into your productions in Full HD and with isolated audio.

                    Epiphan Unify is the perfect cloud-powered production platform for everything from on-demand content to live webinars and hybrid events – and it’s compatible with a wide range of cameras and encoders.

                    Check out the Epiphan Unify page to learn more

                    https://www.epiphan.com/blog/how-to-look-good-on-camera/feed/ 0
Remote video production: A practical guide
https://www.epiphan.com/blog/remote-video-production/ – Thu, 23 Jun 2022

Even if you’re a seasoned video producer, producing video remotely isn’t so simple. Our practical guide covers the essentials and plots out three different remote video production workflows.


                    Remote video production is all about freedom, flexibility, and efficiency. But the way there can be confusing, even if you’re an old hand at traditional, in-person video production.

                    This blog covers all the bases and offers practical advice to help you set up and pull off successful remote live streaming and video recording sessions.


                      Epiphan Unify: Build a better hybrid workflow

                      Build a better remote production workflow – in the cloud

                      Record, switch, mix, and restream content from anywhere, alone or working virtually alongside others, with Epiphan Unify. Tap into all the cloud power you need to create high-quality video experiences that will wow viewers and maximize your impact.

                      What is remote video production?

Remote video production describes any situation where the producer isn’t in the same room, building, city, or even country as the people on camera.

                      Here’s an example to illustrate what we mean. Imagine there’s a conference underway that’s being streamed and recorded simultaneously. The panelists and participants are at the venue, on stage, doing what they do in front of the cameras. Meanwhile, the producer (or production team) could be anywhere: at the company’s head office a state away, in a local coffee shop, even their own living room. Not only that, but the producer could be monitoring a separate event happening on the other side of the country, all from the comfort and convenience of wherever they work best.

                      Maybe that all sounds like a stretch. In fact, it’s totally possible with today’s technology. Plenty of companies are producing video remotely to cut costs, drive efficiency, and unlock flexibility for their production teams. That’s especially crucial these days, when working from home is far more common.

                      A note about video quality

                      This blog speaks to remote video production in professional terms, covering contexts where professional-grade video is a must. That means going beyond webcams, USB microphones, and mobile phones or tablets.

                      If your project doesn’t demand broadcast quality – if using webcams or simply streaming out your video conferencing app’s UI won’t hurt the impact of your video – there are easier ways to go about it remotely. But if building your brand and engaging your audience are your goals, this guide will show you how to do it.

                      How remote video production works

                      Video production is quite a different beast when you’re doing things remotely: a different production process, different workflows, and different tools if your existing gear lacks remote capabilities.

                      What goes into your remote production kit will depend on the specifics of your project. For example, your setup might include:

                      • Pro-grade cameras (e.g., mirrorless, PTZ cameras)
                      • A video switcher for multiple inputs/layouts
• A remote-controllable audio mixer (either natively network-accessible or with local controls extended through a VPN or remote desktop software)
                      • Lighting with Digital Multiplex (DMX) protocol support

                      Why the cloud is essential for remote production

                      It’ll come as no surprise that having a robust network is everything here, given that the producer is controlling all this gear remotely. In fact, the network layer is probably the biggest challenge of remote video production. It’s an especially persistent anxiety for live video producers, who don’t have the luxury of resetting and rolling again after a network hiccup.

                      Happily, you can mitigate this risk by favoring reliable solutions and using a cloud video service like Epiphan Edge, Epiphan Unify, or Epiphan Connect. Cloud-based services like these have built-in redundancy. And because they run on distributed server farms, they’ll keep your live stream online should your producer’s Internet take a dip. Even better if you have a backup producer on standby who can log in and take over while any issues are smoothed out.


                      Cloud-based video production also makes it easier to collaborate with others. For example, you could have a producer taking care of the mixing and switching and a supervisor overseeing – both miles away from the venue and each other. It’s also handy for post-production because you can more easily transfer files to a video editor, who can then get started on stitching everything together.

                      SRT: The key to flawless remote contributions

                      Network issues don’t just plague producers. If any or all your contributors are remote, they too can battle fluctuating bandwidth, inconsistent quality, and other network maladies. These sorts of issues aren’t just frustrating to deal with; they can also really hurt the quality of your production.

                      It’s why the Secure Reliable Transport (SRT) protocol has been such a breakthrough for remote production. It’s designed for high-quality, low-latency streaming over virtually any network. This makes it ideal for remote contribution, since network conditions can vary so widely from location to location.

The best thing about SRT? It’s open source and widely adopted, making it easy to find SRT-capable gear (including Epiphan solutions). Case in point, more than 500 companies have signed on to the SRT Alliance. For these reasons and more, SRT should be a key component of your remote production setup.
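In practice, an SRT connection’s behavior – whether a device connects in caller or listener mode, how large a latency buffer it keeps to absorb network jitter, and whether the stream is encrypted with a passphrase – is typically configured through query parameters in an SRT URL. As a rough illustration only (the host, port, helper name, and defaults below are placeholders, and note that the latency unit varies between tools, e.g. milliseconds in some, microseconds in others), a small helper could assemble such a URL:

```python
from urllib.parse import urlencode

def srt_url(host, port, mode="caller", latency=120, passphrase=None, stream_id=None):
    """Compose an SRT connection URL with common query options.

    Caution: the unit of the `latency` option differs between tools
    (e.g., milliseconds vs. microseconds) -- check your encoder's or
    player's documentation before relying on a specific value.
    """
    params = {"mode": mode, "latency": latency}
    if passphrase:
        params["passphrase"] = passphrase  # enables AES encryption on the link
    if stream_id:
        params["streamid"] = stream_id  # lets a server route or identify the stream
    return f"srt://{host}:{port}?{urlencode(params)}"

# Example: a remote contributor calling in to a production hub
url = srt_url("203.0.113.10", 9000, latency=250, passphrase="s3cret")
print(url)
```

A larger latency buffer gives SRT more room to retransmit lost packets on an unreliable connection, at the cost of added end-to-end delay – a trade-off each remote contributor can tune to their local network conditions.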

                      Three setups for remote live streaming and recording

                      There are numerous ways to set up for remote production. Here we share three different workflows built on our own remote-capable hardware and cloud video services.

                      Setup 1: Remote-capable hardware encoders

                      Epiphan Pearl video production systems feature full remote access and control capabilities. All you need is an Internet connection and Epiphan Edge – no VPNs or network tunnels required.

                      The Pearl product line comprises three systems: Pearl Nano, Pearl Mini, and Pearl-2. They differ in features, form factor, and the applications they’re best suited for. For example, Pearl-2 makes a mean production encoder since it offers the most processing power to support advanced features like chroma keying. Plus, Pearl-2 can accept up to six SRT inputs. Then there’s Pearl Nano, the best SRT encoder for remote contribution and guests. It’s an ultracompact and refined device that’s perfect for shipping out to remote participants to elevate the quality of their feeds.


                      Another great advantage of Pearl-powered remote production: end-to-end configuration and control. From Epiphan Edge’s centralized dashboard, you can access and tweak any paired Pearl system. This way, your remote contributors don’t have to be video production pros to output SRT. The producer can take care of all the pre-show configuration and testing, minimizing the likelihood of errors.

                      Create broadcast-quality video anytime, anywhere

                      Pearl video production systems are reliable, intuitive – and fully remote controllable. Just log in to Epiphan Edge’s centralized dashboard to access these powerful edge devices from any location, enabling you to produce exceptional branded video from a distance.

                      Setup 2: Cloud-powered video production


                      The cloud is a flexible, convenient, and ever-scalable resource that’s the basis for so much of the technology we use and rely on every day. And with Epiphan Unify, you can tap into this unlimited store of power to streamline your remote production even further.

                      Epiphan Unify empowers you to mix, switch, record, and restream video from anywhere, nixing any worries about local processing limitations. Whatever compute power you need, Epiphan Unify delivers, whether it’s for 4K restreaming or a multisite hybrid production with chroma keying.

                      And Unify can accept SRT signals from anywhere, whether it’s an Epiphan Pearl system or another SRT-compatible encoder or camera.

                      Setup 3: Microsoft Teams signal extraction


                      Millions of people use Microsoft Teams every day. And not just for video communications: a growing number are looking to produce live events with Microsoft Teams.

                      Are you one of them? Microsoft Teams is convenient, familiar, and makes it easy to bring people together to collaborate in real time. It’s only natural to look at it as a possible solution for remote video creation. If it’s a business video you’re making, though, simply streaming out the Microsoft Teams UI won’t net the results you’re after – that is, to elevate your brand in the eyes of your target customers.

                      Epiphan Connect makes it easy to extract Microsoft Teams participant feeds (and screen-share content) and add them to branded layouts using your preferred production tools, whether that’s hardware or software. You can display those extracted feeds in Full HD and with isolated audio, making it feasible to use them for professional-quality live streams and recorded content.

                      Epiphan Connect: Produce broadcast-quality content with Microsoft Teams

                      Elevate your video content without sacrificing convenience

                      Combine the convenience of video conferencing and the quality of professional broadcast with Epiphan Connect. Compatible with a range of video production solutions. Coming late summer 2022.

                      Make remote video production look easy

                      Thriving in today’s hybrid world demands hybrid production solutions like Epiphan Pearl, Epiphan Unify, and Epiphan Connect. Learn more about them at epiphan.com/products.
