Yealink VC Series: Prioritization and Firewalls

Today, we’re going to revisit the Yealink VC Series Video Conferencing System. We wanted to give you more information on the system, letting you know how to set it up in more depth so you’re getting the most out of it.

This post includes an introduction to video conferencing concerns, and suggestions about the types of things you can do to address these concerns.

The biggest concerns with video conferencing are bandwidth, latency, jitter, and packet loss. We’ll go through these concerns, and let you know what you can do to minimize your worries. Security is also a major concern. We’ll show you how to configure your Yealink video conferencing devices with your firewall.

Bandwidth

Video conferencing is a resource intensive, real-time application.

It’s no secret that digital video is a bandwidth hog. The amount of data involved in video is vastly greater than audio or text, so it eats up much more space. This is what we mean when we say video conferencing is resource intensive. It needs a lot of bandwidth to work smoothly.

It’s also a real-time application.

Imagine if you tried to hold a video conversation like you do an email chain. One person would say something and send the clip. Two people would reply. There’d be time delays, there’d be overlapping conversations, it’d take forever and be confusing. In other words, it wouldn’t work. Email is not a real-time application.

For video conferencing to be worthwhile, you need to be able to see and talk simultaneously—in real time—with the people you’re talking to.

With its VC Series Video Conferencing System, Yealink has provided you with an affordable way to get HD video conferencing—if you have the bandwidth.

To get their crisp and beautiful 1080P HD video, they recommend a 1 Mbps connection. To get their very good 720P video, they recommend a 512 Kbps connection. These bandwidth requirements increase, of course, if you incorporate 1080P content sharing with your video, which is an amazing feature that comes with Yealink’s VC Series. And the requirements also increase as you add connections to your conference.
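As a rough illustration of how those requirements add up, here’s a back-of-envelope sketch in Python. The per-call figure is the 1080P number quoted above; the content-sharing figure is an assumption for illustration, not a Yealink spec:

```python
# Back-of-envelope bandwidth estimate. Only the ~1 Mbps per 1080P call
# figure comes from the post; CONTENT_KBPS is an assumed placeholder.
CALL_1080P_KBPS = 1024   # ~1 Mbps per 1080P call leg
CONTENT_KBPS = 1024      # assumed extra budget for shared 1080P content

def required_kbps(legs, sharing_content=False):
    """Estimate bandwidth needed for `legs` call connections."""
    total = legs * CALL_1080P_KBPS
    if sharing_content:
        total += CONTENT_KBPS
    return total

# A three-way 1080P conference with content sharing:
print(required_kbps(3, sharing_content=True), "Kbps")  # 4096 Kbps, ~4 Mbps
```

The point isn’t the exact numbers, it’s that each added connection (and content sharing) multiplies the bandwidth you need to budget for.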

In other words, you’ll need to make sure you have the bandwidth available to take full advantage of any video conferencing system, let alone such a feature-rich system as Yealink’s. Check with your Internet service provider (ISP) to make sure you have enough bandwidth. If you’re just setting up your office or thinking about renovating, you should probably look into structured cabling solutions that will give you the best cabling system you can get. Bandwidth, after all, mostly depends on infrastructure.

Even if you have the bandwidth, though, you could face problems.

Latency, Jitter and Packet Loss

The three main problems you might face with any video conferencing system are latency, jitter and packet loss. None of these is catastrophic in itself, but they can make life really annoying. Thankfully, Yealink has made it easy for you to limit the potential for latency, jitter and packet loss with their VC Series.

What do these words mean?


  • Latency. Latency is a fancy word for delay. You can see why delay would get really annoying with video conferencing.
  • Jitter. Jitter is simply the fluctuation in latency. Think about it as a herky-jerky, stop-start flow of data, which makes for useless conversations.
  • Packet loss. Data is transferred over the Internet in packets to increase efficiency. If packets get backed up, the system is designed to drop them: packet loss. Otherwise, it’d turn into one huge digital traffic jam.



Recall that video conferencing is a real-time application. If there’s too much delay as people speak, too much latency, it’d be so annoying that it wouldn’t be worth using. Ditto for jitter. If there’s too much jitter, the conversation would become choppy and ridiculous. And if you suffer from too much packet loss, you’d simply be missing whole chunks of the conversation. You want to avoid congestion, you want a smooth connection.
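To make these three terms concrete, here’s a minimal sketch (with made-up sample data) that computes average latency, jitter, and loss rate from a list of packet delay measurements:

```python
# Illustrative only: quantifying latency, jitter, and packet loss from
# (sequence_number, one_way_delay_ms) samples. The data below is invented.

def link_stats(samples, expected_count):
    """samples: list of (seq, delay_ms) for packets that actually arrived."""
    delays = [d for _, d in samples]
    latency = sum(delays) / len(delays)          # average delay
    # Jitter: average fluctuation between consecutive packet delays.
    jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
    loss = 1 - len(samples) / expected_count     # fraction of packets dropped
    return latency, jitter, loss

samples = [(1, 40), (2, 42), (3, 55), (5, 41), (6, 60)]  # seq 4 never arrived
latency, jitter, loss = link_stats(samples, expected_count=6)
print(f"latency {latency:.1f} ms, jitter {jitter:.1f} ms, loss {loss:.0%}")
```

Low averages aren’t enough on their own: the wild swings (42 ms to 55 ms and back) are the jitter that makes a call feel choppy even when average latency looks fine.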

Yealink has incorporated Forward Error Correction (FEC) into the system, which can compensate for up to 8% video and 20% audio packet loss. Essentially, FEC sends more data than is strictly needed, so that if some packets are corrupted or dropped along the way, the receiver has redundant information to rebuild them. It helps, for sure.
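To see the redundancy idea in miniature, here’s a toy sketch of one classic FEC scheme: a single XOR parity packet protecting a group of packets. This illustrates the principle only; it is not Yealink’s actual FEC algorithm.

```python
# Toy FEC illustration (not Yealink's scheme): one XOR parity packet
# lets the receiver rebuild any single lost packet in the group.
from functools import reduce

def xor_parity(packets):
    """XOR packets together byte-by-byte (packets must be equal length)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

packets = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(packets)          # sent alongside the real packets

# Suppose packet 1 is lost in transit: rebuild it from the survivors + parity.
recovered = xor_parity([packets[0], packets[2], parity])
assert recovered == packets[1]
```

That one extra packet is the "buffer of redundant information": the group costs a third more bandwidth, but any single loss becomes invisible to the conversation.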

They’ve also incorporated some excellent options for you to prioritize your video conferencing data, which is one of the best things you can do to prevent these problems.

Quality of Service Prioritization

Giving priority to your video conferencing data helps prevent latency, jitter and packet loss. To increase the quality of your video conferencing service, you can change your Quality of Service (QoS) prioritization.

Imagine your data as a highway. Let’s say that public buses have to drive on this highway. You want to make sure that the vehicles with the most people in them, the buses, always are moving smoothly. A simple answer to this problem is to dedicate a lane just for bus traffic.

It’s the same idea with QoS prioritization. Video conferencing is resource intensive—it takes a lot of data—like the buses carry a lot of people. So you can dedicate lanes of your bandwidth to video conferencing, which helps you avoid congestion. You can prioritize the data.

And Yealink makes it very simple to do this by implementing the proven Differentiated Services (DiffServ) QoS model. DiffServ provides a simple way to mark the importance of your data, so it will be prioritized. You do this by assigning a Differentiated Services Code Point (DSCP) to your data.

Yealink has already set your DSCPs to sensible defaults, but your IT administrator might be able to recommend even better setups. Don’t change these numbers without double-checking what they mean. They can be found towards the bottom of the page under Network > Advanced.
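If you’re curious what a DSCP marking actually is under the hood, here’s a hedged sketch of how any application can tag its own UDP packets so DiffServ-aware routers prioritize them. Yealink’s devices handle this marking for you; the code point below (AF41, commonly used for interactive video) is illustrative, not necessarily Yealink’s default.

```python
# Sketch: marking a socket's outgoing packets with a DSCP value.
# AF41 is a standard code point for multimedia conferencing traffic;
# it is used here for illustration, not as Yealink's default setting.
import socket

AF41 = 34  # 6-bit DSCP code point
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The IP TOS byte carries the DSCP in its upper six bits, hence the shift.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, AF41 << 2)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # typically 136 on Linux
sock.close()
```

Routers along the path read that field and put the marked packets in their priority "bus lane," which is exactly what the Yealink settings page is configuring for you.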

You might also consider asking your IT administrator about other ways that you can optimize your video conferencing system, because there are definitely more things you can do. Yealink has included a lot in this system.

Firewall

Yealink gives you some options when it comes to configuring the video conferencing system with your firewall. If you don’t set it up right and you place a call through the firewall to another system, you might find yourself having a one-way conversation.

Luckily, Yealink has made it simple for you to set up your video conferencing system with your firewall. Yealink likes to give you choice and flexibility without making you pay extra. For this task, you can use either the H.323 or the SIP communications protocol.

If you use the H.323 set-up, Yealink has included the H.460 firewall traversal extensions to the H.323 protocol. H.460 adds security policies and allows you to easily connect while passing through a firewall. The important thing to remember is that you need to enable it on all your Yealink VC Series devices:


  1. Go to Account > H323
  2. Towards the bottom of the page, there’s an option labelled H.460 Active
  3. By default, it’s Disabled
  4. Select Enabled from the drop-down menu
  5. Click Confirm



If you use the H.323 set-up, you might also think about enabling the H.235 security standard, which provides authentication and integrity protection. Like the H.460 firewall traversal, you need to activate the H.235 security standard on all the devices you’re incorporating in the conference. The option is just as easy to enable, being located two boxes below the H.460 Active box.

There is more that you can do, too. Unfortunately, a lot of what you can do depends on the specifics of your system, so we can’t go over every example.

And More!

There’s so much more packed into the Yealink VC Series Video Conferencing system. As we emphasized in our review, it’s a simple-to-use system. What we didn’t emphasize enough is how feature-rich it is. This post just scratches the surface of the deep features that you get with this economical video conferencing system.

We hope this article has given you some useful information that you can integrate when setting up your Yealink video conferencing system!

Interesting Times in the World of Open Source Hardware

We live in interesting times.

You know that “ancient Chinese curse,” right: “May you live in interesting times.” The idea is that the times of struggle and change that historians like to write about—the interesting times—tend to be miserable for the common people who have to live through them.

There’s no evidence that the phrase is actually ancient or Chinese, by the way. Some people would dispute the curse part, too.

We have no idea what historians of technology will write about our period. There’s too much change, too quick. Any historian trying to track technological change in the early 21st century has their work cut out for them!

But we think we know one movement that this historian would talk about: open source hardware.

Open Source Hardware

Now, that might not seem like the most interesting of topics, but isn’t it notable that the largest tech companies, as they’ve expanded, have started working towards open standards for their data centers and servers—the physical gear that undergirds their whole businesses?

As Quentin Hardy put it in the title of his recent New York Times blog post: “For hardware makers, sharing their secrets is now part of the business plan.”

Isn’t that interesting?


What Does “Open Source Hardware” Even Mean?

Let’s start with our definitions.

The Open Source Hardware Association (OSHWA) provides a clear definition of open source hardware: “Open source hardware is hardware whose design is made publicly available so that anyone can study, modify, distribute, make, and sell the design or hardware based on that design.”

It’s the opposite of proprietary hardware. The impulse behind open source hardware is sharing ideas. Many minds are better than one, proponents claim. Because of that philosophy, open source hardware is also intended to be easy for other people or companies to pick up.

The OSHWA continues: “Ideally, open source hardware uses readily-available components and materials, standard processes, open infrastructure, unrestricted content, and open-source design tools to maximize the ability of individuals to make and use hardware.”

This is needed for accessibility. Hardware has a barrier to entry that software doesn’t, namely, the fact that it is physical. The materials and mechanisms that make hardware can’t be virtually copied like code. Innovations like 3D printers might reduce that barrier, but boxes will always take up more resources than bits.

Finally, Opensource.com clarifies the definition of open source hardware: “‘Open hardware’ is a set of design principles and legal practices, not a specific type of object.”

So how does this look in practice?

Facebook and Open Source Hardware

You might have heard that Facebook “open sourced” its designs for its data center servers (among other things). This news broke in 2011, when they announced the Open Compute Project. This project encourages members to download, modify and improve their hardware designs.

The Open Compute Project’s mission statement is clear about its objectives. They call themselves “a rapidly growing community of engineers around the world whose mission is to design and enable the delivery of the most efficient server, storage and data center hardware designs for scalable computing.” They say that sharing is “key to maximizing innovation and reducing operational complexity.”

If many people buy into the project, then many minds will work to solve common problems. What might not be clear at first is how this will reduce complexity. Won’t having many minds be more complex?

There are actually two parts to the answer to that question: people and standards.

Coordinating that many voices will be complex. Very complex. And that will lead to disagreements. Open source hardware (and software) lets dissenting voices go their own ways, as in the case of Arduino chronicled in ReadWrite.

However, in the end, having so many people agree upon standards will reduce the complexity of the product. It’s almost evolutionary in principle: many minds producing many ideas, and in the end the fittest idea survives. Then that standard can be adopted broadly, and the production of the product can be streamlined.

At least, that’s the hope, as Stephen Lawson writes in Computer World: “The idea is that if a lot of vendors build hardware to OCP specifications, IT departments will have more suppliers to choose from for gear they can easily bring into their data centers.”

So much talk is out there right now about the cloud and the Internet of Things. Without some form of standardization, these ideas will remain locked in their own spheres. Open source hardware has the potential to open them up.

Cade Metz recently reported in Wired about the broadening support for the Open Compute Project: “Apple has joined the effort, following in the footsteps of Microsoft, cloud computing giant Rackspace, and several of the country’s biggest financial companies, including Goldman Sachs, Fidelity, and Bank of America.” Many other companies are involved, too.

Facebook’s project is gaining traction.


Trickle-down Effects of Open Source Hardware?

The big question we’re wondering about is: What will the trickle-down effects of the open source hardware movement among major corporations be?

Facebook was criticized for their decision to go open source because people felt that their open source products (server and data center designs, etc.) would have a very limited audience. How many companies need to store and process data on millions and millions of people?

We’ve seen time and again, though, that innovation for specific purposes can lead to innovation for purposes the original makers weren’t even thinking about.

Another side of open source hardware that we didn’t have time to talk about is the maker/DIY style open products that everyday people make—and sell on places like Tindie. These products are wonderful and creative, but for the most part they’re basically curiosities for tech hobbyists.

Somewhere in the middle lies the world of open source hardware that will improve life for small and midsize businesses. HP’s new server line, Cloudline, based on Open Compute standards, could be a glimmer of what’s in store.

What’s for sure is that having quality technicians and engineers who will know the standards and be able to implement them well will become ever more important. It’s great to have the lofty Cloud as your goal, but all this hardware needs to be deployed in the real world first.

Interesting times, no?