Net Neutrality and Business Videoconferencing

Something that’s bothered me for a while now, but an issue I’d put to one side (until yesterday), is the question of what happens if the backbone ISPs reach the point where they refuse to continue investing in bandwidth across peering points.

Many in the VC industry, including myself, have relied on bandwidth across the internet expanding ad infinitum, keeping step with consumer demand.  This has meant, in the main, that VC call quality over the internet has been perfectly adequate for most meetings, and an enabler of new working practices (home and remote working, for example).

Whilst I don’t profess to be an expert in ISP peering economics, I did wonder if there would ever be a situation where diminishing returns (or increased power and control) for these ISPs would result in them holding the rest of the internet to ransom, to renegotiate the terms of how they generate money for their bandwidth.

And so now we see the first instance of this behaviour, with Comcast brokering a huge deal with Netflix, a large contributor to internet bandwidth growth over the past five years and accounting for around a third of US internet bandwidth consumption.


We saw something similar unfold in the UK, with ISPs twice demanding more revenue from the BBC: first at the launch of its iPlayer service, and again when iPlayer offered HD content.  It took UK government intervention to prevent the ISPs from changing the model.

So what does this mean for business video communications?

The precedent set by Comcast/Netflix is concerning.  Businesses are already making increasing use of the internet to conduct their video meetings, whether through WebEx, BlueJeans or other internet-based service providers.

Video service users within organisations are typically unaware that internet-based services of this kind are provided without any kind of SLA.  But until now that’s been OK: the internet has coped fine with the growth in demand for bandwidth.

This might be about to change.  Comcast/Netflix sets a worrying precedent, where deliberate inaction by ISPs in investing in their core and peering bandwidth becomes a legitimate practice to secure greater revenues from organisations that generate high-bandwidth internet traffic.

We already see internet providers today geared specifically to providing ‘quality assured’ internet transit for video communications.  Could this become the norm for business video communications?

Whatever we have in store, it’s only likely to drive up costs.

One mitigation strategy could be for customers to use fixed connectivity into videoconferencing clearing houses / service providers.  This approach provides cost certainty and, importantly, an SLA.


Good Enough?

The recent announcement of the Logitech CC3300e conference camera has caused me to reflect on what value video communications represents to organisations.

Having recently participated in a number of discussions about the future of video communications, I can’t help but be reminded of customers’ failed attempts to use low-cost peripherals to drive video meetings in meeting rooms.  There’s a swathe of reasons why these projects were doomed to fail, the most common being:

  • Cameras with a field of view set for desktop use, not for meeting rooms.
  • Cameras with low-quality sensors providing low resolutions or just bad video.
  • Cameras with fixed optics that can’t deal with people sitting further away.
  • Microphones tethered by USB, limiting optimal placement.
  • Sub-standard echo cancellation.

With the release of the CC3300, I think Logitech has a bit of a game changer.  The CC3300 has a ‘good enough’ answer to every objection listed above.  What’s more, it’s cheap (approximately £800).


If video is truly going to become pervasive, as the industry has promised for so many years, it’s going to be off the back of devices such as these.  Metcalfe’s law was never so applicable to any technology as it is to video communications.  Where organisations were being asked to spend around £10,000 minimum on meeting room solutions built around purpose-built appliances, they can now enable meeting rooms with video for less than a tenth of the price.

Put another way, organisations can now enable more than ten times the number of meeting rooms for video collaboration for the same price.
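The Metcalfe’s-law arithmetic above can be sketched quickly.  The figures below are illustrative, taken from the price points mentioned in this post (a £100,000 budget is a hypothetical round number):

```python
# Sketch of the Metcalfe's-law argument: for the same budget, a tenfold
# increase in video-enabled rooms yields roughly a hundredfold increase in
# possible room-to-room connections. All figures are illustrative.

def possible_connections(endpoints):
    """Number of distinct point-to-point calls among `endpoints` rooms."""
    return endpoints * (endpoints - 1) // 2

budget = 100_000                              # hypothetical budget (GBP)
appliance_cost, webcam_cost = 10_000, 800     # per-room costs from the post

appliance_rooms = budget // appliance_cost    # 10 rooms
webcam_rooms = budget // webcam_cost          # 125 rooms

print(appliance_rooms, possible_connections(appliance_rooms))  # prints: 10 45
print(webcam_rooms, possible_connections(webcam_rooms))        # prints: 125 7750
```

The value of the network grows far faster than the room count, which is the crux of the pervasiveness argument.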

The value that organisations will yield from this scale of deployment far outweighs the diminished user experience.

I see many threads on discussion communities asking ‘Is hardware dead?’.  At this new price point, it’s difficult to see how the current crop of dedicated hardware appliances has a place in this new pervasive world.  I still think they do, but the market for these types of system is shrinking.

Here’s the thing.  Logitech have made good-quality PC peripherals for as long as I can remember.  The CC3300 is no exception, and represents a big step up in functionality for Logitech.  Yet it’s still only ‘good enough’.

Whilst this device addresses many of the requirements of video meetings held in rooms, it’s really only suitable for rooms of up to around four participants.  Beyond that, the experience begins to degrade, starting with the microphone becoming less effective.  Bad audio remains the number one issue degrading video meetings today.

The camera itself still requires a PC to drive the experience.  This could either be a dedicated PC in the room, or a device that a participant brings along.  The most economical (and reliable) method is the ‘bring your own’ model: the host simply plugs the camera into their own device, which they know to be in good working order before the meeting starts.  Simple.  Fixed meeting-room PCs are notoriously unreliable.

Then we look at the activities that happen in room-based video meetings.  Content gets created and shared.  Whilst this Logitech solution allows content sharing from its host PC or laptop, it does so to the exclusion of all other participants in the room.  But hey, this is ‘good enough’.  These issues are nothing compared to the fact that we’ve just increased our footprint tenfold.

I actually agree with this argument.  The price of ubiquity here is providing the users with a good enough experience in the meeting room.

I still believe, however, that we can do better.  Good enough will only be good enough for so long; users will expect more.  We’ve seen this before, only this time the number of users has increased tenfold or more.  Videoconferencing room systems have evolved the way they have because users demanded a better experience: better data sharing and collaboration capabilities, better audio, clearer pictures for a more natural experience, and simple, intuitive user interfaces and scheduling.

I watch with interest to see how Cisco, Polycom et al. react to this.  In my ideal world we will see something that provides a user experience consistent with the systems we’re used to seeing today, but at a price point that allows for truly pervasive video in the meeting room.

Will these vendors rise to the challenge and, intriguingly, will it be with dedicated appliances?  I still think the key to providing a remarkable user experience lies as much in the hardware design as it does in the user interface and peripherals.  Apple has shown it is possible to build your own specific hardware at a price point acceptable to the market, so it’s not beyond the realms of possibility.

My parting shot is a quote from Rowan Trollope, in his analyst keynote at the 2013 Collaboration Summit.  “Cisco is at war with good enough”.  Let’s see.

Cisco and H.265 demonstrated at Collaboration Partner Summit



I felt that whilst the demo was good, in that it shows Cisco leading in this space, it rather missed the point of H.265, at least for the first couple of years of the standard’s life.  Whilst H.265 will give you the same quality as H.264 at half the bandwidth, that will only really become relevant as general purpose computing hardware becomes capable of processing it.

It took around four years after H.264 was ratified for general purpose hardware to start processing H.264 HD video.  Even today, most vendor implementations of H.264 require a quad-core processor, with Cisco using dual-core.  So once we finally see the new standard, I don’t think it’s reasonable to expect general purpose hardware to be touching H.265 for at least a couple of years.  The bandwidth saving is only really relevant to mass deployment of video, although it could be argued that it also reduces the risk of congestion when making calls via the internet.

Where H.265 will make an impact is in the new applications it will enable in the meeting room.  4K and 8K video will provide new experiences and new capabilities, and it is here that I expect Cisco to market this new capability when we finally see something from them.

Cisco wouldn’t be drawn on the question of what hardware the demo was running on, but I really wouldn’t expect it to be on the C-series.

Cisco UC 9.0

Yesterday, Cisco announced the latest version of its popular Unified Communications platform, based on Cisco Unified Communications Manager version 9.0.

Amongst the announcements of major new features was something Cisco are referring to as ‘Email Dialling’, which anyone who follows the VC/TP industry will recognise as URI dialling, a capability that has been around for some time now.

About time Cisco!!

This particular announcement should have a seismic impact on the UC industry, but strangely, Cisco seem to be underplaying it in the messaging I’ve seen so far.

In the context of enterprise telephony, email dialling is revolutionary.  For the very first time, one organisation is able to dial another via the internet, using firewall traversal technology and leveraging the internet DNS system to resolve calls to the remote destination, rather than configuring a mesh of SIP trunks as was previously the case.
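As a rough sketch of what leveraging the internet DNS system means in practice: under RFC 3263, a calling system derives DNS SRV record names from the domain part of the URI, and queries those to locate the far end’s call servers.  The helper below only builds the record names (the actual DNS query and SIP signalling are omitted), and the ordering shown is illustrative:

```python
# Sketch of URI ("email-style") dialling resolution, in the manner of
# RFC 3263: the domain part of the URI determines which DNS SRV records
# a calling system looks up to find the remote organisation's SIP servers.

def srv_names_for(uri):
    """Return the DNS SRV names used to locate SIP services for a URI."""
    domain = uri.split("@", 1)[1]
    return [
        f"_sips._tcp.{domain}",   # SIP over TLS
        f"_sip._tcp.{domain}",    # SIP over TCP
        f"_sip._udp.{domain}",    # SIP over UDP
    ]

print(srv_names_for("alice@example.com"))
```

The point is that no per-destination configuration is needed: any organisation that publishes these records is reachable, just as email is.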

Why is that a big deal?

We’re starting to see the emergence of a new paradigm for media-rich calling.  With the ability to make a high definition video call, or just an audio call, over the internet to any location, without IT intervention every time you want to connect with an organisation, the user is finally in control of who they talk to, and by which means.

Why aren’t other organisations using URI?

In truth, some are.  Microsoft, for example, uses a URI structure within Microsoft Lync, although Lync’s underlying signalling protocols still mean it’s a closed system unless a gateway is employed (such as, ironically, Cisco VCS).  The problem is that Microsoft aren’t really pushing this approach; they would much rather perpetuate the meshed approach by federating on an individual business-to-business basis.

Likewise Polycom, who are actually able to support URI dialling natively, choose not to push this capability in favour of a different, IP-address-based method, which simply isn’t user friendly.

So now that Cisco have this capability, why are they underplaying it?

This capability truly sets Cisco apart as a leader in driving standards in the Unified Comms and Collaboration space, so is it just a case of not realising what they have on their hands here?

I don’t think it is.  What’s more likely, I believe, is that the UC vendors are still wrestling with the notion of ubiquitous connectivity via the internet, because some of their biggest customers, the carriers, are the ones most threatened by this new paradigm.

I personally believe that the internet will eventually become the network of choice for carriage of even the most bandwidth-intensive applications, including Immersive TelePresence as it exists today, but also the next generation of visual communications, such as 4K and 8K video.

Whilst carrier revenue remains entrenched in the provision of private WANs, though, and in the absence of any means to monetise internet transit, we will unfortunately continue to see the carriers dragging their feet on this.

If anything useful is going to come out of the OVCC, then, it needs to be this: an agreed charging framework between carriers, just as the mobile networks have today, for the transit and inter-carrier handoff of real-time video and audio traffic across quality-assured internet links.

Cisco Launches the TX9000

So what’s the big deal?

With recent stats showing poor Q1 performance at the top end of Cisco’s TelePresence offering, could the launch of this new product signal that Cisco is out of step with market demands?  Or something else?

Elliot Gold’s Telespan recently reported a sharp decline in Cisco’s Immersive Telepresence numbers, both quarter on quarter and year on year.  Now, I’m pretty sure the appetite for these systems is limited; it’s a niche market, and sales of these units are almost exclusively led by the manufacturer.  But from my time observing this market, one thing I know for sure is that customers making big investments in technology such as Immersive Telepresence are very aware of product roadmaps.  I might be wrong, but I would not be surprised to see a resurgence in Cisco’s numbers post-TX9000 launch, simply because in Q1 customers were holding off orders while they waited for the new product.

But in real terms, these Immersive TP systems actually contribute very little to the overall market, certainly in terms of numbers shipped, but also in revenue (despite the very high ticket price!).

What’s most exciting about the TX9000, then, is the research and development implications.  Just as technology first seen in Formula 1 later appears in everyday motor vehicles, the innovation appearing in the TX9000 will start to appear in multipurpose and personal systems as new products are released.

We saw this with the Tandberg C90 on its release, and I fully expect to see the same with the TX series as it develops and matures.  I also expect to see H.265 support on this hardware platform.

So my belief is that rather than Cisco investing in an imploding niche market, what we’re seeing is a continuation of the same trend that made Tandberg so successful in the first place: innovation of new features and functionality at the very top end (in the knowledge that the market for these technologies may be limited), followed by the exploitation of those technologies across the product range.

The ‘Death’ of TelePresence

It’s interesting times in the videoconferencing / TelePresence industry.  With market data from Polycom and Cisco showing downturns in recent times, and with some notable sources seeming eager to put the boot in by publishing controversial articles, the industry is rife with speculation about the ‘death’ of TelePresence.  Much of the speculation points to low-cost start-up businesses such as Vidyo, who themselves reported some stunning growth numbers last quarter.

So what does this all mean?

Well, first, let’s put something into context.  Vidyo’s growth numbers may be impressive at 82%, but let’s not forget that it’s far easier for a small business to achieve such levels of growth, simply because doubling a small revenue number is far easier than doubling a large one.  As Vidyo does not disclose its numbers, a simple percentage growth figure is largely meaningless.  We certainly cannot assume that Vidyo’s growth is responsible for Cisco’s and Polycom’s decline.

This brings me to my second point: start-up video calling businesses that do not provide proper interoperability are doomed from the start, and it’s just a matter of time.

Already we are seeing free-to-market, standards-based alternatives with HD video that are interoperable with the huge installed base of Polycom, Cisco and others.  These services are easy to use and, whilst feature-limited, offer video calling instantly with the assurance of being able to connect to any other standards-based system.

There’s one very notable exception to this, of course: Skype.  Skype is different simply because it defined the market for domestic video calling, and owns that market.  It may be a walled garden, but with such critical mass they are able to call the shots.  Whenever I’m asked if our standards-based service is able to call someone at home, the question is invariably whether we can connect with Skype.  It’s incredible to think that the EU would miss this when it approved the Microsoft acquisition, given its previous form with Cisco and the Tandberg acquisition, especially when ownership of Skype gives Microsoft far more power over a market than the Tandberg acquisition gave Cisco.  I watch with interest on that front, because should Skype remain a walled garden, I predict that the business community will ultimately reject it as a solution because of its closed nature.

Customers are becoming more educated as to how they want to connect, and they want to be able to connect to anyone, anywhere without fuss or conversations about why their investment doesn’t allow them to talk to X, Y or Z systems.

Which brings me back to Polycom and Cisco.  OK, these companies have duked it out previously, on a number of occasions, over supporting standards, but their track record on interoperability is actually pretty good, trying to ensure that customers are able to communicate with each other, even if certain ‘premium features’ are sometimes unavailable on a multi-vendor call.  I strongly believe it’s the interoperability approach that will win out in the long term, versus the alternative, where a dominant business such as Microsoft attempts to leverage market forces to drive its agenda.

Another aspect being discussed, as it has been for many years, is that of software vs. hardware.  Many now believe that software-only solutions are primed to take over the market, delivering the same quality and experience as the hardware solutions which drive both Cisco’s and Polycom’s revenues in the video space.  Whilst it’s true that general purpose computing hardware is now more than capable of processing H.264 in high definition, that’s only part of the picture.  Firstly, hardware systems provide an appropriate, resilient form factor for use in rooms where multiple input devices are required, with purpose-built GUIs designed to optimise the experience for the user.  That’s one part.

The other, more telling part, and one that industry analysts seem to have overlooked (despite having seen two cycles of what we are about to go through), is the continuing innovation that occurs within the standards.  I’m referring here to the ratification of H.265, expected in early 2013.

As we saw with H.263, and H.261 before it, PC-based solutions became available toward the later stages of those standards’ lives because general purpose computing caught up over time.  H.265 promises to change the game as significantly as H.264 did six years ago, and general purpose computing will not cut it for at least a couple of years.

So what does H.265 promise?

Two main things.  Firstly, as with H.264 and H.263 before it, the first aspect to be exploited will be the reduction in bandwidth required to transmit video of a given resolution and frame rate.  So, for example, if a 1080p30 call today requires say 1.5Mbps with H.264, then H.265 could well provide this at 768kbps (if the bandwidth savings made by H.264 over H.263 are anything to go by).  Bandwidth may be getting less expensive, but this is a big deal if the number of video calls is going to increase at the rate people expect.
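As a back-of-envelope check of the numbers above (the halving factor is this post’s rule of thumb, not a measured figure):

```python
# Rule-of-thumb bandwidth estimate: H.265 at roughly half the H.264 bit rate
# for the same resolution and frame rate. The 1.5 Mbps (1536 kbps) figure for
# a 1080p30 H.264 call is the illustrative number used in the post.

H264_1080P30_KBPS = 1536   # ~1.5 Mbps, illustrative
H265_FACTOR = 0.5          # "half the bandwidth" rule of thumb

h265_estimate = H264_1080P30_KBPS * H265_FACTOR
print(f"Estimated H.265 1080p30 rate: {h265_estimate:.0f} kbps")  # 768 kbps
```

Halving the per-call rate matters less for any single call than for the aggregate: it doubles the number of simultaneous calls a given link can carry.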

Secondly, and again as we saw with H.264, we will then start to see adoption of the new features the protocol brings.  With H.264 we saw the almost immediate adoption of 720p video, and now we’re seeing 1080p deployed as standard more and more.  With H.265, 4K and 8K video resolutions become possible and, in addition, expect to see more than just dual streams: multi-stream calls with multiple video and content sources simultaneously.  And once again, this is driven by standards.

One might think, ‘why do I need 4K or even 8K video when 1080p is perfect even on a 65″ screen?’  I leave you with this thought from none other than Microsoft:

Polycom’s big news

After watching the announcements by Polycom yesterday and having time to reflect, here’s my take on what I think this all really means:

Polycom’s acquisition of HP’s video business is, I suspect, of more benefit to the HP business unit than to Polycom in the long term.  On the face of it, HP’s video business overlaps significantly with Polycom’s, the only significant difference being the development work going on within HP to bring visual communications to a mass market via its WebOS platform.  So why not continue to partner?

Why do Polycom now want a VNOC service, when with their previous acquisition of Telesuite / Destiny they hived that element off?  The HP acquisition looks and feels very similar in scope to that of Destiny some years ago, in that Polycom have acquired some large customers, an immersive Telepresence product, a network that services those systems, and a VNOC service.  Will history repeat itself, or will Polycom now enter the VNOC business proper?  My suspicion is that this will be ring-fenced for existing HP customers; as for the future of the service, it’s difficult to see Polycom continuing to grow this offering if they want to keep their partners happy.  Then of course comes the question of product overlap, as HP and Polycom provide what are essentially competing products.  Expect to see a similar strategy to that adopted by Cisco in managing its portfolio of CTS and T-series systems.  What’s really compelling, though, are the links between Polycom and HP in terms of WebOS development.  Will HP / Polycom really have a play here to compete in the tablet and handheld space, and how will it integrate into the overall portfolio?

My suspicion is that this is not a great acquisition.  Polycom have not bought any unique competencies here.  The WebOS competencies remain with HP, and meanwhile Polycom have bought an overlapping product portfolio, a VNOC service that potentially competes with its customers, and HP’s customer base.

The Microsoft connection yielded no major surprises.  Microsoft needs to get into the immersive, big-meeting-room space if it wants to compete, and Polycom has the competence to make this happen.  The announcements around SVC should also come as no surprise, although again one wonders whether the choice of SVC over H.264 baseline is more a case of market positioning than a genuine desire to drive interoperability and ubiquitous video (Cisco don’t have SVC yet: 1-0 to Polycom).

Of most interest was the announcement of the Telepresence network ecosystem, called the Open Visual Communications Consortium.  Notable by its absence again was Cisco.  No firm details were provided about how this consortium would actually work, or deliver the seamless ‘mobile telephony’ type experience.  The sentiment is one I generally applaud, but the devil is in the detail.  How will this consortium deal with cross-network billing?  How will they ensure that video traffic traversing their networks retains the quality of service required to support immersive telepresence calls?  And importantly, what happens if a call is placed from inside a consortium member’s network to a business outside it?

As I have previously suggested, we are beginning to see the hegemonisation of visual communications on the internet.  Polycom places itself here as the central player, which makes sense from a service provider perspective (it means BT doesn’t have to play second fiddle to AT&T, and vice versa), but it makes it difficult for Cisco to enter this ecosystem, and if Cisco aren’t playing, the OVCC is, simply put, little more than market positioning.

Despite this, if some good is to come of the OVCC, it has to be able to deliver a service in which customers, regardless of vendor, are able to place calls to each other across the internet, between ISPs, with guaranteed QoS and a cost structure that promotes inter-company communication rather than choking it.  I still feel this will only happen when the industry appoints a regulator; in the meantime, let’s see how Polycom fares.