Pitfalls of monitoring Akamai SureStream streaming video performance.

A customer recently asked me to look into some details about Akamai SureStream. They had just held a webinar across multiple WAN locations and found that “it failed to some degree”.

SureStream encodes a recording at the highest selected bit rate (where bit rate is a function of frame size and frame rate), and then adds additional “duress” rates. At the streaming media server, SureStream lets the server shift bit rates for each client as that client’s connection performance varies; i.e., if a session encounters congestion, the server can “downshift” to a lower bit rate without dropping the session.
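
As a rough mental model (this is a sketch of the behavior, not RealNetworks’ or Akamai’s actual server logic; the rate ladder and function name are hypothetical), the per-client rate selection works something like this:

```python
# Hypothetical sketch of SureStream-style rate shifting. The encoded
# rate ladder below matches the example used later in this post.
ENCODED_RATES_KBPS = [512, 128, 52, 28]  # highest rate plus "duress" rates

def pick_rate(available_bandwidth_kbps: float) -> int:
    """Return the highest encoded rate the client's connection can sustain."""
    for rate in sorted(ENCODED_RATES_KBPS, reverse=True):
        if rate <= available_bandwidth_kbps:
            return rate
    return min(ENCODED_RATES_KBPS)  # worst case: fall to the lowest duress rate

print(pick_rate(600))  # 512 -- plenty of headroom, full rate
print(pick_rate(200))  # 128 -- congestion forces a downshift
print(pick_rate(20))   # 28  -- below every rate, serve the floor
```

The key point is that the shift happens per client, mid-session, without tearing down the connection.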

If you are trying to monitor the performance of these sessions from a centralized network location, there are some pitfalls to consider.

Pitfall #1: if a customer serves a SureStream-encoded media file from a host which isn’t configured for SureStream (such as a plain HTTP server), the client will receive the full bit rate PLUS each of the embedded duress rates. Assuming the content was encoded at 512 kbps with additional duress rates of 128 kbps, 52 kbps, and 28 kbps, and was hosted on a non-SureStream server, the client would need 720 kbps (512 + 128 + 52 + 28) of bandwidth to stream without buffering delays or other problems.
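
The back-of-the-envelope arithmetic, using the same example rates:

```python
# Bandwidth needed when a SureStream file is served from a plain HTTP
# server: the client receives every embedded rate, not just one.
rates_kbps = [512, 128, 52, 28]

surestream_host = max(rates_kbps)      # 512 kbps: one rate at a time
misconfigured_host = sum(rates_kbps)   # 720 kbps: all rates at once

print(f"Properly configured host: {surestream_host} kbps per client")
print(f"Misconfigured host:       {misconfigured_host} kbps per client")
```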

Solution: Verify the hosting server is properly configured.

Pitfall #2: SureStream is opportunistic. The client/server will automatically try to use the highest bit rate available unless the session runs into problems or the user changes their client configuration to prefer a lower rate. Remember those “cable hog” commercials? Depending upon the software client, the end user may not even have an option to select a lower bit rate.

Solution: This may require a creative combination of options. If possible, try to ensure the end users have a client which can be configured. Additionally, try to deploy monitoring agents to the end users. The OPNET ACE Capture Agent is very useful in these situations.

Pitfall #3: it’s difficult to monitor the performance of the various bit rates relative to the media server and the network. Each client has a direct connection (assuming no multicast layer) to the media server, so each client receives only one bit rate stream at a time. Depending upon congestion, that client’s bit rate stream might change frequently during the session. Monitoring streaming performance at another bit rate and/or another location would require monitoring one or more additional client connections.
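
To give a feel for what the analysis has to do, here is a minimal sketch, assuming you have already reduced a capture to (time, client, observed rate) samples — the record format and addresses below are made up for illustration; getting real captures to this point requires per-flow throughput measurement:

```python
from collections import defaultdict

# Hypothetical capture summary: (seconds into session, client IP, rate in kbps).
samples = [
    (0,  "10.1.1.5", 512), (30, "10.1.1.5", 128), (60, "10.1.1.5", 512),
    (0,  "10.2.2.9", 512), (30, "10.2.2.9", 512),
]

shifts = defaultdict(list)
last_rate = {}
for t, client, rate in samples:
    # Record every rate transition per client; frequent downshifts
    # flag the clients most likely suffering congestion.
    if client in last_rate and rate != last_rate[client]:
        shifts[client].append((t, last_rate[client], rate))
    last_rate[client] = rate

for client, events in shifts.items():
    for t, old, new in events:
        print(f"{client}: {old} -> {new} kbps at t={t}s")
```

Even this toy version shows the problem: the picture is entirely per-client, so a single capture point only sees the rates flowing past it, not what each remote user experienced.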

Solution: This can be very challenging to monitor and analyze from a single centralized capture point. Even if you can identify each of the incoming sessions, you really won’t get much insight into the user experience at the remote end. You could capture everything and then try to match the captures against user complaints, but it is far more efficient and accurate to use remote capture agents. Again, this is the sort of thing where the OPNET ACE Capture Agents are quite effective.

When looking at this customer’s network from the centralized monitoring tools, it was difficult to identify which end users were having problems. With the remote monitoring agents, we could quickly see both the problems and their causes.
