MIT Grads Aim To Cut Congestion

Akamai will create a "smart" distributed network of servers to streamline traffic to the Web's five-star sites. By Chris Oakes.

A content-distribution company staffed by former MIT computer scientists is working to help ease congestion at highly trafficked Web sites.

Cambridge, Massachusetts-based Akamai Technologies said Friday it will launch FreeFlow in late March. The service redistributes content from popular Web sites to Akamai servers, shortening the distance between content and the users who want it.

Akamai's idea is that the Web's most popular sites -- say, a news site like CNN Interactive -- could sign up for FreeFlow service to reduce loads on their own Web servers and ensure faster service for their users.

Text, images, and movies would be redistributed across a network of Akamai servers. The "intelligence" in the server network determines when and where a site's content should be relocated.

"We look at both minimizing the hops to you and going only through the fastest links," explained Akamai chief operating officer Paul Sagan. "Our network has the intelligence to figure that out and make that decision."

Akamai's server network is similar in spirit to the concept of Web caching: Content is stored on a server that's closer on the network to end users. But while Web caches fill up with any manner of content, Akamai strategically distributes only content owned by its clients.

"The secret sauce at Akamai is great math," Sagan said. He credits chief scientist Thomas Leighton, who headed up the algorithm group at the MIT Laboratory for Computer Science, with making the necessary breakthroughs. Leighton led Akamai's research team with MIT graduate student Daniel Lewin, now the company's chief technology officer.

The math comes in the form of patent-pending algorithms FreeFlow uses to monitor requests for content as they come in. The algorithms are responsible for determining, for example, that sudden bad weather in New England is driving high demand among Net users there for Weather Channel pages. Making these determinations on a system without centralized control was part of the challenge, Sagan said.
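The article doesn't describe the patent-pending algorithms themselves, but the Weather Channel example amounts to spotting a regional spike in request rates. A toy sketch of that kind of detection follows; the thresholds, region names, and traffic figures are invented, and this is not Akamai's algorithm, just a picture of the idea:

```python
# Toy demand-spike detector: flag a region when its current request
# rate far exceeds its recent baseline. Data and thresholds are invented.

from collections import defaultdict, deque

class SpikeDetector:
    def __init__(self, window=10, factor=3.0):
        self.window = window      # number of past samples forming the baseline
        self.factor = factor      # spike = current rate > factor * baseline
        self.history = defaultdict(lambda: deque(maxlen=window))

    def record(self, region, requests_per_min):
        """Record a rate sample and return True if it looks like a spike."""
        past = self.history[region]
        baseline = sum(past) / len(past) if past else requests_per_min
        past.append(requests_per_min)
        return len(past) > 1 and requests_per_min > self.factor * baseline

detector = SpikeDetector()
for rate in [120, 130, 125, 118, 560]:      # sudden jump in New England
    if detector.record("new-england", rate):
        print("Demand spike in new-england: move weather pages closer by")
```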

Akamai is currently beta-testing the service with content from eight of the Web's 15 most popular sites. On average, participating sites' server loads have been cut by a factor of four to five.

"If we can convert 30 percent of [the top Web sites participating in Akamai's beta test], then we'll probably carry 10 percent of the traffic of the Internet," said Todd Dagres, general partner at Battery Ventures, which contributed a total of US$8 million in financing to the project.

"If we converted 100 percent, we'd probably carry 30 to 40 percent [of the traffic] on a given day."

Mark Cuban, president of heavily trafficked Webcasting site Broadcast.com, is skeptical of any system that promises better content delivery by duplicating content across servers. For rapidly changing content, more bandwidth to carry traffic is the only real answer, he said.

It comes down to "who has control of the routing, the bandwidth, of the CPUs [on the server]," Cuban said.

But Sagan is resolute. While the service is not well suited to sites streaming live events, he said, Akamai is accomplishing many of the things Cuban is concerned about, such as carrying content over the routes that offer the best bandwidth across the shortest distance.

The algorithms used by the network were the result of conversations between Leighton and the Web's founding father Tim Berners-Lee on how to solve "flash crowding" problems on the Net.

Akamai hopes to have at least 1,000 servers co-located on networks around the world by the end of 1999. The beta test is currently running on 300 deployed servers.