How does Facebook display its website so quickly?

In the traditional way for a web browser to display a page, the browser sends our request through the ISP, gets the server's IP address from DNS, and then sends the request to the web server (or a proxy cache) where the page is stored. The server then sends the HTML file back to the browser's IP address, along with instructions telling the browser to fetch the graphics and videos referenced by their URLs as well. The browser then downloads the CSS that the page requires. But requesting the page, getting the HTML back, and then downloading the CSS happen in sequence, and that takes time.
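To make the cost of that sequence concrete, here is a minimal sketch (the resource names and delays are invented for illustration) of a browser-like client that cannot start fetching the stylesheet until the full HTML has arrived and been parsed:

```python
# Sequential page load: the CSS fetch cannot begin until the HTML is done.
import time

def fetch(resource, seconds):
    """Simulate downloading one resource; `seconds` stands in for network time."""
    time.sleep(seconds)
    return f"<contents of {resource}>"

start = time.time()

# Step 1: download the HTML; nothing else can happen yet.
html = fetch("index.html", 0.3)

# Step 2: only after parsing the HTML does the browser learn which CSS to request.
css = fetch("style.css", 0.3)

print(f"sequential total: {time.time() - start:.2f}s")  # ~0.6s: the delays add up
```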

For Facebook, they created a modular solution called BigPipe. They divided the Facebook page into multiple chunks called pagelets. The home page consists of several pagelets: the composer pagelet, navigation pagelet, news feed pagelet, request box pagelet, ads pagelet, friend suggestion box, connection box, and so on, so the HTML document being sent back is delivered part by part. After receiving the request from the web browser, the server can send back an unclosed HTML document, so each chunk's CSS download and the steps after it can run in parallel in the meantime, which can cut the time to display the page to the user roughly in half.
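Here is a minimal sketch of that idea using Flask (my own simplification, not Facebook's actual code, which pairs server-side flushing with JavaScript that fills in placeholder divs): the server flushes an unclosed HTML shell right away, then streams each pagelet as soon as it is ready, instead of holding the whole response until the full page is built:

```python
# BigPipe-style streaming sketch: flush an unclosed shell, then each pagelet.
import time
from flask import Flask, Response

app = Flask(__name__)

# Hypothetical pagelets with made-up generation times (e.g., database queries).
PAGELETS = [
    ("navigation", 0.1),
    ("composer", 0.2),
    ("news_feed", 0.5),
    ("ads", 0.3),
]

@app.route("/")
def home():
    def generate():
        # Send the page skeleton first; note the document is left unclosed.
        yield "<html><head><title>home</title></head><body>"
        for name, cost in PAGELETS:
            time.sleep(cost)  # stand-in for generating that pagelet's content
            # Each flushed chunk lets the browser start downloading that
            # pagelet's CSS and rendering it while later pagelets are still
            # being generated on the server.
            yield f'<div id="{name}">...{name} content...</div>'
        yield "</body></html>"  # close the document only at the very end
    return Response(generate(), mimetype="text/html")

if __name__ == "__main__":
    app.run()
```

Because the browser receives the first chunks early, the per-pagelet CSS downloads overlap with the server's work on later pagelets instead of waiting behind it.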

Since all these chunks are independent of each other, each part can be displayed separately. However, when the browser requests the URLs of the other chunks, it looks up the cookie it has stored for the site. And because the other parts are still being downloaded, the cookie carries your information with each request, so the website can recognize your user ID and provide the latest information for your account. The web page then becomes a combination of chunks that work as modules and can operate separately; the browser does not need to assemble every part into one whole body before anything fits together and works.
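As a minimal sketch of that point (the cookie name, user IDs, and feed data are all invented), here is how one independent pagelet endpoint could use the cookie that the browser automatically attaches to every request in order to personalize its chunk:

```python
# One pagelet served independently, personalized via the request's cookie.
from flask import Flask, request

app = Flask(__name__)

# Hypothetical per-user data a real site would load from its backend.
FAKE_FEEDS = {"1001": ["post A", "post B"], "1002": ["post C"]}

@app.route("/pagelet/news_feed")
def news_feed_pagelet():
    # The browser sends the site's cookies with every request, so each
    # independent pagelet request still identifies the same logged-in user.
    user_id = request.cookies.get("userid", "anonymous")
    posts = FAKE_FEEDS.get(user_id, [])
    items = "".join(f"<li>{p}</li>" for p in posts)
    return f'<ul id="news_feed">{items}</ul>'

if __name__ == "__main__":
    app.run()
```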

To address the scalability problem that comes with such big data, Facebook also uses a disaggregated network, so software and hardware can be separated and compute and storage can live in different clusters, with network latency and bandwidth that are the same as or better than a local disk.
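The idea can be sketched in a few lines (this illustrates disaggregation in general, not Facebook's infrastructure): the "compute" side reads its data over the network from a separate "storage" process rather than from its own local disk, so the two tiers can be scaled independently:

```python
# Disaggregation sketch: compute fetches data over the network from storage.
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Storage tier: a process that does nothing but serve files from its disk.
storage = HTTPServer(("localhost", 8081), SimpleHTTPRequestHandler)
threading.Thread(target=storage.serve_forever, daemon=True).start()

# Compute tier: reads the data it needs over the network instead of locally,
# so either tier can be grown or replaced without touching the other.
with urllib.request.urlopen("http://localhost:8081/") as resp:
    data = resp.read()
print(f"compute tier received {len(data)} bytes from the storage tier")
```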

But what I do not understand is this: HTML5 is now promising, with new multimedia and semantic elements, better handling of unknown elements by the browser, and more powerful storage (such as Web Storage) that may replace cookies. Would BigPipe stop being useful and competitive here, or be replaced?

And I do not quite understand the opposition between apps and the World Wide Web: why do apps deviate so far from the Web, when the two seem to interact with each other?

References:

Ron White. How Computers Work. 9th ed. Que Publishing, 2007. "How the World Wide Web Works."

Janna Anderson and Lee Rainie. "The Future of Apps and Web." Pew Research Center's Internet & American Life Project, March 23, 2012.