THINTERNET - LIFE AT THE END OF A TETHER

Citation
H. Shrikumar and R. Post, Thinternet - life at the end of a tether, Computer Networks and ISDN Systems, 27(3), 1994, pp. 375-385
Citations number
15
Subject Categories
Computer Sciences; System Science; Telecommunications; Engineering, Electrical & Electronic; Computer Science Information Systems
ISSN journal
0169-7552
Volume
27
Issue
3
Year of publication
1994
Pages
375-385
Database
ISI
SICI code
0169-7552(1994)27:3<375:T-LATE>2.0.ZU;2-3
Abstract
As the Internet continues its exponential growth, the user profile is changing. Many of the newer Internet hosts are personal workstations, often connected by dial-up or other slow links. We examine some factors that motivate or mandate "thin" (low-bandwidth) connections to the Internet. We notice that the motivations for adopting thin links in the West can be different from those in developing countries. Using a profile of such typical users, we show how techniques exist that allow practical and adequately efficient use of the Internet even "at the end of a tether". We are exploring the use of these methods in routine Internet use from a site in India (a software development laboratory, multi-user LAN, connected to an Internet service provider through an expensive dial-up link) and from mobile computers (e.g., HP100LX and Gateway Handbook) in the US. In each case the user's Internet access is through a thin link, with a bandwidth somewhere between 2400 bps and 28.8 kbps. Local caching and prefetching of resources naturally suggests itself as a useful candidate. It appears that transparent replay of application protocols is a practical way to retrofit resource caching into existing (shrinkwrapped) software. One promising method which works with most services of interest is Postel spoofing. Given the "browsing" mode of network usage, progressive encoding mechanisms are shown to effectively reduce the access time for particularly large Internet objects, such as Web pages. An ideal progressive encoding of a resource sends a gross quality rendering followed by successive refinements. Since only a fraction of the images retrieved in a session actually have long-term value, such techniques can reduce on-line bandwidth demands by an order of magnitude. Obviously, such encoding methods apply also to large archive and distribution files (such as from FTP archives).
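The progressive-encoding idea described in the abstract (a gross rendering first, then successive refinements, with the option to stop early) can be sketched as follows. The pass structure and quantisation steps here are illustrative assumptions, not the encoding the authors actually used.

```python
# Illustrative sketch of progressive encoding over a thin link (not the
# paper's actual scheme): each pass quantises the resource more finely,
# so the client sees a gross rendering first and can abort at any time.

def progressive_passes(pixels, levels=3):
    """Yield successively finer renderings of a list of 8-bit pixel values.

    Early passes use a coarse quantisation step; the final pass (step 1)
    reproduces the original exactly. Stopping after an early pass saves
    the bandwidth of all later refinement passes.
    """
    for level in range(levels - 1, -1, -1):
        step = 1 << (level * 2)              # steps 16, 4, 1 for levels=3
        yield [p - (p % step) for p in pixels]

image = [13, 200, 97, 255, 64]
for i, rendering in enumerate(progressive_passes(image), 1):
    print(f"pass {i}: {rendering}")
```

Under this sketch, a user browsing over a 2400 bps link could abandon the transfer after the first coarse pass, paying only a fraction of the full object's cost.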
Filtering and relevance feedback have been recognised as effective tools in overcoming information overload. Many sophisticated general techniques are a subject of active research. However, we found that exploiting certain behaviour patterns typical in Internet usage permits particularly efficient filtering using surprisingly simple methods. We apply this to USENET communication, extend it to other services (FTP, HTTP, Gopher, mail, etc.), and outline a method of filtering network hypermedia on the basis of relevance contours. Our method recognises the amount of selected information that can be digested by a user in a day, and maximises the value of the packet so selected. It also differs from others by integrating all network hypermedia and selecting and filtering items without regard to the service they were accessed from (USENET, Web, Gopher, FTP, etc.). It does not depend on a priori categorisation such as news groups and the consequent need for explicit subscription and unsubscription.
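The daily-packet selection described above can be sketched as a simple budgeted choice across services. The relevance scores, item sizes, and greedy density heuristic here are illustrative assumptions, not the paper's relevance-contour method.

```python
# Illustrative sketch (not the paper's algorithm): score items from any
# service on one relevance scale, then fill a fixed daily "digestible"
# budget with the highest relevance-per-unit-size items first.

def select_daily_packet(items, daily_budget):
    """items: (name, relevance, size) tuples; returns the names chosen."""
    chosen, used = [], 0
    # Greedy by relevance density, ignoring the source service entirely,
    # so USENET, Web, Gopher and FTP items compete on equal terms.
    for name, relevance, size in sorted(items, key=lambda it: it[1] / it[2],
                                        reverse=True):
        if used + size <= daily_budget:
            chosen.append(name)
            used += size
    return chosen

items = [("news:comp.os.minix", 9, 3),   # hypothetical (name, relevance, size)
         ("ftp:patchlevel.tar", 4, 4),
         ("web:thinternet.html", 8, 2),
         ("gopher:menu", 1, 5)]
print(select_daily_packet(items, daily_budget=6))
```

The budget plays the role of "what a user can digest in a day"; no per-group subscription is needed, since items are admitted or dropped purely on their scored value.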