As the Internet continues its exponential growth, the user profile is changing. Many of the newer Internet hosts are personal workstations, often connected by dial-up or other slow links. We examine some factors that motivate or mandate "thin" (low-bandwidth) connections to the Internet, and note that the motivations for adopting thin links in the West can differ from those in developing countries. Using a profile of such typical users, we show that techniques exist that allow practical and adequately efficient use of the Internet even "at the end of a tether". We are exploring the use of these methods in routine Internet use from a site in India (a software development laboratory with a multi-user LAN, connected to an Internet service provider through an expensive dial-up link) and from mobile computers (e.g., an HP 100LX and a Gateway Handbook) in the US. In each case the user's Internet access is through a thin link, with a bandwidth between 2400 bps and 28.8 kbps.

Local caching and prefetching of resources naturally suggest themselves as useful candidates. Transparent replay of application protocols appears to be a practical way to retrofit resource caching into existing (shrinkwrapped) software; one promising method that works with most services of interest is Postel spoofing.

Given the "browsing" mode of network usage, progressive encoding mechanisms are shown to effectively reduce the access time for particularly large Internet objects, such as Web pages. An ideal progressive encoding of a resource sends a gross-quality rendering followed by successive refinements. Since only a fraction of the images retrieved in a session actually have long-term value, such techniques can reduce on-line bandwidth demands by an order of magnitude. Such encoding methods also apply to large archive and distribution files (such as those from FTP archives).

Filtering and relevance feedback have been recognised as effective tools for overcoming information overload, and many sophisticated general techniques are the subject of active research. However, we found that exploiting certain behaviour patterns typical of Internet usage permits particularly efficient filtering using surprisingly simple methods. We apply this to USENET communication, extend it to other services (FTP, HTTP, Gopher, mail, etc.), and outline a method of filtering network hypermedia on the basis of relevance contours. Our method recognises the amount of selected information that a user can digest in a day, and maximises the value of the packet so selected. It also differs from others by integrating all network hypermedia, selecting and filtering items without regard to the service they were accessed from (USENET, Web, Gopher, FTP, etc.). It does not depend on a priori categorisation such as newsgroups, with the consequent need for explicit subscription and unsubscription.
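The retrofitting of resource caching by replaying application-protocol exchanges can be illustrated with a minimal sketch. This is not the abstract's Postel-spoofing method itself, only a generic interposed cache that records request/response pairs and replays recorded responses locally; the `slow_fetch` function and the request-string keying are illustrative assumptions.

```python
# Sketch: transparent replay to retrofit caching in front of
# unmodified ("shrinkwrapped") client software. An interposed proxy
# records each request/response exchange; a repeated request is
# answered by replaying the stored response locally, so only cache
# misses cross the thin link. The keying scheme and fetch function
# are illustrative assumptions, not the paper's implementation.

class ReplayCache:
    def __init__(self, fetch):
        self._fetch = fetch   # real over-the-wire fetch (the slow link)
        self._store = {}      # request -> recorded response
        self.hits = 0

    def request(self, req):
        if req in self._store:       # replay the recorded exchange
            self.hits += 1
            return self._store[req]
        resp = self._fetch(req)      # miss: use the thin link once
        self._store[req] = resp
        return resp

calls = []
def slow_fetch(req):
    calls.append(req)                # stand-in for a dial-up round trip
    return "response for " + req

cache = ReplayCache(slow_fetch)
cache.request("GET /index.html")
cache.request("GET /index.html")     # replayed locally, no link traffic
```

To the client the second request is indistinguishable from a live exchange, which is what makes the approach transparent.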
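The progressive encoding described above (a gross-quality rendering followed by successive refinements) can be sketched as follows. This is a minimal illustration using a simple averaging pyramid on a one-dimensional sequence of pixel values, not the encoding actually used in the work; real schemes (e.g., progressive image formats) are more elaborate.

```python
# Sketch of progressive encoding: send a coarse base layer first,
# then successive refinement layers. Uses a simple averaging pyramid
# on a 1-D signal; an illustrative assumption, not the paper's encoder.

def encode_progressive(signal, levels):
    """Split `signal` into a coarse base layer plus refinement layers.

    Each level halves the resolution by averaging adjacent pairs and
    records the per-pair differences needed to restore full detail.
    """
    layers = []
    current = list(signal)
    for _ in range(levels):
        coarse = [(current[i] + current[i + 1]) / 2
                  for i in range(0, len(current), 2)]
        detail = [(current[i] - current[i + 1]) / 2
                  for i in range(0, len(current), 2)]
        layers.append(detail)            # refinement layer (sent later)
        current = coarse                 # next, coarser approximation
    return current, list(reversed(layers))  # base layer goes first

def decode_progressive(base, layers):
    """Yield successively better approximations of the signal.

    A client can render the first result immediately and improve it
    as refinement layers arrive over the thin link; the user may
    abort the transfer once the rendering is good enough.
    """
    current = list(base)
    for detail in layers:
        refined = []
        for avg, d in zip(current, detail):
            refined.extend([avg + d, avg - d])
        current = refined
        yield current

signal = [10, 12, 9, 11, 40, 42, 39, 41]
base, layers = encode_progressive(signal, levels=3)
approximations = list(decode_progressive(base, layers))
```

The bandwidth saving comes from aborting early: if a coarse rendering shows the object has no long-term value, the remaining refinement layers are never transmitted.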
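The daily-packet selection described above (maximising the value of what a user can digest in a day) can be sketched as a budgeted selection problem. The greedy relevance-per-minute heuristic, the scores, and the item records below are illustrative assumptions, not the paper's relevance-contour algorithm; note that items from different services are ranked together, without regard to where they were accessed from.

```python
# Sketch of the "daily packet" selection: given relevance-scored items
# drawn from any service (USENET, Web, Gopher, FTP), pick the subset
# that maximises total relevance without exceeding the amount a user
# can digest in a day. Greedy value-density knapsack; the scores and
# reading-time estimates are illustrative assumptions.

def select_daily_packet(items, budget_minutes):
    """Take items in decreasing relevance-per-minute order until the
    daily reading budget is exhausted."""
    ranked = sorted(items,
                    key=lambda it: it["relevance"] / it["minutes"],
                    reverse=True)
    packet, used = [], 0
    for it in ranked:
        if used + it["minutes"] <= budget_minutes:
            packet.append(it)
            used += it["minutes"]
    return packet

items = [
    {"id": "news:comp.lang.c/123",  "relevance": 0.9, "minutes": 10},
    {"id": "http://example.org/a",  "relevance": 0.7, "minutes": 30},
    {"id": "ftp://example.org/b",   "relevance": 0.4, "minutes": 5},
    {"id": "gopher://example.org/c", "relevance": 0.2, "minutes": 20},
]
packet = select_daily_packet(items, budget_minutes=45)
```

Because selection is driven only by relevance and reading cost, no per-service subscription list (such as newsgroup subscriptions) is needed.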