INTERNET-DRAFT                                           Martin Hamilton
draft-hamilton-indexing-00.txt                   Loughborough University
Expires in six months                                   Daniel LaLiberte
                         National Center for Supercomputing Applications
                                                               June 1996


      Experimental HTTP methods to support indexing and searching
                Filename: draft-hamilton-indexing-00.txt


Status of this Memo

      This document is an Internet-Draft.  Internet-Drafts are working
      documents of the Internet Engineering Task Force (IETF), its
      areas, and its working groups.  Note that other groups may also
      distribute working documents as Internet-Drafts.

      Internet-Drafts are draft documents valid for a maximum of six
      months and may be updated, replaced, or obsoleted by other
      documents at any time.  It is inappropriate to use Internet-
      Drafts as reference material or to cite them other than as ``work
      in progress.''

      To learn the current status of any Internet-Draft, please check
      the ``1id-abstracts.txt'' listing contained in the Internet-
      Drafts Shadow Directories on ftp.is.co.za (Africa), nic.nordu.net
      (Europe), munnari.oz.au (Pacific Rim), ds.internic.net (US East
      Coast), or ftp.isi.edu (US West Coast).

Abstract

   This document briefly outlines current approaches to indexing and
   searching, proposes some experimental mechanisms which might be
   deployed within HTTP [1] in support of these activities, and
   concludes with a discussion of the issues raised.

   The key features which are seen as desirable are a standardized way
   of providing a local search capability on the information being made
   available by an HTTP server, and a way of reducing both the bandwidth
   consumed by indexing agents and the amount of work done by HTTP
   servers during the indexing process.

1. Introduction

   As the number of HTTP servers deployed has increased, providing
   searchable indexes of the information which they make available has
   itself become a growth industry.  As a result there are now a large
   number of "web crawlers", "web wanderers" and suchlike.

   These indexing agents typically act independently of each other, and
   do not share the information which they retrieve from the servers
   being indexed.  This can be a major cause for frustration on the part
   of the server maintainers, who see multiple requests for the same
   information coming from different indexers.  It also results in a
   large amount of redundant network traffic - with these repeated
   requests for the same objects, and the objects themselves, often
   travelling over the same physical and routing infrastructure.  To
   minimize the problems which arise from this behaviour, a number of
   techniques may be used, e.g. caching proxy servers, conditional "GET"
   requests, restricting transfers to objects which can usefully be
   indexed - such as HTML [2] documents, and the robots exclusion
   convention [3].

   From the server administrator's point of view it would be preferable
   that the HTTP servers being indexed were capable of generating
   indexing information in a standardized format themselves.  Better
   still, this information could be made available in as bandwidth-
   friendly a manner as possible - e.g. using compression, and sending
   only the indexing information for those objects which have changed
   since the indexing agent's last visit.  This would facilitate diverse
   approaches to indexing the Web, such as regional and subject-based
   indexes.

   It is also desirable that HTTP servers support a native search
   method, so that (where a suitable search back end is available) HTTP
   clients may carry out a search of the information
   provided by an HTTP server in a standardized manner.  Current
   approaches to local searching typically involve running one or more
   third party search and retrieval tools in addition to the basic HTTP
   server.  It is usually the case that search results may only be
   returned as an HTML document, whereas a structured format which was
   intended specifically for delivering search results would be
   preferable.  This could add greatly to the flexibility of the World-
   Wide Web, e.g. by making it possible to write hyperlinks in HTML
   documents which cause searches to be carried out, or to use the
   results of web crawler searches to expand a search to those HTTP
   servers where relevant documents were found, and so on.

2. Additional HTTP methods

   Of course, these indexing and searching capabilities need not be
   provided within HTTP.  A number of networked search and retrieval
   protocols are already in existence, and several approaches exist for
   building local indexes of the information made available by HTTP
   servers.  Unfortunately, since these are usually third party
   products, extra work is required in obtaining, installing and
   configuring them.  This is not going to happen unless the server
   maintainers are sufficiently motivated to devote extra time and
   effort to the tasks involved.

   Ideally, the HTTP server package would itself provide some degree of
   indexing and searching support - perhaps just by bundling third party
   software.  Unfortunately, these features tend to be seen as `value
   added', and may only be available at a price.  By redefining the HTTP
   base line to include support for them, it is hoped that the spread of
   these technologies can be encouraged, and that free software
   developers at least will implement built-in support as a standard
   feature.

   The normal HTTP content negotiation features may be used in any
   request/response pair.  In particular, the "If-Modified-Since:"
   request header should be used to indicate that the indexing agent is
   only interested in objects which have been created or modified since
   the date specified.  The request/response pair of "Accept-Encoding:"
   and "Content-Encoding:" should be used to indicate whether
   compression is desired - and if so, the preferred compression
   algorithm.
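
   For illustration, the following Python sketch shows how an indexing
   agent might combine such a conditional request with compression
   negotiation.  It is a minimal sketch only, assuming a server which
   implements the experimental META method described below, and it
   handles just the gzip encoding.

     # Sketch of an indexing agent issuing a conditional, compression-
     # aware request with an experimental method.  Hypothetical: it
     # assumes the target server implements the META method of
     # section 2.2.
     import gzip
     import http.client

     headers = {
         "Accept": "application/x-rdm, application/x-ldif",
         "Accept-Encoding": "gzip",
         "If-Modified-Since": "Mon, 01 Apr 1996 07:34:31 GMT",
     }
     conn = http.client.HTTPConnection("www.lut.ac.uk")
     # http.client passes unrecognised method names through verbatim,
     # and adds the "Host:" header itself.
     conn.request("META", "*", headers=headers)
     response = conn.getresponse()

     if response.status == 304:
         pass  # nothing has changed since the agent's last visit
     elif response.status == 200:
         body = response.read()
         if response.getheader("Content-Encoding") == "gzip":
             body = gzip.decompress(body)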

   In the following examples, "C:" is used to indicate the client side
   of the conversation and "S:" the server side; the client and
   server sides are separated by a blank line for clarity.

2.1 The COLLECTIONS method

   The COLLECTIONS method provides a means for HTTP clients to determine
   which collections of information are made available by the HTTP
   server.  This may then be used, for example by the SEARCH and META
   methods, to localize activity to a particular collection.
   Implementors should note that this collection selection is in
   addition to the virtual host selection provided by the HTTP "Host:"
   header.

   In COLLECTIONS requests, the Request-URI (to use the jargon of [1])
   component of the HTTP request should be an asterisk "*", which
   specifies that the scope of the request is for all collections of
   information made available by the server.  Alternatively, the
   Request-URI may be the URI of a particular collection, in which case
   the request is for all subcollections of the identified collection -
   i.e. a recursive traversal is implied.

   It is assumed that these Request-URIs would be in the same
   namespace used by the server for regular HTTP requests.  This would
   be in accordance with the general practice of indicating hierarchy in
   HTTP URLs using the forward slash character "/".

   e.g.

     C: COLLECTIONS * HTTP/1.1
     C: Accept: application/x-whois-data
     C: Accept-Encoding: gzip, compress
     C: Host: www.lut.ac.uk
     C:

     S: HTTP/1.1 200 OK collection info follows
     S: Content-type: application/x-whois-data
     S:
     S: [...etc...]

   Essentially, all that is strictly required at this stage is a list of
   the URIs of the relevant collections of
   information.  The META method may be used to discover further
   information about individual collections or elements of collections.

   Since collections themselves may be objects, such as Unix
   directories, it is desirable that the Request-URI be able to refer to
   the collection object itself, or the objects which form the
   collection.  To distinguish between these two roles, we suggest
   appending an asterisk "*" to the Request-URI when the objects which
   form the collection are meant - e.g. "/departments/co/" might refer
   to the collection object, and "/departments/co/*" to the objects
   which form the collection.
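
   A minimal server side sketch follows, using the Python standard
   library's http.server module.  It is illustrative only: the list of
   collections is hard-coded, and the response is returned as plain
   text rather than in the application/x-whois-data format shown above.

     # Minimal sketch of a server answering the experimental
     # COLLECTIONS method.  Illustrative only: collections are
     # hard-coded and the response is plain text.
     from http.server import BaseHTTPRequestHandler, HTTPServer

     COLLECTIONS = ["/departments/co/", "/departments/ma/", "/library/"]

     class IndexingHandler(BaseHTTPRequestHandler):
         protocol_version = "HTTP/1.1"

         # BaseHTTPRequestHandler dispatches on the method name, so the
         # experimental method is supported simply by defining
         # do_COLLECTIONS().
         def do_COLLECTIONS(self):
             if self.path == "*":
                 matches = COLLECTIONS  # every collection on the server
             else:
                 # Request-URI names a collection: list its
                 # subcollections.
                 matches = [c for c in COLLECTIONS
                            if c.startswith(self.path) and c != self.path]
             body = ("\n".join(matches) + "\n").encode("ascii")
             self.send_response(200)
             self.send_header("Content-Type", "text/plain")
             self.send_header("Content-Length", str(len(body)))
             self.end_headers()
             self.wfile.write(body)

     if __name__ == "__main__":
         HTTPServer(("", 8080), IndexingHandler).serve_forever()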

2.2 The META method

   The META method is drawn from the Collector/Gatherer protocol used by
   the Harvest software [4].  It may be used to request indexing
   information about a particular collection of information, or about
   an individual object within the collection.

   The scope of the request may be indicated via the Request-URI.

   e.g.

     C: META * HTTP/1.1
     C: Accept: application/x-rdm, application/x-ldif
     C: Accept-Encoding: gzip, compress
     C: If-Modified-Since: Mon, 01 Apr 1996 07:34:31 GMT
     C: Host: www.lut.ac.uk
     C:

     S: HTTP/1.1 200 OK metadata follows
     S: Content-type: application/x-rdm
     S:
     S: [...etc...]

   Since some servers might want indexing to be done by an associated
   server, rather than doing it themselves, a request for indexing
   information (or by extension searching services) might reasonably be
   redirected to another server.
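
   Continuing the hypothetical server sketch of section 2.1, such a
   redirection might be implemented by adding a do_META() method to the
   IndexingHandler class; the associated indexing server named in the
   sketch is invented for the purpose of the example.

     # A do_META() method to be added to the IndexingHandler class
     # sketched in section 2.1.  It redirects indexing requests to an
     # associated server; "index.lut.ac.uk" is a hypothetical name.
     def do_META(self):
         target = ("http://index.lut.ac.uk/" if self.path == "*"
                   else "http://index.lut.ac.uk" + self.path)
         self.send_response(302)
         self.send_header("Location", target)
         self.send_header("Content-Length", "0")
         self.end_headers()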

2.3 The SEARCH method

   The SEARCH method embeds a query in the HTTP headers component of the
   request, using the search syntax defined for the WHOIS++ protocol
   [5].

   The Request-URI for a SEARCH request should be either "*", for the
   server as a whole, or the URI of a collection.  The parameters of the
   search should be in additional header lines.  The "Query:" header
   specifies which elements of the collection should be selected, just as
   for the META request.

   e.g.

     C: SEARCH /departments/co HTTP/1.1
     C: Accept: application/x-whois-data, text/html
     C: Host: www.lut.ac.uk
     C: Query: keywords=venona
     C:

     S: HTTP/1.1 200 OK search results follow
     S: Content-type: application/x-whois-data
     S:
     S: [...etc...]

   WHOIS++ requests normally fit onto a single line, and no state is
   preserved between requests.  Consequently, embedding WHOIS++ requests
   within HTTP requests does not add greatly to implementation
   complexity.
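
   As an illustration, a do_SEARCH() method for the handler sketched in
   section 2.1 might parse the "Query:" header as a single
   attribute=value pair and return the URIs of matching objects.  The
   sketch below is hypothetical: the keyword index is hard-coded, and
   results are returned as plain text rather than in the WHOIS++
   response format of [5].

     # A do_SEARCH() method (and supporting class attribute) to be
     # added to the IndexingHandler class sketched in section 2.1.
     # The keyword index and the plain text results are purely
     # illustrative.
     DOCUMENTS = {
         "/departments/co/crypto.html": ["venona", "cryptanalysis"],
         "/departments/co/index.html":  ["computing", "prospectus"],
     }

     def do_SEARCH(self):
         query = self.headers.get("Query", "")
         attribute, _, value = query.partition("=")
         hits = []
         if attribute.strip().lower() == "keywords":
             for uri, keywords in self.DOCUMENTS.items():
                 in_scope = self.path == "*" or uri.startswith(self.path)
                 if in_scope and value.strip().lower() in keywords:
                     hits.append(uri)
         body = ("\n".join(hits) + "\n").encode("ascii")
         self.send_response(200)
         self.send_header("Content-Type", "text/plain")
         self.send_header("Content-Length", str(len(body)))
         self.end_headers()
         self.wfile.write(body)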

3. Discussion

   There is no widespread agreement on the form which the indexing
   information retrieved by web crawlers would take, and it may be the
   case that different web crawlers are looking for different types of
   information.  As the number of indexing agents deployed on the
   Internet continues to grow, it seems possible that they will
   eventually proliferate to the point where it becomes infeasible to
   retrieve the full content of each and every indexed object from each
   and every HTTP server.

   This said, one way around this problem would be to distribute the
   indexing load amongst a number of servers which pooled their results
   - splitting the work along geographical and topological lines.  To
   put this discussion in perspective, however, the need to do this does
   not yet appear to have arisen.

   On the format of indexing information there is something of a
   dichotomy between those who see the indexing information as a long
   term catalogue entry, perhaps to be generated by hand, and those who
   see it merely as an interchange format between two programs - which
   may be generated automatically.  Ideally the same format would be
   useful in both situations, but in practice it may be difficult to
   isolate a sufficiently small subset of a rich cataloguing format for
   machine use.

   Consequently, this document will not make any proposals about the
   format of the indexing information.  By extension, it will not
   propose a default format for search results.

   However, it seems reasonable that clients be able to request that
   search results be returned formatted as HTML, though this in itself
   is not a particularly meaningful concept - since there are a variety
   of languages which all claim to be HTML based.  A tractable approach
   for implementors would be to return HTML 2 unless the server is aware
   of more advanced HTML features supported by the
   client.  Currently, much of this feature negotiation is based upon
   the value of the HTTP "User-Agent:" header, but it is hoped that a
   more sophisticated mechanism will eventually be developed.

   The use of the WHOIS++ search syntax is based on the observation that
   most Internet based search and retrieval protocols provide little
   more than an attribute/value based search capability.  WHOIS++
   offers a simple yet flexible search capability in arguably the
   simplest and most readily implemented manner.  Other protocols
   typically add extra complexity in delivering requests and responses,
   e.g. by using binary encodings, by including management features
   which are rarely exercised over wide area networks, and by providing
   result set management features, which are desirable but add to
   implementation complexity.

   This document has suggested that search requests be presented using a
   new HTTP method, primarily so as to avoid confusion when dealing with
   servers which do not support searching.  This approach has the
   disadvantage that there is a large installed base of clients which
   would not understand the new method, a large proportion of which have
   no way of supporting new HTTP methods.

   An alternative strategy would be to implement searches embedded
   within GET requests.  This would complicate processing of the GET
   request, but not require any changes on the part of the client.  It
   would also allow searches to be written in HTML documents without any
   changes to the HTML syntax - they would simply appear as regular
   URLs.  Searches which required a new HTTP method would presumably
   have to be delineated by an additional component in the HTML anchor
   tag.
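
   For example, a search embedded in a GET request might be written as
   an ordinary anchor along the following (purely hypothetical) lines,
   with the query carried in the URL:

     <A HREF="http://www.lut.ac.uk/departments/co?keywords=venona">
       Documents about VENONA in the Department of Computer Studies</A>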

   This problem does not arise with the collection of indexing
   information, since the number of agents performing the collection
   will be comparatively small, and there is no perceived benefit from
   being able to write HTML documents which include pointers to indexing
   information - rather the opposite, in fact.

   In a future development, the HTTP Protocol Extension Protocol [6]
   could provide a means for HTTP/1.1 based applications which use these
   HTTP extensions to share information about supported options, version
   numbers, and so on.  For example, the "Protocol:" header might be
   used to indicate an alternative query language instead of the simple
   WHOIS++ attribute-value syntax, but we suggest that the WHOIS++
   syntax should be supported by every implementation of the SEARCH
   method to provide a common base-line.

   A sample PEP-enabled SEARCH...

     C: SEARCH * HTTP/1.1
     C: Accept: application/x-whois-data, text/html
     C: Host: www.lut.ac.uk
     C: Protocol: {ftp://ftp.internic.net/rfc/rfc1835.txt {str req}}
     C: Query: keywords=venona
     C:

     S: HTTP/1.1 200 OK search results follow
     S: Content-type: application/x-whois-data
     S: Protocol: {ftp://ftp.internic.net/rfc/rfc1835.txt {str req}}
     S:
     S: [...etc...]

   It may be noted that the three experimental methods proposed in this
   document are very similar - differing essentially in the scope of the
   information which they apply to.  It may be desirable to collapse at
   least the COLLECTIONS and META requests down to a single request,
   using an extra HTTP header, say "Scope:", to indicate the scope of
   the message.
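
   For example, a combined request might take the following form
   (purely hypothetical - the "Scope:" header is not defined elsewhere
   in this document):

     C: META /departments/co/ HTTP/1.1
     C: Scope: collections
     C: Accept: application/x-rdm
     C: Host: www.lut.ac.uk
     C: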

4. Security considerations

   Most Internet protocols which deal with distributed indexing and
   searching are careful to note the dangers of allowing unrestricted
   access to the server.  This is normally on the grounds that
   unscrupulous clients may make off with the entire collection of
   information - perhaps resulting in a breach of users' privacy, in the
   case of White Pages servers.

   In the web crawler environment, these general considerations do not
   apply, since the entire collection of information is already "up for
   grabs" to any person or agent willing to perform a traversal of the
   server.  Similarly, it is not likely to be a privacy problem if
   searches yield a large number of results.

   One exception, which should be noted by implementors, is that it is a
   common practice to have some private information on a public HTTP
   server - perhaps limiting access to it on the basis of passwords, IP
   addresses, network numbers, or domain names.  These restrictions
   should be considered when preparing indexing information or search
   results, so as to avoid revealing private information to the Internet
   as a whole.
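
   As a sketch of the kind of check intended, a server might filter out
   access-restricted areas before emitting any indexing information or
   search results.  The Python fragment below is hypothetical; a real
   server would derive the restricted prefixes from its own access
   control configuration.

     # Hypothetical filter applied before indexing information or
     # search results are generated: URIs under an access-restricted
     # prefix are simply left out of the output.
     RESTRICTED_PREFIXES = ["/private/", "/staff-only/"]

     def publicly_indexable(uri):
         """Return True if the URI may appear in indexing output."""
         return not any(uri.startswith(p) for p in RESTRICTED_PREFIXES)

     # e.g.  indexable = [u for u in all_uris if publicly_indexable(u)]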

   It should also be noted that many of these access control mechanisms
   are too trivial to be trusted over wide area networks such as the
   Internet.  Domain names and IP addresses are readily forged,
   passwords are readily sniffed, and connections are readily hijacked.
   Strong cryptographic authentication and session level encryption
   should be used in any cases where security is a major concern.

5. Conclusions

   There can be no doubt that the measures proposed in this document are
   implementable - in fact they have already been implemented and
   deployed, though on nothing like the scale of HTTP.  It is a matter
   for debate whether they are needed or desirable as additions to HTTP,
   but it is clear that adding search support to HTTP would come at some
   implementation cost.  Indexing
   support would be trivial to implement, once the issue of formatting
   had been resolved.

6. Acknowledgements

   Thanks to Jon Knight, Liam Quinn, Mike Schwartz, and <<your name
   here!!>> for their comments on draft versions of this document.

   This work was supported by grants from the UK Electronic Libraries
   Programme (eLib) and the European Commission's Telematics for
   Research Programme.

   The Harvest software was developed by the Internet Research Task
   Force Research Group on Resource Discovery, with support from the
   Advanced Research Projects Agency, the Air Force Office of Scientific
   Research, the National Science Foundation, Hughes Aircraft Company,
   Sun Microsystems' Collaborative Research Program, and the University
   of Colorado.

7. References

   Request For Comments (RFC) and Internet Draft documents are available
   from <URL:ftp://ftp.internic.net> and numerous mirror sites.

         [1]         R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys,
                     J. C. Mogul.  "Hypertext Transfer Protocol --
                     HTTP/1.1", Internet Draft (work in progress).  June
                     1996.

         [2]         T. Berners-Lee, D. Connolly.  "Hypertext Markup
                     Language - 2.0", RFC 1866.  November 1995.

         [3]         M. Koster.  "A Standard for Robot Exclusion."  Last
                     updated March 1996.
                     <URL:http://info.webcrawler.com/mak/projects/robots/
                     norobots.html>

         [4]         C. M. Bowman, P. B. Danzig, D. R. Hardy, U. Manber,
                     M. F. Schwartz, and D. P. Wessels. "Harvest: A
                     Scalable, Customizable Discovery and Access Sys-
                     tem", Technical Report CU-CS-732-94, Department of
                     Computer Science, University of Colorado, Boulder,
                     August 1994.
                     <URL:ftp://ftp.cs.colorado.edu/pub/cs/techreports/sc
                     hwartz/HarvestJour.ps.Z>

         [5]         P. Deutsch, R. Schoultz, P. Faltstrom & C. Weider.
                     "Architecture of the WHOIS++ service", RFC 1835.
                     August 1995.

         [6]         R. Khare.  "PEP: An Extension Mechanism for
                     HTTP/1.1", Internet Draft (work in progress).
                     February 1996.

8. Authors' Addresses

   Martin Hamilton
   Department of Computer Studies
   Loughborough University of Technology
   Leics. LE11 3TU, UK

   Email: m.t.hamilton@lut.ac.uk

   Daniel LaLiberte
   National Center for Supercomputing Applications
   152 CAB
   605 E Springfield
   Champaign, IL 61820

   Email: liberte@ncsa.uiuc.edu

                  This Internet Draft expires XXXX, 1996.