• DocumentCode
    3081275
  • Title
    CrawlPart: Creating Crawl Partitions in Parallel Crawlers

  • Author
    Gupta, Swastik; Bhatia, Komal Kumar

  • Author_Institution
    Dept. of Comput. Eng., YMCA Univ. of Sci. & Technol., Faridabad, India
  • fYear
    2013
  • fDate
    24-26 Aug. 2013
  • Firstpage
    137
  • Lastpage
    142
  • Abstract
    With the ever-growing size and scale of the WWW [1], efficient ways of exploring its content are of increasing importance. How can we efficiently retrieve information from it through crawling? In this "era of tera" and of multi-core processors, multi-threaded processes are a natural answer. Better still, how can we improve crawling performance by using parallel crawlers that work independently? This paper examines the fundamental advantages and challenges arising from the design of parallel crawlers [4], focusing mainly on URL distribution among the parallel crawling processes. How to distribute URLs from the URL frontier to the concurrently executing crawling threads is an orthogonal problem in its own right. The paper addresses it by designing a framework that partitions the URL frontier into several URL queues and orders the URLs within each partition.
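    The partitioning idea described in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the CrawlPart algorithm itself: assigning URLs to partitions by hashing the hostname and ordering each queue lexicographically are assumptions made here for concreteness, whereas the paper defines its own distribution and ordering criteria.

```python
import hashlib
from collections import deque
from urllib.parse import urlparse

def partition_frontier(frontier_urls, num_crawlers):
    """Split a URL frontier into one ordered queue per crawling process.

    Illustrative sketch: hash-based host assignment keeps every URL of a
    given site in the same partition, so the parallel crawlers can work
    independently; URLs inside each partition are then ordered
    (here, lexicographically).
    """
    partitions = [[] for _ in range(num_crawlers)]
    for url in frontier_urls:
        host = urlparse(url).netloc.lower()
        # Stable hash so the same host always maps to the same crawler.
        idx = int(hashlib.sha1(host.encode("utf-8")).hexdigest(), 16) % num_crawlers
        partitions[idx].append(url)
    return [deque(sorted(p)) for p in partitions]

if __name__ == "__main__":
    frontier = [
        "http://example.org/a",
        "http://example.org/b",
        "http://news.example.com/1",
        "http://blog.example.net/x",
    ]
    for i, queue in enumerate(partition_frontier(frontier, num_crawlers=2)):
        print("crawler", i, ":", list(queue))
```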
  • Keywords
    Internet; multi-threading; multiprocessing systems; search engines; URL distribution; WWW; crawl partitions; multicore processors; multithreaded processes; parallel crawlers; search engine; Crawlers; Databases; HTML; Servers; Web pages; World Wide Web; Scalability; Web-Partitioning; parallel crawler
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Computational and Business Intelligence (ISCBI), 2013 International Symposium on
  • Conference_Location
    New Delhi
  • Print_ISBN
    978-0-7695-5066-4
  • Type
    conf
  • DOI
    10.1109/ISCBI.2013.36
  • Filename
    6724340