How to use website-scraper - 1 common example

To help you get started, we’ve selected a website-scraper example based on a popular way the package is used in public projects.


Example from joehand / web-to-dat, index.js (view on GitHub)
  // Excerpt from index.js: `url`, `opts`, `cb`, `dat`, and `DEST_DIR`, along with
  // the `urlSlug`, `path`, and `scraper` (website-scraper) modules, are defined
  // earlier in the file.
  // Build a filesystem-safe directory name from the URL.
  var slug = urlSlug(url, {separator: '_'})
  // TODO: this should be an option in urlSlugger...
  slug = slug.replace('http_', '').replace('https_', '')
  var destDir = opts.destDir || path.join(DEST_DIR, slug + '_' + Date.now())
  var scrapeOpts = {
    urls: [url],
    directory: destDir,
    // Download the images, stylesheets, and scripts referenced by the page.
    sources: [
      {selector: 'img', attr: 'src'},
      {selector: 'link[rel="stylesheet"]', attr: 'href'},
      {selector: 'script', attr: 'src'}
    ]
  }

  // Scrape the site into destDir, then add the result to a Dat archive.
  scraper.scrape(scrapeOpts, function (err, result) {
    if (err) return cb(err)
    console.log('site scraped to: ', destDir)
    dat.add(destDir, function (err, link) {
      if (err) return cb(err)
      console.log('dat created, link:', link)
      cb(null, destDir, dat)
    })
  })
}
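
The callback style above matches the older website-scraper releases this project was written against. Newer major versions of the library export a promise-returning function instead, so the scrape step of the same flow could be rewritten roughly as in this sketch (option names are unchanged; the Dat step is left out, and the exact behaviour should be checked against the version you install):

// Sketch only: assumes a newer, promise-based release of website-scraper.
const scrape = require('website-scraper')

async function scrapeSite (url, destDir) {
  // Same options as above: the page plus its images, stylesheets, and scripts.
  const result = await scrape({
    urls: [url],
    directory: destDir,   // must not already exist
    sources: [
      {selector: 'img', attr: 'src'},
      {selector: 'link[rel="stylesheet"]', attr: 'href'},
      {selector: 'script', attr: 'src'}
    ]
  })
  console.log('site scraped to:', destDir)
  return result           // array of downloaded resources
}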

website-scraper

Download website to a local directory (including all css, images, js, etc.)
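
In its simplest form the package only needs a list of URLs and an output directory. A minimal sketch, assuming a promise-based release installed via require (the newest majors may need import instead); when no sources are given, the library downloads its default set of resources (images, stylesheets, scripts, and so on):

const scrape = require('website-scraper')

// Download https://example.com and its resources into ./example.
// The directory must not already exist.
scrape({
  urls: ['https://example.com'],
  directory: './example'
}).then(function (resources) {
  console.log('downloaded', resources.length, 'resources')
}).catch(function (err) {
  console.error('scrape failed:', err)
})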

License: MIT

Package health score: 64 / 100
