[NTLUG:Discuss] copying web documents

Jay Urish j at unixwolf.net
Thu May 18 09:28:30 CDT 2006


You need a proggy called Website Extractor.
It runs under windoze.
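If you'd rather stay on Linux, wget itself can usually be talked into grabbing just the document's tree instead of the whole site. A sketch, assuming GNU wget; the URL is a placeholder for the document's table-of-contents page, and the User-Agent string is just an example:

```shell
#!/bin/sh
# Placeholder: point this at the document's table of contents.
URL="http://example.com/book/index.html"

# -r    recurse into links
# -np   (--no-parent) never climb above the starting directory,
#       so wget stays inside the document's own tree instead of
#       trying to crawl the whole site
# -k    (--convert-links) rewrite links for offline viewing
# -p    (--page-requisites) also fetch the images/CSS pages need
# -w 1  wait one second between requests (be polite)
# -U    send a browser-like User-Agent; some servers refuse
#       wget's default one, which may be why you got told to
#       buzz off
CMD='wget -r -np -k -p -w 1 -U Mozilla/5.0'
echo "$CMD $URL"   # shows the invocation; drop the echo to run it
```

The key flag is -np: with it, starting from the table of contents limits the crawl to pages under that directory rather than the entire server.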


Fred wrote:
> How does one copy a many-page online HTML document? I tried wget, but it
> tries to do the whole website (and is told to buzz off by the server). If the
> document were available in PDF form it would be moot, but someone stuck it on
> their web site in HTML. Y'know, link after bloody link... God only knows how
> many pages.
> I need something like wget that can start at the table of contents and
> retrieve all the pages.
> 
> Fred
> 
> _______________________________________________
> http://ntlug.pmichaud.com/mailman/listinfo/discuss

-- 
Jay Urish W5GM
ARRL Life Member	Denton County ARRL VEC
TXFCA President		N5ERS VP/Trustee
Denton County ARES AEC

Monitoring 444.850 PL-88.5 146.92 PL-110.9
