From sjbaker1 at airmail.net  Mon Dec 31 00:45:05 2001
From: sjbaker1 at airmail.net (Steve Baker)
Date: Mon, 31 Dec 2001 00:45:05 -0600
Subject: [NTLUG:Discuss] Why would a command stop working?
References:
Message-ID: <3C300971.B5ABD3E7@airmail.net>

Rick Matthews wrote:

> Apparently one of my input files contains some garbage.

I'd be surprised if that stopped uniq - unless the garbage is actually
within one of the supposedly duplicated lines and not in the other.

> These should be straight text files and they are being sorted by the
> full line (no options used with sort or uniq). How can I validate the
> format of the input files prior to processing? (I need to check to see
> if there is a grep option to select only text lines...)?

You could use 'tr' to delete characters in the range \000 to \011,
\013 to \037 and \177 to \377. That should leave you with a clean ASCII
file... unless of course some of this 'garbage' is in the form of
printable characters.

I suppose the most likely thing is that one file contains TAB characters
and the other has spaces - or perhaps they have different line endings
(e.g. if one file came from a UNIX/Linux box and the other from a Windoze
machine or an old-style Mac). You can fix those things using 'tr' to
delete the offending characters or translate them into spaces.

There are also options to sort to ignore leading blanks. Uniq has some
options for that kind of thing too - but they are pretty much useless
unless 'sort' has already placed the lines that you wish to eliminate
into consecutive order.

> > Is there some reason your script doesn't just do?:
> > sort -u file1 > file2

BEWARE: Some older versions of 'sort' don't have '-u'.
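[The tr ranges above can be tried directly; a minimal sketch, where the file name and the sample "garbage" bytes are invented for illustration:]

```shell
# Fake a "dirty" input file: a stray control byte (\001) and a DOS
# carriage return (\r) have crept into otherwise duplicate lines.
# The file name and contents are hypothetical.
printf 'apple\001\nbanana\r\napple\n' > file1

# Delete everything outside newline (\012) and printable ASCII, i.e.
# the ranges \000-\011, \013-\037 and \177-\377 suggested above.
tr -d '\000-\011\013-\037\177-\377' < file1 > file1.clean

# The two "apple" records now really are duplicates:
sort -u file1.clean
# prints:
# apple
# banana
```

[After the cleanup, sort -u sees the two 'apple' records as true duplicates.]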
----------------------------- Steve Baker -------------------------------
Mail : WorkMail:
URLs : http://www.sjbaker.org
       http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net
       http://prettypoly.sf.net http://freeglut.sf.net
       http://toobular.sf.net http://lodestone.sf.net

From jferg3 at swbell.net  Mon Dec 31 08:26:35 2001
From: jferg3 at swbell.net (Jason Ferguson)
Date: Mon, 31 Dec 2001 08:26:35 -0600
Subject: [NTLUG:Discuss] PHP Code for Lazy HTML Coders
Message-ID: <1009808795.13540.0.camel@werewolf>

I wanted to share the following PHP code I cobbled together so that if
anyone else had a use for it, they are welcome to it.

I created this code because after creating what feels like MILLIONS of
web pages over the last several years, I'm sick of the first section: the
<!DOCTYPE> declaration and the <head> section. Most of that is pretty
repetitive, so I created a function to automate that section. Instead of
all the repetitive typing for each page, all I do is start each document
like this:

include("doctype.txt")
include("functions.inc");
htmlheader(title, stylesheet location, meta tag stuff);
?>

(Note: for the real HTML purists, I handle the <!DOCTYPE> by including a
text file with only the DOCTYPE in it. That way I can change one small
file and all my pages are updated).

Anyhow, hope someone finds it of use. I tried to comment it pretty well
in case something breaks.

Jason

//
// My Page
//
// version info:
// 0.1: It works for <TITLE> and <META>, but forgot to do style sheets! doh!
// 0.2: Okay, okay, I added the <LINK> tag for stylesheets. Relative and
//      absolute URLs to the stylesheet should work. Just put "" if there
//      is no stylesheet

$j=func_num_args();        // first, we get the number of arguments
$arg_list=func_get_args();

// next, we take the first argument and make it the html title
if ($j>=1) {
    echo "<html>\n";
    echo "<head>\n";
    echo "<title>\n";
    echo $arg_list[0];
    echo "</title>\n";
}

// next, let's link in the stylesheet
if ($j>=2) {
    echo "\n";
}

// next, if there are more arguments, they are assumed to be for META tags.
// Lord knows, I never use the things, but still.
if ($j>1) {
    $j--; // subtract one from the number of arguments. This accounts for
          // the first argument, which is the title for the document.
    $j--; // Do it again to account for the stylesheet

    // if the number of arguments is evenly divisible by four, the function
    // is "clean", so we create the META tag.
    // There is no checking (yet) of the first and third arguments to make
    // sure they are correct.
    if (($j%4)==0) {
        for ($i=2; $i < func_num_args(); $i++) {
            echo "\n";
        }
    }
    // if the number is NOT evenly divisible by four, we can't generate the
    // META tag, so we skip it
    else {
        echo "Attempted to call header function with invalid number of arguments. Number of arguments must be (args*4)+1.\n";
    }
}
echo "</head>\n";
}
?>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 232 bytes
Desc: not available
Url : http://ntlug.org/pipermail/discuss/attachments/20011231/bc750177/attachment.bin

From greg at nas-inet.com  Mon Dec 31 09:56:36 2001
From: greg at nas-inet.com (Greg Edwards)
Date: Mon, 31 Dec 2001 09:56:36 -0600
Subject: [NTLUG:Discuss] List bounces
Message-ID: <3C308AB4.63D47C6D@nas-inet.com>

Has anyone else been having problems sending messages to the list? Over
the last few months I've gotten a lot of "message not delivered"
responses and the message gets put in the "will try for 5 days" queue.
This happens more on the weekends but sometimes during the week as well.
In checking the response I've found that the NTLUG mail server is
receiving the messages and then deferring them. This started after the
list was moved.

--
Greg Edwards
New Age Software, Inc.
http://www.nas-inet.com

From idiotboy at cybermail.net  Mon Dec 31 11:17:25 2001
From: idiotboy at cybermail.net (Jack Snodgrass)
Date: Mon, 31 Dec 2001 11:17:25 -0600
Subject: [NTLUG:Discuss] Apache help
References: <3C2DF8FD.5CC24A56@nas-inet.com>
Message-ID: <014e01c1921f$04b3fab0$6964a8c0@jacks>

----- Original Message -----
From: "Greg Edwards"
To: "ntlug discuss"
Sent: Saturday, December 29, 2001 11:10 AM
Subject: [NTLUG:Discuss] Apache help

> I've got 2 problems that I can't seem to get resolved with Apache.
>
> The first is trying to get named vhosts to work through a NAT setup.
>
> Internally my virtual hosts resolve correctly. However, when I setup my
> virtual hosts for outside access all requests resolve to the first
> defined vhost. I use a single IP to the outside world and I'm routed
> into my webserver through a NAT device. Has anyone been able to make
> this work? How?

do you have

NameVirtualHost xxx.xxx.xxx.xxx

where xxx.xxx.xxx.xxx is your 'real' IP Address

and <VirtualHost> sections like:

<VirtualHost xxx.xxx.xxx.xxx>
    ServerAdmin webmaster at somesite.com
    DocumentRoot /some/document/root
    ServerName somesite.com
    ErrorLog logs/somesite.com-error_log
    TransferLog logs/somesite.com-access_log
</VirtualHost>

the NameVirtualHost with the IP Address says that users connecting to
that address may be doing a virtual host request, and the <VirtualHost>
stuff next maps them to the ServerName that they are trying to get to.

> The second is trying to use a single directory outside of the web space
> to access include files for .shtml files.
>
> I've tried to declare an alias as "/include/" "/path/include/" and then
> using it never resolves the file. When I add a symbolic link to the
> directory that the web page is in and use everything works as
> advertised. Is this an apache bug, or never meant to work this way?
> I am able to prevent access to the include directory but I'd rather not
> have to create a link for every directory that I want to run a .shtml
> file in.
>
> TIA,

#include file can't start with a '/'.
#include virtual can, but it's located off of your document root.

jack

> --
> Greg Edwards
> New Age Software, Inc.
> http://www.nas-inet.com
>
> _______________________________________________
> http://www.ntlug.org/mailman/listinfo/discuss

From pauldy at wantek.net  Mon Dec 31 11:24:42 2001
From: pauldy at wantek.net (Paul Ingendorf)
Date: Mon, 31 Dec 2001 11:24:42 -0600
Subject: [NTLUG:Discuss] PHP Code for Lazy HTML Coders
In-Reply-To: <1009808795.13540.0.camel@werewolf>
Message-ID: <000901c19220$090f7020$7464a8c0@wantek.net>

For some added security to those scripts, I would change the extension of
your include file to something that is parsed if accessed directly, like
functions.php. This way, if someone stumbles across it they don't receive
all your php code. Do this especially if you ever do any db connectivity
in your functions file.

Also, instead of using the echo statements, you will find your code
optimizes better when you use pass-through more. This is not always the
case - most of the time an echo is the same - but for those instances
where it is not, I find it more beneficial to do it this way. Also, by
creating an object and using an array to assign your meta tags, it will
make it a lot easier when creating them. See below.

title) ) { ?>
<?=$html->title?>
css) ) { ?>
meta ) ) {
    for ( $i = 0; $i < count($html->meta); $i++ ) {
        if ( count($html->meta[$i]) == 4 ) { ?>
meta[$i][0]?>="meta[$i][1]?>" meta[$i][2]?>="meta[$i][3]?>">
meta[$i]); $x++ ) { ?>

$html->title = "My Page";
$html->css = "style.css";
$html->meta = ARRAY(
    ARRAY("NAME","keywords","CONTENT","Jason, misc stuff"),
    ARRAY("NAME","keywords","CONTENT","PHP coding example")
);
htmlheader($html);
?>

--
-->> mailto:pauldy at wantek.net
-->> http://www.wantek.net/
Running ....... Cos anything else would be a waste...
`:::' ....... ......
::: * `::. ::'
::: .:: .:.::. .:: .:: `::. :'
::: :: :: :: :: :: :::.
::: .::. .:: ::. `::::. .:' ::.
.:::.....................::' .::::..
-----Original Message-----
From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf Of Jason Ferguson
Sent: Monday, December 31, 2001 8:27 AM
To: NTLUG
Subject: [NTLUG:Discuss] PHP Code for Lazy HTML Coders

I wanted to share the following PHP code I cobbled together so that if
anyone else had a use for it, they are welcome to it.

I created this code because after creating what feels like MILLIONS of
web pages over the last several years, I'm sick of the first section: the
<!DOCTYPE> declaration and the <head> section. Most of that is pretty
repetitive, so I created a function to automate that section. Instead of
all the repetitive typing for each page, all I do is start each document
like this:

(Note: for the real HTML purists, I handle the <!DOCTYPE> by including a
text file with only the DOCTYPE in it. That way I can change one small
file and all my pages are updated).

From pauldy at wantek.net  Mon Dec 31 11:44:51 2001
From: pauldy at wantek.net (Paul Ingendorf)
Date: Mon, 31 Dec 2001 11:44:51 -0600
Subject: [NTLUG:Discuss] Apache help
In-Reply-To: <3C2DF8FD.5CC24A56@nas-inet.com>
Message-ID: <000a01c19222$d975bd80$7464a8c0@wantek.net>

The first problem can probably be fixed by removing the ip that NATs to
you and using the ip of the actual machine. That is, remove any ips that
aren't on your internal network and use whatever ip you see on eth0, for
example.

As for the second question, it is supposed to work like that for some
advanced configs where the security could be compromised. As for the
symbolic link, that should only work if you have the option for following
symlinks turned on in the config file.

Now, for what you want to do, use the virtual form of the include and you
should be able to include the file based off whatever docroot you are in.

--
-->> mailto:pauldy at wantek.net
-->> http://www.wantek.net/
Running ....... Cos anything else would be a waste...
`:::' ....... ......
::: * `::. ::'
::: .:: .:.::. .:: .:: `::. :'
::: :: :: :: :: :: :::.
::: .::. .:: ::. `::::. .:' ::.
.:::.....................::' .::::.. -----Original Message----- From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf Of Greg Edwards Sent: Saturday, December 29, 2001 11:10 AM To: ntlug discuss Subject: [NTLUG:Discuss] Apache help I've got 2 problems that I can't seem to get resolved with Apache. The first is trying to get named vhosts to work through a NAT setup. Internally my virtual hosts resolve correctly. However, when I setup my virtual hosts for outside access all requests resolve to the first defined vhost. I use a single IP to the outside world and I'm routed into my webserver through a NAT device. Has anyone been able to make this work? How? The second is trying to use a single directory outside of the web space to access include files for .shtml files. I've tried to declare an alias as "/include/" "/path/include/" and then using it never resolves the file. When I add a symbolic link to the directory that the web page is in and use everything works as advertised. Is this an apache bug, or never meant to work this way? I am able to prevent access to the include directory but I'd rather not have to create a link for every directory that I want to run a .shtml file in. TIA, -- Greg Edwards New Age Software, Inc. http://www.nas-inet.com _______________________________________________ http://www.ntlug.org/mailman/listinfo/discuss From pauldy at wantek.net Mon Dec 31 11:55:08 2001 From: pauldy at wantek.net (Paul Ingendorf) Date: Mon, 31 Dec 2001 11:55:08 -0600 Subject: [NTLUG:Discuss] ps2pdf/ghostscript Problem In-Reply-To: <1009579439.3921.0.camel@werewolf> Message-ID: <000b01c19224$49b28a00$7464a8c0@wantek.net> The first line seems to have a clue that the font it is trying to use is invalid. I would assume this is due to the way pdf works by including font defs in the document for portability. -- -->> mailto:pauldy at wantek.net -->> http://www.wantek.net/ Running ....... Cos anything else would be a waste... `:::' ....... ...... 
::: * `::. ::' ::: .:: .:.::. .:: .:: `::. :' ::: :: :: :: :: :: :::. ::: .::. .:: ::. `::::. .:' ::. .:::.....................::' .::::.. -----Original Message----- From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf Of Jason Ferguson Sent: Friday, December 28, 2001 4:44 PM To: discuss at ntlug.org Subject: [NTLUG:Discuss] ps2pdf/ghostscript Problem Can anyone make heads or tails of this one? ps2pdf refuses to work for me to convert a .ps file to a .pdf. Here is the output: Error: /invalidfont in findfont Operand stack: F1 --nostringval-- --nostringval-- Helvetica Helvetica Font Helvetica 427433 Helvetica --nostringval-- Courier NimbusMonL-Regu (NimbusMonL-Regu) NimbusMonL-Regu (NimbusMonL-Regu) NimbusMonL-Regu Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop 1 3 %oparray_pop .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- %array_continue --nostringval-- 5 3 %oparray_pop 6 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 9 4 %oparray_pop --nostringval-- --nostringval-- --nostringval-- 5 -1 1 --nostringval-- %for_neg_int_continue --nostringval-- --nostringval-- Dictionary stack: --dict:1029/1476(ro)(G)-- --dict:0/20(G)-- --dict:155/200(L)-- --dict:17/17(ro)(G)-- --dict:1029/1476(ro)(G)-- Current allocation mode is local Last OS error: 2 Current file position is 8078 GNU Ghostscript 6.51: Unrecoverable error, exit code 1 Can anyone help me fix this one? Jason From Rick at Matthews.net Mon Dec 31 15:12:34 2001 From: Rick at Matthews.net (Rick Matthews) Date: Mon, 31 Dec 2001 15:12:34 -0600 Subject: [NTLUG:Discuss] Why would a command stop working? 
In-Reply-To: <3C300971.B5ABD3E7@airmail.net>
Message-ID:

> You could use 'tr' to delete characters in the
> range \000 to \011, \013 to \037 and \177 to \377.

That's what I did and it successfully removed the garbage; it runs
properly again. Thanks for the suggestion!

I appreciate the help!

Rick Matthews

From jferg3 at swbell.net  Mon Dec 31 15:32:41 2001
From: jferg3 at swbell.net (Jason Ferguson)
Date: Mon, 31 Dec 2001 15:32:41 -0600
Subject: [NTLUG:Discuss] PHP Code for Lazy HTML Coders
In-Reply-To: <000901c19220$090f7020$7464a8c0@wantek.net>
References: <000901c19220$090f7020$7464a8c0@wantek.net>
Message-ID: <1009834361.14007.1.camel@werewolf>

I made some changes to my original code based on Paul's suggestions. I
decided to keep the echo statements, though I understand and agree with
what he was saying. By keeping the echo statements, I find my html code
to be a tad more readable. Anyhow, thanks for the help Paul. I submit the
following code to the list for anyone who needs such a thing.

Jason

// Usage:
// $html->title="My Page";
// $html->stylesheet="stylesheet.css";
// $html->meta=ARRAY(
//     ARRAY("NAME","Keywords","CONTENT","Jason stuff"),
//     ARRAY("HTTP-EQUIV","Expires","CONTENT","Wed, 2 Jan 2001 00:00:01 GMT")
// );
//
// Version history:
// 0.1: It works for <TITLE> and <META>, but forgot to do style sheets. Doh!
// 0.2: Okay, okay, I added the <LINK> tag for stylesheets. Relative and
//      absolute URLs to the stylesheet should work. Just put "" if there is
//      no stylesheet
// 0.3: Based on suggestions, I rewrote the code to use an object, meaning I
//      no longer needed all those func_num_args and other functions to check
//      all that stuff that was originally passed

echo "<html>\n";
echo "<head>\n";

// First, we see if a title needs to be set. If so, we generate the
// <title> tag
if (isset($html->title)) {
    echo "<title>$html->title</title>\n";
} else {
    echo "";
}

// Second, we check if there is a stylesheet and generate the tag if
// necessary
if (isset($html->stylesheet)) {
    echo "stylesheet\">\n";
}

// Next, we check the passed meta tag info. This is supposed to be in the
// form of an array. The count of the meta tags is checked to see if it's
// evenly divisible by 4, since there are four parts to the META tag
// (attribute=value, attribute=value)
if (isset($html->meta)) {
    for ($i=0; $i < count($html->meta); $i++) {
        if (count($html->meta)%4==0) {
            echo "<meta $html->meta[$i]=\"";
            $i++;
            echo "$html->meta[$i]\" ";
            $i++;
            echo "$html->meta[$i]=\"";
            $i++;
            echo "$html->meta[$i]\">\n";
        } else {
            echo "Incorrect number of META tag arguments (4 arguments per meta tag).";
        }
    }
}

// Finally, we close the <head> tag.
echo "</head>\n";
}
?>

On Mon, 2001-12-31 at 11:24, Paul Ingendorf wrote:
> For some added security to those scripts I would change the extension of
> your include file to something that is parsed if accessed directly, like
> functions.php. This way if someone stumbles across it they don't receive
> all your php code. Do this especially if you ever do any db connectivity
> in your functions file. Also instead of using the echo statements you
> will find your code optimizes better when you use pass-through more.
> This is not always the case - most of the time an echo is the same - but
> for those instances where it is not I find it more beneficial to do it
> this way. Also by creating an object and using an array to assign your
> meta tags it will make it a lot easier when creating them. See below.
> (snip)
>
> --
> -->> mailto:pauldy at wantek.net
> -->> http://www.wantek.net/
> Running ....... Cos anything else would be a waste...
> `:::' ....... ......
> ::: * `::. ::'
> ::: .:: .:.::. .:: .:: `::. :'
> ::: :: :: :: :: :: :::.
> ::: .::. .:: ::. `::::. .:' ::.
> .:::.....................::' .::::..
> > > -----Original Message----- > From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf > Of Jason Ferguson > Sent: Monday, December 31, 2001 8:27 AM > To: NTLUG > Subject: [NTLUG:Discuss] PHP Code for Lazy HTML Coders > > > I wanted to share the following PHP code I cobbled together so that if > anyone else had a use for it, they are welcome to it. > > I created this code because after creating what feels like MILLIONS of > web pages over the last several years, Im sick of the first section: the > declaration and the section. Most of that is pretty > repetitive, so I created a function to automate that section. Instead of > all the repetitive type for each page, all I do is start each document > like this: > > (Note: for the real HTML purists, I handle the by including a > text file with only the DOCTYPE in it. That way I can change one small > file and all my pages are updated). > > include("doctype.txt") > include("functions.inc"); > htmlheader(title, stylesheet location, meta tag stuff); > ?> > > > > > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 232 bytes Desc: not available Url : http://ntlug.org/pipermail/discuss/attachments/20011231/9cd27f14/attachment.bin From sjbaker1 at airmail.net Mon Dec 31 15:57:51 2001 From: sjbaker1 at airmail.net (Steve Baker) Date: Mon, 31 Dec 2001 15:57:51 -0600 Subject: [NTLUG:Discuss] Why would a command stop working? References: Message-ID: <3C30DF5F.96267E66@airmail.net> Rick Matthews wrote: > > > You could use 'tr' to delete characters in the > > range \000 to \011, \013 to \037 and \177 to \377. > > That's what I did and it successfully removed the garbage; it runs > properly again. Thanks for the suggestion! It was probably a switch from spaces to tabs in one of the files that did the dirt. 
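[The tabs-versus-spaces theory is easy to check with a toy file; a small sketch, with file name and data invented for illustration:]

```shell
# Two records that look identical but differ in whitespace: one uses a
# TAB, the other a space.  (Hypothetical sample data.)
printf 'a\tb\na b\n' > tabs.txt

# To sort/uniq these are still two different lines:
sort -u tabs.txt | wc -l    # counts 2 lines

# Translating TABs (\011) into spaces first makes them collapse:
tr '\011' ' ' < tabs.txt | sort -u
# prints:
# a b
```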
----------------------------- Steve Baker ------------------------------- Mail : WorkMail: URLs : http://www.sjbaker.org http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net http://prettypoly.sf.net http://freeglut.sf.net http://toobular.sf.net http://lodestone.sf.net From Rick at Matthews.net Mon Dec 31 16:18:21 2001 From: Rick at Matthews.net (Rick Matthews) Date: Mon, 31 Dec 2001 16:18:21 -0600 Subject: [NTLUG:Discuss] Why would a command stop working? In-Reply-To: <3C30DF5F.96267E66@airmail.net> Message-ID: > It was probably a switch from spaces to tabs in one > of the files that did the dirt. Actually, I scanned across the 3.5 meg file and it looks like about half of the records terminate with '0D 0A' and the other half with just '0A'. One of my two external sources probably sent me a windows file. But that's OK, the code's in there now! -----Original Message----- From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf Of Steve Baker Sent: Monday, December 31, 2001 3:58 PM To: discuss at ntlug.org Subject: Re: [NTLUG:Discuss] Why would a command stop working? Rick Matthews wrote: > > > You could use 'tr' to delete characters in the > > range \000 to \011, \013 to \037 and \177 to \377. > > That's what I did and it successfully removed the garbage; it runs > properly again. Thanks for the suggestion! It was probably a switch from spaces to tabs in one of the files that did the dirt. 
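[Those mixed line endings can be normalized before the sort; a small sketch, where the file name and the records are hypothetical:]

```shell
# Simulate the situation above: some records end in CR+LF ('0D 0A'),
# some in bare LF ('0A').  File name and data are made up.
printf 'widget\r\nwidget\ngadget\r\n' > mixed.txt

# To uniq, "widget\r" and "widget" are different lines; deleting the
# carriage returns (\015) first lets the duplicates collapse:
tr -d '\015' < mixed.txt | sort -u
# prints:
# gadget
# widget
```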
----------------------------- Steve Baker -------------------------------
Mail : WorkMail:
URLs : http://www.sjbaker.org
       http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net
       http://prettypoly.sf.net http://freeglut.sf.net
       http://toobular.sf.net http://lodestone.sf.net

_______________________________________________
http://www.ntlug.org/mailman/listinfo/discuss

From pac at fortuitous.com  Tue Jan 1 14:23:10 2002
From: pac at fortuitous.com (pac@fortuitous.com)
Date: Tue, 1 Jan 2002 14:23:10 -0600
Subject: [NTLUG:Discuss] MS Settlement Comments
Message-ID: <20020101142310.A12532@bistro.marx>

You can comment on the M$ settlement. Please review the settlement
http://www.usdoj.gov/atr/cases/f9400/9495.htm
and comment comment comment:

-------------------------------------------------------
Submitting Comments

Before you submit comments about the settlement, the Department of
Justice recommends that you review the documents related to the
settlement.

You may submit comments about the settlement by e-mail, fax, or mail.

Note: Given recent mail delivery interruptions in Washington, DC, and
current uncertainties involving the resumption of timely mail service,
the Department of Justice strongly encourages that comments be submitted
via e-mail or fax.

E-mail
    microsoft.atr at usdoj.gov
    In the Subject line of the e-mail, type Microsoft Settlement.
Fax
    1-202-307-1454 or 1-202-616-9937
Mail
    Renata B. Hesse
    Antitrust Division
    U.S. Department of Justice
    601 D Street NW
    Suite 1200
    Washington, DC 20530-0001
-------------------------------------------------------

Thanks,

-Phil Carinhas
--
.--------------------------------------------------------.
| Dr. Philip A. Carinhas       | pac at fortuitous.com   |
| Fortuitous Technologies Inc. | http://fortuitous.com   |
| Linux Consulting & Training  | Tel : 1-512-467-2154    |
`--------------------------------------------------------'

From greg at nas-inet.com  Tue Jan 1 18:48:55 2002
From: greg at nas-inet.com (Greg Edwards)
Date: Tue, 01 Jan 2002 18:48:55 -0600
Subject: [NTLUG:Discuss] vhost problem
Message-ID: <3C3258F7.E9482313@nas-inet.com>

Thanks for the assist on this. The problem was solved by using the *
form of the directive as opposed to the IP address. Now on to the
#include issue.

Thanks again!
--
Greg Edwards

From greg at nas-inet.com  Mon Dec 31 16:43:01 2001
From: greg at nas-inet.com (Greg Edwards)
Date: Mon, 31 Dec 2001 16:43:01 -0600
Subject: [NTLUG:Discuss] Apache help
References: <000a01c19222$d975bd80$7464a8c0@wantek.net>
Message-ID: <3C30E9F5.73CE8A9A@nas-inet.com>

Paul Ingendorf wrote:
>
> The first problem can probably be fixed by removing the ip that NATs to
> you and using the ip of the actual machine. That is remove any ips that
> aren't on your internal network and use whatever ip you see on eth0 for
> example.
>
> As for the second question it is supposed to work like that for some
> advanced configs where the security could be compromised. As for the
> symbolic link that should only work if you have the option for follow
> symlinks turned on in the config file.
>
> Now for what you want to do use the virtual form of the include and you
> should be able to include the file based off whatever docroot you are in.
>
> --
> -->> mailto:pauldy at wantek.net
> -->> http://www.wantek.net/
> Running ....... Cos anything else would be a waste...

I took a look at "virtual" after you and Jack mentioned it and I'll give
that a try. I've always used "file", and now, after rereading the
"Apache - The Definitive Guide" section on include again, I see what
you're both talking about.

Thanks,
--
Greg Edwards
New Age Software, Inc.
http://www.nas-inet.com From greg at nas-inet.com Mon Dec 31 16:32:41 2001 From: greg at nas-inet.com (Greg Edwards) Date: Mon, 31 Dec 2001 16:32:41 -0600 Subject: [NTLUG:Discuss] Apache help References: <3C2DF8FD.5CC24A56@nas-inet.com> <014e01c1921f$04b3fab0$6964a8c0@jacks> Message-ID: <3C30E789.28D3B00A@nas-inet.com> Jack Snodgrass wrote: > > > I've got 2 problems that I can't seem to get resolved with Apache. > > > > The first is trying to get named vhosts to work through a NAT setup. > > > > do you have > > NameVirtualHost xxx.xxx.xxx.xxx > where xxx.xxx.xxx.xxx is your 'real' IP Address > > and sections like: > > ServerAdmin webmaster at somesite.com > lDocumentRoot /some/document/root > ServerName somesite.com > ErrorLog logs/somesite.com-error_log > TransferLog logs/somesite.com-access_log > > > the namevirtualhost with the IP Address says that users connecting to that > address > may be doing a virtutalhost request and the > stuff next > maps them to the servername that they are trying to get too. > > > The second is trying to use a single directory outside of the web space > > to access include files for .shtml files. > > > > #include file can't start with a '/'. > #inclide virutal can, but it's located off of your document root. > > jack > > > -- [vhost] I've tried with both the internal and external IP declared as NameVirtualHost xxx.xxx.xxx.xxx and didn't make any difference. I haven't tried with section yet. Using with NameVirtualHost xxx.xxx.xxx.xxx was the setup I had working internally. [include] My understanding was that any directive could use an Alias to access another directory outside of the webspace. Is there a limit to which directives can use an Alias? I'm able to Alias my icons, cgi-bin (of course) and even a directory containing my Error pages. -- Greg Edwards New Age Software, Inc. 
http://www.nas-inet.com

From madhat at unspecific.com  Wed Jan 2 07:37:38 2002
From: madhat at unspecific.com (MadHat)
Date: Wed, 02 Jan 2002 07:37:38 -0600
Subject: [NTLUG:Discuss] Apache help
In-Reply-To: <3C30E789.28D3B00A@nas-inet.com>
References: <3C2DF8FD.5CC24A56@nas-inet.com> <014e01c1921f$04b3fab0$6964a8c0@jacks>
Message-ID: <5.1.0.14.0.20020102072919.04fdb670@pop.unspecific.com>

At 04:32 PM 12/31/2001 -0600, Greg Edwards wrote:
>Jack Snodgrass wrote:
> >
> > > I've got 2 problems that I can't seem to get resolved with Apache.
> > >
> > > The first is trying to get named vhosts to work through a NAT setup.
> > >
> >
> > do you have
> >
> > NameVirtualHost xxx.xxx.xxx.xxx
> > where xxx.xxx.xxx.xxx is your 'real' IP Address
> >
> > and sections like:
> >
> > ServerAdmin webmaster at somesite.com
> > DocumentRoot /some/document/root
> > ServerName somesite.com
> > ErrorLog logs/somesite.com-error_log
> > TransferLog logs/somesite.com-access_log
> >
> > the NameVirtualHost with the IP Address says that users connecting to
> > that address may be doing a virtual host request and the stuff next
> > maps them to the ServerName that they are trying to get to.
> >
> > > The second is trying to use a single directory outside of the web
> > > space to access include files for .shtml files.
> > >
> >
> > #include file can't start with a '/'.
> > #include virtual can, but it's located off of your document root.
> >
> > jack
> >
>
>[vhost]
>
>I've tried with both the internal and external IP declared as
>NameVirtualHost xxx.xxx.xxx.xxx and didn't make any difference. I
>haven't tried with section yet. Using
>with NameVirtualHost xxx.xxx.xxx.xxx was the setup I had working
>internally.

Looks like you are mixing name-based and IP-based virtual hosting. You
can also use * as a wildcard...

NameVirtualHost *

<VirtualHost *>
    ServerName blah.dah.tld
    ServerAlias www.blah.dah.tld
    DocumentRoot /path/to/files
    ...
</VirtualHost>
http://httpd.apache.org/docs/vhosts/

>[include]
>
>My understanding was that any directive could use an Alias to access
>another directory outside of the webspace. Is there a limit to which
>directives can use an Alias? I'm able to Alias my icons, cgi-bin (of
>course) and even a directory containing my Error pages.

include file includes a *FILE*, not a URL. The Alias directive sets up an
alias for a URL. When calling a file, you use a real file path, not an
aliased URL; include virtual is what you want.

http://httpd.apache.org/docs/howto/ssi.html

--
MadHat at unspecific.com

From fredjame at concentric.net  Wed Jan 2 09:46:31 2002
From: fredjame at concentric.net (Fred James)
Date: Wed, 02 Jan 2002 09:46:31 -0600
Subject: [NTLUG:Discuss] MS Settlement Comments
References: <20020101142310.A12532@bistro.marx>
Message-ID: <3C332B57.1040501@concentric.net>

Does someone understand that better than I? It sounds like a slap on the
wrist; an admonishment to play nice, and a lot of opportunity to claim
they are playing nice, even if they aren't; and no breakup of the corp.
Is that it, basically?

pac at fortuitous.com wrote:
> You can comment on the M$ settlement. Please review the
> settlement http://www.usdoj.gov/atr/cases/f9400/9495.htm
> and comment comment comment:
> -------------------------------------------------------
>
> Submitting Comments
>
> Before you submit comments about the settlement, the
> Department of Justice recommends that you review the
> documents related to the settlement.
>
> You may submit comments about the settlement by e-mail,
> fax, or mail.
>
> Note: Given recent mail delivery interruptions
> in Washington, DC, and current uncertainties
> involving the resumption of timely mail service,
> the Department of Justice strongly
> encourages that comments be submitted
> via e-mail or fax.
>
> E-mail
>     microsoft.atr at usdoj.gov
>     In the Subject line of the e-mail, type Microsoft Settlement.
> Fax
>     1-202-307-1454 or 1-202-616-9937
> Mail
>     Renata B. Hesse
>     Antitrust Division
>     U.S. Department of Justice
>     601 D Street NW
>     Suite 1200
>     Washington, DC 20530-0001
> -------------------------------------------------------
>
> Thanks,
>
> -Phil Carinhas
> --
> .--------------------------------------------------------.
> | Dr. Philip A. Carinhas       | pac at fortuitous.com   |
> | Fortuitous Technologies Inc. | http://fortuitous.com   |
> | Linux Consulting & Training  | Tel : 1-512-467-2154    |
> `--------------------------------------------------------'
>
> _______________________________________________
> http://www.ntlug.org/mailman/listinfo/discuss

--
...make every program a filter...

From bbyron at radit.com  Wed Jan 2 10:54:11 2002
From: bbyron at radit.com (Bob Byron)
Date: Wed, 2 Jan 2002 10:54:11 -0600
Subject: [NTLUG:Discuss] Why would a command stop working?
References:
Message-ID: <002901c193ae$25f684f0$0301a8c0@white>

Rick,

Just to make sure the entries really are unique, you could try
surrounding each line with /'s (slashes). That would help to ensure that
they aren't different due to a space at the end. A command like this:

cat file4 | sed -e 's/.*/\/&\//'

That would help you to visually inspect the file and ensure the lines
truly are or are not unique.

Bob

----- Original Message -----
From: "Rick Matthews"
To:
Sent: Sunday, December 30, 2001 10:29 PM
Subject: RE: [NTLUG:Discuss] Why would a command stop working?

> ----- Val W. Harris wrote: -----
>
> > Did the processes generating the lines in file1 change?
> > If so, perhaps there's a stray blank at the end of a line.
>
> I don't control the processes that generate file1. I apologize for not
> being more specific before. I receive 2 files from external sources in a
> common format, and I maintain a third file locally (in the same format).
> The script currently combines them as follows:
>
> cat file1 file2 file3 | sort | uniq > file4
>
> As Steve Baker questioned earlier, my test this evening was with over 3
> meg of data.
I cut it down to about 50 lines and 'cat file1 | sort | > uniq > file2' worked properly. > > Apparently one of my input files contains some garbage. These should be > straight text files and they are being sorted by the full line (no > options used with sort or uniq). How can I validate the format of the > input files prior to processing? (I need to check to see if there is a > grep option to select only text lines...)? > > > Are you sorting entire lines or just one field of the line? > > Entire lines. > > > Is there some reason your script doesn't just do?: > > sort -u file1 > file2 > > Learning curve, and multiple input files. Shoot, I was tickled pink when > I found that I could use the cat|sort|uniq method! I've since learned of > the unique option in sort (merge, too), in fact, I also tested with > the -u option earlier. Using the 3.5 meg file the following two command > lines produced identical results, with the output file still containing > duplicates: > > cat file1 | sort | uniq > file2 > sort -u file1 > file2 > > Thanks for your help! > > Rick Matthews > > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss From cjcox at acm.org Wed Jan 2 13:29:07 2002 From: cjcox at acm.org (Chris Cox) Date: Wed, 02 Jan 2002 13:29:07 -0600 Subject: [NTLUG:Discuss] DDNS (dhcp 3.0) info Message-ID: <3C335F83.1060905@acm.org> I still haven't finished the presentation yet... but for those who want to see some dhcpd.conf info for DDNS.... http://www.biplane.com.au/~kauer/miscellaneous/dhcp/index.html Unfortunately, this site is not well advertised and the parent is restricted (not available yet). 
Hope this helps, Chris From sjbaker1 at airmail.net Wed Jan 2 17:59:41 2002 From: sjbaker1 at airmail.net (Steve Baker) Date: Wed, 02 Jan 2002 17:59:41 -0600 Subject: [NTLUG:Discuss] MS Settlement Comments References: <20020101142310.A12532@bistro.marx> <3C332B57.1040501@concentric.net> Message-ID: <3C339EED.9B5EFCA@airmail.net> Fred James wrote: > > Does someone understand that better than I? > It sounds like a slap on the wrist; an admonishment to play nice, and a > lot of opportunity to claim they are playing nice, even if they aren't; > and no breakup of the corp. Is that is basically? Yep. And some nonsense about if during the 5 years for which this lasts, they should happen to break the rules, the period gets extended out to 7 years (but if they can ignore the rules for the first five years - as they have with every injunction that's been slapped on them - then presumably they'll have no compunction in ignoring them for another two years). Basically, they've been let off the hook. ----------------------------- Steve Baker ------------------------------- Mail : WorkMail: URLs : http://www.sjbaker.org http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net http://prettypoly.sf.net http://freeglut.sf.net http://toobular.sf.net http://lodestone.sf.net From cjcox at acm.org Thu Jan 3 11:27:51 2002 From: cjcox at acm.org (Chris Cox) Date: Thu, 03 Jan 2002 11:27:51 -0600 Subject: [NTLUG:Discuss] Plea for Presentations for 2002 Message-ID: <3C349497.6000200@acm.org> (direct replies to cjcox at acm.org PLEASE) It's 2002 and time for me to plead for NTLUG presenters!! Presentations can cover any Linux related subject. They can be beginner oriented, advanced... whatever. You could demonstrate an application under Linux... or you could show us a tough configuration setup.... or perhaps you might present on how Linux is changing the world! Anyway... please consider volunteering. It's a great way to help sharpen your presentation skills. 
I haven't seen anyone laughed off of the platform yet (and I don't anticipate that ever happening btw).... so give me holler and let me know if you want to do a presentation in 2002. Send email replies to cjcox at acm.org Thanks, Chris From tim at coderite.com Thu Jan 3 16:05:37 2002 From: tim at coderite.com (John T. Willis) Date: Thu, 3 Jan 2002 16:05:37 -0600 Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! In-Reply-To: <3C349497.6000200@acm.org> Message-ID: http://www.merge933.net/ From matt.cald at gte.net Thu Jan 3 16:06:01 2002 From: matt.cald at gte.net (Matt Caldwell) Date: Thu, 3 Jan 2002 16:06:01 -0600 Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! In-Reply-To: Message-ID: <002d01c194a2$d6f3cfc0$6700a8c0@mcaldwel> At 5pm today 933fm closes the book on a project we are all very proud of: Merge Radio. Merge 933.net tried something new..."merging" traditional radio with the future of Internet technology. But, as with many dot.com companies we might have been ahead of our time. We want to thank everyone who supported Merge Radio/Merge 933.net over the past two years and we appreciate all the kind feedback received on what Merge was doing. The bottom line is that there just weren't enough of us to make the station viable. When Merge 933fm changes at 5pm today we hope you will give it a try. What we are going to do will have the same passion and same commitment to you as we did with Merge. The name might be different, the music different, but the love of radio and the love of the community will still be there. And as always we would love your feedback. In a couple of days drop us an e-mail to give us your thoughts. Matt, thanks for being with us in the past and we are looking forward to rocking with you in the future, Bossman Scott Jeff K Yvonne Keith Andrews -----Original Message----- From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org] On Behalf Of John T. 
Willis Sent: Thursday, January 03, 2002 4:06 PM To: discuss at ntlug.org Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! http://www.merge933.net/ _______________________________________________ http://www.ntlug.org/mailman/listinfo/discuss From matt.cald at gte.net Thu Jan 3 16:06:37 2002 From: matt.cald at gte.net (Matt Caldwell) Date: Thu, 3 Jan 2002 16:06:37 -0600 Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! In-Reply-To: Message-ID: <002e01c194a2$edce0800$6700a8c0@mcaldwel> I'm really bummed about this ( If this is in fact true ) We shall see at 5 PM. --Matt -----Original Message----- From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org] On Behalf Of John T. Willis Sent: Thursday, January 03, 2002 4:06 PM To: discuss at ntlug.org Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! http://www.merge933.net/ _______________________________________________ http://www.ntlug.org/mailman/listinfo/discuss From tim at coderite.com Thu Jan 3 16:15:26 2002 From: tim at coderite.com (John T. Willis) Date: Thu, 3 Jan 2002 16:15:26 -0600 Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! In-Reply-To: <002d01c194a2$d6f3cfc0$6700a8c0@mcaldwel> Message-ID: Ahh - thanks for the press release.... let's hope it was better than it was before... I'll give it a try, but if I puke, I'll turn it back off... again. Sorry to be so Off Topic... I'll go away now.. > -----Original Message----- > From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf > Of Matt Caldwell > Sent: January 03, 2002 16:06 > To: discuss at ntlug.org > Subject: RE: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! > > > > At 5pm today 933fm closes the book on a project we are all very proud > of: Merge Radio. > > Merge 933.net tried something new..."merging" traditional radio with the > future of Internet technology. But, as with many dot.com companies we > might have been ahead of our time. 
> > We want to thank everyone who supported Merge Radio/Merge 933.net over > the past two years and we appreciate all the kind feedback received on > what Merge was doing. The bottom line is that there just weren't enough > of us to make the station viable. > > When Merge 933fm changes at 5pm today we hope you will give it a try. > What we are going to do will have the same passion and same commitment > to you as we did with Merge. The name might be different, the music > different, but the love of radio and the love of the community will > still be there. And as always we would love your feedback. > > In a couple of days drop us an e-mail to give us your thoughts. > > Matt, thanks for being with us in the past and we are looking forward to > rocking with you in the future, > > Bossman Scott > Jeff K > Yvonne > Keith Andrews > > -----Original Message----- > From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org] On Behalf > Of John T. Willis > Sent: Thursday, January 03, 2002 4:06 PM > To: discuss at ntlug.org > Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! > > > http://www.merge933.net/ > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss > > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss From tim at coderite.com Thu Jan 3 16:22:33 2002 From: tim at coderite.com (John T. Willis) Date: Thu, 3 Jan 2002 16:22:33 -0600 Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! In-Reply-To: Message-ID: One last thing, however. I actually DO like what they are playing right now. Much better than what they played before. > Ahh - thanks for the press release.... let's hope it was better > than it was > before... I'll give it a try, but if I puke, I'll turn it back > off... again. > > Sorry to be so Off Topic... I'll go away now.. 
> > > -----Original Message----- > > From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf > > Of Matt Caldwell > > Sent: January 03, 2002 16:06 > > To: discuss at ntlug.org > > Subject: RE: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! > > > > > > > > At 5pm today 933fm closes the book on a project we are all very proud > > of: Merge Radio. > > > > Merge 933.net tried something new..."merging" traditional radio with the > > future of Internet technology. But, as with many dot.com companies we > > might have been ahead of our time. > > > > We want to thank everyone who supported Merge Radio/Merge 933.net over > > the past two years and we appreciate all the kind feedback received on > > what Merge was doing. The bottom line is that there just weren't enough > > of us to make the station viable. > > > > When Merge 933fm changes at 5pm today we hope you will give it a try. > > What we are going to do will have the same passion and same commitment > > to you as we did with Merge. The name might be different, the music > > different, but the love of radio and the love of the community will > > still be there. And as always we would love your feedback. > > > > In a couple of days drop us an e-mail to give us your thoughts. > > > > Matt, thanks for being with us in the past and we are looking forward to > > rocking with you in the future, > > > > Bossman Scott > > Jeff K > > Yvonne > > Keith Andrews > > > > -----Original Message----- > > From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org] On Behalf > > Of John T. Willis > > Sent: Thursday, January 03, 2002 4:06 PM > > To: discuss at ntlug.org > > Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! 
> > > > > > http://www.merge933.net/ > > > > _______________________________________________ > > http://www.ntlug.org/mailman/listinfo/discuss > > > > > > _______________________________________________ > > http://www.ntlug.org/mailman/listinfo/discuss > > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss From greg at nas-inet.com Thu Jan 3 20:08:03 2002 From: greg at nas-inet.com (Greg Edwards) Date: Thu, 03 Jan 2002 20:08:03 -0600 Subject: [NTLUG:Discuss] Apache help References: <3C2DF8FD.5CC24A56@nas-inet.com> <014e01c1921f$04b3fab0$6964a8c0@jacks> <5.1.0.14.0.20020102072919.04fdb670@pop.unspecific.com> Message-ID: <3C350E83.D90D9FEC@nas-inet.com> MadHat wrote: > > >[include] > > > >My understanding was that any directive could use an Alias to access > >another directory outside of the webspace. Is there a limit to which > >directives can use an Alias? I'm able to Alias my icons, cgi-bin (of > >course) and even a directory containing my Error pages. > > include file includes a *FILE* not a URL. The Alias directive sets up an > alias for a URL. When calling a file, you use > > or > > not > > > include virtual is what you want > > http://httpd.apache.org/docs/howto/ssi.html > > -- > MadHat at unspecific.com Thanks this did the trick. Now I can get my includes out of the web space and use them from all my virtual hosts. -- Greg Edwards New Age Software, Inc. http://www.nas-inet.com From mhtexcollins at austin.rr.com Thu Jan 3 21:03:13 2002 From: mhtexcollins at austin.rr.com (Michael H. Collins) Date: Thu, 03 Jan 2002 21:03:13 -0600 Subject: [NTLUG:Discuss] Ok - Who's guilty for this!! HA!! References: Message-ID: <3C351B71.7000601@austin.rr.com> There is always kpig.com with more Texas music than any Texas station. John T. Willis wrote: > Ahh - thanks for the press release.... let's hope it was better than it was > before... I'll give it a try, but if I puke, I'll turn it back off... again. 
> > Sorry to be so Off Topic... I'll go away now.. > -- Michael H. Collins http://www.linuxlink.com Admiral Penguinista Navy International Portal Hosting? http:postnuke-hosting.com From Rick at Matthews.net Fri Jan 4 08:18:17 2002 From: Rick at Matthews.net (Rick Matthews) Date: Fri, 4 Jan 2002 08:18:17 -0600 Subject: [NTLUG:Discuss] Script question Message-ID: Would someone please help me over another "hump"? I'm still learning... I maintain the current version and 3 previous versions of the file "domains". The files are all in the same directory and the file name includes a unique date/time stamp and a version number. For example: domains.2002-01-04_050500.0 domains.2002-01-02_052515.1 domains.2002-01-01_043025.2 domains.2001-12-30_050500.3 '0' is the current version and '3' is the oldest. When a new file is created I want to "age" the files and create an open slot for a new "0" file (3 goes away, 2 becomes 3, 1 becomes 2, and 0 becomes 1). I don't know the date/time stamps, so this is what I tried in my script: mv -f /archive/domains.*.2 /archive/domains.*.3 mv -f /archive/domains.*.1 /archive/domains.*.2 mv -f /archive/domains.*.0 /archive/domains.*.1 The asterisk apparently acts as a wildcard during the file selection part of the command, but it is acting as a literal in the renaming porting of the command (I end up with files named domains.*.3). How can I rewrite this to accomplish the task? Thanks in advance! Rick Matthews From fredjame at concentric.net Fri Jan 4 08:55:30 2002 From: fredjame at concentric.net (Fred James) Date: Fri, 04 Jan 2002 08:55:30 -0600 Subject: [NTLUG:Discuss] Script question References: Message-ID: <3C35C262.2030600@concentric.net> I believe you could use "cut" to get what you need to build the new file name. Rick Matthews wrote: > Would someone please help me over another "hump"? I'm still learning... > > I maintain the current version and 3 previous versions of the file > "domains". 
The files are all in the same directory and the file name > includes a unique date/time stamp and a version number. For example: > > domains.2002-01-04_050500.0 > domains.2002-01-02_052515.1 > domains.2002-01-01_043025.2 > domains.2001-12-30_050500.3 > > '0' is the current version and '3' is the oldest. > > When a new file is created I want to "age" the files and create an open > slot for a new "0" file (3 goes away, 2 becomes 3, 1 becomes 2, and 0 > becomes 1). I don't know the date/time stamps, so this is what I tried > in my script: > > mv -f /archive/domains.*.2 /archive/domains.*.3 > mv -f /archive/domains.*.1 /archive/domains.*.2 > mv -f /archive/domains.*.0 /archive/domains.*.1 > > The asterisk apparently acts as a wildcard during the file selection > part of the command, but it is acting as a literal in the renaming > porting of the command (I end up with files named domains.*.3). > > How can I rewrite this to accomplish the task? > > Thanks in advance! > > Rick Matthews > > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss > > > -- ...make every program a filter... From pac at fortuitous.com Fri Jan 4 10:08:35 2002 From: pac at fortuitous.com (pac@fortuitous.com) Date: Fri, 4 Jan 2002 10:08:35 -0600 Subject: [NTLUG:Discuss] [Gary.McMillian@ieee.org: IEEE COM/SP Meeting - Adding Resiliency to IP/MPLS Routers and?Switches] Message-ID: <20020104100835.A6637@bistro.marx> Below is this months Austin IEEE COMSOC meeting announcement. Looks like a good topic. Enjoy. -Phil Carinhas -- .--------------------------------------------------------. | Dr. Philip A. Carinhas | pac at fortuitous.com | | Fortuitous Technologies Inc. 
| http://fortuitous.com | | Linux Consulting & Training | Tel : 1-512-467-2154 | `--------------------------------------------------------' ----- Forwarded message from Gary McMillian ----- Subject: IEEE COM/SP Meeting - Adding Resiliency to IP/MPLS Routers and Switches From: Gary McMillian Date: Thu, 03 Jan 2002 22:47:18 -0600 Happy New Year! The Central Texas Chapter of the IEEE Communications/Signal Processing Society invites you to attend our January meeting and a presentation by Tom Meehan of Redback Networks. Please note this presentation was originally scheduled for November, but was cancelled due to severe thunderstorms in the Austin area. DATE January 17, 2002 6:30 PM - 9:00 PM LOCATION SBC Technology Resources, Inc. (SBC-TRI) 9505 Arboretum Blvd, Austin, Texas TOPIC Adding Resiliency to IP/MPLS Routers and Switches SPEAKER Tom Meehan Director of Product Management SmartEdge Product Line Redback Networks, Inc Mr. Meehan joined Redback with the acquisition of Siara Systems, who originally developed the SmartEdge platform. Prior to joining Redback Networks, he was IP Product Manager at Wellfleet/Bay Networks. Prior to Bay Networks, he worked on the end-user side, designing and developing IP networks for various Wall Street institutions and trading houses. Thomas began his career as a software developer of data communications protocols. ABSTRACT As the Internet exploded through the 1990's, most research and development was invested in protocols and implementations that allowed the Internet to continue to scale to larger amount of routes with expanded connectivity. Less attention was paid to providing inherent resilience and robustness to IP routers. More recently, research and development has been focusing on the development of routing protocols, extensions and implementations that bring more reliability, resiliency and operational flexibility to IP/MPLS routers and switches. 
The presentation will review current standards work in this area and will examine state of the art implementations of "carrier grade" IP routers. ********* The presentation is open to chapter members and non-members. ********* Free food and beverages will be served to attendees. RSVP If you plan to attend, please send e-mail to Howard Headrick at hfrjr at swbell.net for planning purposes. UPCOMING MEETINGS If your company would like to give a presentation on a communications or signal processing topic, please contact Howard Headrick or Gary McMillian. MONTHLY MEETING NOTICE The Chapter meets on the 3rd Thursday of each month at 6:30 PM at SBC Technology Resources, Inc. (SBC-TRI) located at 9505 Arboretum Blvd in Austin, Texas. Please feel free to post meeting notices and invite guests. SOCIETY MEMBERSHIP We encourage you to join the Communications and Signal Processing Societies at http://www.ieee.org/membership/join/. If you're already a member, please encourage your associates to join one or both societies. IEEE membership provides a variety of benefits to its members ranging from technical publications to conferences to career development assistance to financial services. IEEE MEMBERSHIP RENEWAL AND UPDATE Current members can renew their membership and update their information (including e-mail address) at http://www.ieee.org/membership/coa.html. CTC IEEE COM/SP SOCIETY 2001 OFFICERS Howard Headrick Chairman hfrjr at swbell.net Booker Tyrone Vice-Chairman btyrone at tri.sbc.com Gary McMillian Secretary gary.mcmillian at ieee.org Philip Wisseman Treasurer wisseman at tri.sbc.com Mark Brockman Dir, Student Activities mark.brockman at sbc.com & Speakers Bureau CTC IEEE COM/SP SOCIETY DISTRIBUTION LIST To be added to or deleted from the chapter mailing list, please send name and e-mail address to Gary McMillian at gary.mcmillian at ieee.org. ----- End forwarded message ----- From tim at coderite.com Fri Jan 4 10:17:31 2002 From: tim at coderite.com (John T. 
Willis) Date: Fri, 4 Jan 2002 10:17:31 -0600 Subject: [NTLUG:Discuss] Microsoft-Free In-Reply-To: <20020104100835.A6637@bistro.marx> Message-ID: There's a good article in CIO this month (which is rare) about the best way to switch over to a Microsoft-Free enterprise. Just thought I'd share... TimW From cjcox at acm.org Fri Jan 4 10:42:16 2002 From: cjcox at acm.org (Chris Cox) Date: Fri, 04 Jan 2002 10:42:16 -0600 Subject: [NTLUG:Discuss] Script question References: <3C35C262.2030600@concentric.net> Message-ID: <3C35DB68.7080804@acm.org> Fred James wrote: > I believe you could use "cut" to get what you need to build the new > file name. > > Rick Matthews wrote: > >> Would someone please help me over another "hump"? I'm still learning... >> >> I maintain the current version and 3 previous versions of the file >> "domains". The files are all in the same directory and the file name >> includes a unique date/time stamp and a version number. For example: >> >> domains.2002-01-04_050500.0 >> domains.2002-01-02_052515.1 >> domains.2002-01-01_043025.2 >> domains.2001-12-30_050500.3 >> >> '0' is the current version and '3' is the oldest. >> >> When a new file is created I want to "age" the files and create an open >> slot for a new "0" file (3 goes away, 2 becomes 3, 1 becomes 2, and 0 >> becomes 1). I don't know the date/time stamps, so this is what I tried >> in my script: >> >> mv -f /archive/domains.*.2 /archive/domains.*.3 >> mv -f /archive/domains.*.1 /archive/domains.*.2 >> mv -f /archive/domains.*.0 /archive/domains.*.1 > This is somewhat unique to your situation, due to your file naming.. but you can probably safely eval the statements prior to execution. Don't quote the arguments to echo below.... 
cmds=`echo mv -f ....; echo mv -f ...; echo mv -f ...; echo mv -f ..; ` echo "$cmds" | sh >> >> The asterisk apparently acts as a wildcard during the file selection >> part of the command, but it is acting as a literal in the renaming >> porting of the command (I end up with files named domains.*.3). >> >> How can I rewrite this to accomplish the task? >> >> Thanks in advance! >> >> Rick Matthews >> >> >> _______________________________________________ >> http://www.ntlug.org/mailman/listinfo/discuss >> >> >> > > From madhat at unspecific.com Fri Jan 4 12:00:43 2002 From: madhat at unspecific.com (MadHat) Date: Fri, 04 Jan 2002 12:00:43 -0600 Subject: [NTLUG:Discuss] Script question In-Reply-To: Message-ID: <5.1.0.14.0.20020104115456.034bce30@pop.unspecific.com> try something like this using awk for x do n=`echo $x | awk -F . '{ $3++ }; { print $1"."$2"."$3 }'` echo "mv -f $x $n" done I made this a shell script called 'rename' and run $ rename /var/log/filename* the echo is just to show what is going on, once verified, it is just 'mv -f ...'. If you are already in a loop in a script, just use the awk part. I am sure there are more elegant ways of doing this, but this works. At 08:18 AM 1/4/2002 -0600, you wrote: >Would someone please help me over another "hump"? I'm still learning... > >I maintain the current version and 3 previous versions of the file >"domains". The files are all in the same directory and the file name >includes a unique date/time stamp and a version number. For example: > >domains.2002-01-04_050500.0 >domains.2002-01-02_052515.1 >domains.2002-01-01_043025.2 >domains.2001-12-30_050500.3 > >'0' is the current version and '3' is the oldest. > >When a new file is created I want to "age" the files and create an open >slot for a new "0" file (3 goes away, 2 becomes 3, 1 becomes 2, and 0 >becomes 1). 
I don't know the date/time stamps, so this is what I tried >in my script: > >mv -f /archive/domains.*.2 /archive/domains.*.3 >mv -f /archive/domains.*.1 /archive/domains.*.2 >mv -f /archive/domains.*.0 /archive/domains.*.1 > >The asterisk apparently acts as a wildcard during the file selection >part of the command, but it is acting as a literal in the renaming >porting of the command (I end up with files named domains.*.3). > >How can I rewrite this to accomplish the task? > >Thanks in advance! > >Rick Matthews > > >_______________________________________________ >http://www.ntlug.org/mailman/listinfo/discuss -- MadHat at unspecific.com From greg at strand3.com Fri Jan 4 12:44:44 2002 From: greg at strand3.com (greg hewett) Date: Fri, 4 Jan 2002 12:44:44 -0600 Subject: [NTLUG:Discuss] Microsoft-Free In-Reply-To: References: <20020104100835.A6637@bistro.marx> Message-ID: <20020104124444.B23701@strand3.com> http://www.cio.com/archive/010102/shop.html On Fri, Jan 04, 2002 at 10:17:31AM -0600, John T. Willis wrote: > There's a good article in CIO this month (which is rare) about the best way > to switch over to a Microsoft-Free enterprise. > > Just thought I'd share... > > TimW > > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss From sjbaker1 at airmail.net Fri Jan 4 15:37:33 2002 From: sjbaker1 at airmail.net (Steve Baker) Date: Fri, 04 Jan 2002 15:37:33 -0600 Subject: [NTLUG:Discuss] Script question References: Message-ID: <3C36209D.C2B94A51@airmail.net> Rick Matthews wrote: > mv -f /archive/domains.*.2 /archive/domains.*.3 > mv -f /archive/domains.*.1 /archive/domains.*.2 > mv -f /archive/domains.*.0 /archive/domains.*.1 > > The asterisk apparently acts as a wildcard during the file selection > part of the command, but it is acting as a literal in the renaming > porting of the command (I end up with files named domains.*.3). 
You evidently expect whatever was matched by the first '*' to be copied into the location of the second '*' on each line. That's not how UNIX/Linux shells work. They simply expand each and every string that contains wildcards into the list of files that match. What *should* be happening is that the shell (not knowing anything about what the 'mv' command does) should blindly translate the asterisk into any existing filename. So, given files: domains.2002-01-04_050500.0 domains.2002-01-02_052515.1 domains.2002-01-01_043025.2 domains.2001-12-30_050500.3 ...the first 'mv' command will be expanded to: mv -f domains.2002-01-01_043025.2 domains.2001-12-30_050500.3 ...and thus rename domains.2002-01-01_043025.2 onto domains.2001-12-30_050500.3 - and not domains.2002-01-01_043025.3 as you apparently expected. Given that, the second 'mv' command will not now find a match for /archive/domains.*.2 (because you renamed domains.2002-01-01_043025.2 and there are no other files ending with '.2'). Some shells will flag an error, others will simply ignore the second wildcard (because it didn't match any files)...and hence run the command with just one filename: mv -f domains.2002-01-02_052515.1 ...which will cause an error to be emitted by the 'mv' program instead. mv: missing file argument However, your shell is evidently set to treat unmatched wildcards literally. I think that's a BAD way to set up your shell. If you have files created with asterisks in their names, you can get into all kinds of nasty problems...and nobody *ever* really wants it to do that. Most shell programs let you choose which of these behaviours you want. Do a 'man bash' and look for 'nullglob' for an explanation. There are lots of options for setting this behaviour in various shells that Linux supports. 
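Steve's description of glob expansion is easy to verify interactively. Here is a small sketch (assuming bash; the throwaway directory and filenames are made up for the demo) showing the default behaviour of passing an unmatched pattern through literally, and how 'nullglob' changes it:

```shell
#!/bin/bash
# Sketch: how bash expands matched vs. unmatched wildcards.
demo=$(mktemp -d)            # throwaway directory for the demo
touch "$demo/domains.2002-01-01_043025.2"
cd "$demo"

echo domains.*.2             # matches -> expands to the real filename
echo domains.*.9             # no match -> the literal string "domains.*.9"

shopt -s nullglob            # now unmatched patterns expand to nothing
echo domains.*.9             # prints an empty line
shopt -u nullglob
```

With 'failglob' set instead, bash reports an error on the unmatched pattern rather than passing it through, which is closer to what csh-family shells do by default.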
----------------------------- Steve Baker ------------------------------- Mail : WorkMail: URLs : http://www.sjbaker.org http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net http://prettypoly.sf.net http://freeglut.sf.net http://toobular.sf.net http://lodestone.sf.net From bbyron at radit.com Fri Jan 4 16:15:53 2002 From: bbyron at radit.com (Bob Byron) Date: Fri, 4 Jan 2002 16:15:53 -0600 Subject: [NTLUG:Discuss] Script question References: Message-ID: <01ce01c1956d$5fc34090$0301a8c0@white> You might give something like this a try: rm domains.*.3 filename=`ls domains.*.2 | cut -d '.' -f 1-2` mv ${filename}.2 ${filename}.3 Then repeat for the other files. Be careful not to delete something you do not intend to delete. Bob ----- Original Message ----- From: "Rick Matthews" To: "Discuss at Ntlug. Org" Sent: Friday, January 04, 2002 8:18 AM Subject: [NTLUG:Discuss] Script question > Would someone please help me over another "hump"? I'm still learning... > > I maintain the current version and 3 previous versions of the file > "domains". The files are all in the same directory and the file name > includes a unique date/time stamp and a version number. For example: > > domains.2002-01-04_050500.0 > domains.2002-01-02_052515.1 > domains.2002-01-01_043025.2 > domains.2001-12-30_050500.3 > > '0' is the current version and '3' is the oldest. > > When a new file is created I want to "age" the files and create an open > slot for a new "0" file (3 goes away, 2 becomes 3, 1 becomes 2, and 0 > becomes 1). I don't know the date/time stamps, so this is what I tried > in my script: > > mv -f /archive/domains.*.2 /archive/domains.*.3 > mv -f /archive/domains.*.1 /archive/domains.*.2 > mv -f /archive/domains.*.0 /archive/domains.*.1 > > The asterisk apparently acts as a wildcard during the file selection > part of the command, but it is acting as a literal in the renaming > porting of the command (I end up with files named domains.*.3). 
> > How can I rewrite this to accomplish the task? > > Thanks in advance! > > Rick Matthews > > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss From Kyle_Davenport at compusa.com Fri Jan 4 16:22:32 2002 From: Kyle_Davenport at compusa.com (Kyle_Davenport@compusa.com) Date: Fri, 4 Jan 2002 16:22:32 -0600 Subject: [NTLUG:Discuss] re: ps2pdf/ghostscript Problem Message-ID: From: Jason Ferguson > Can anyone make heads or tails of this one? ps2pdf refuses to work for > me to convert a .ps file to a .pdf. Here is the output: > Error: /invalidfont in findfont I just tried the same version 6.51 on Mandrake8.1. I did: ps2pdf /usr/share/doc/pam-doc-0.75/ps/pam.ps pam.pdf It had no problem producing pam.pdf, but xpdf didn't display it right. Acroread did. You might verify your dependencies: rpm -q --requires ghostscript From sjbaker1 at airmail.net Fri Jan 4 21:19:18 2002 From: sjbaker1 at airmail.net (Steve Baker) Date: Fri, 04 Jan 2002 21:19:18 -0600 Subject: [NTLUG:Discuss] Script question References: <01ce01c1956d$5fc34090$0301a8c0@white> Message-ID: <3C3670B6.DACAABDA@airmail.net> Bob Byron wrote: > > You might give something like this a try: > rm domains.*.3 > filename=`ls domains.*.2 | cut -d '.' -f 1-2` > mv ${filename}.2 ${filename}.3 It would be simpler & safer (I think) to do this: rm domains.*.3 mv domains.*.2 `basename domains.*.2 .2`.3 mv domains.*.1 `basename domains.*.1 .1`.2 The basename command removes the extension specified in the second argument from the filename mentioned in the first argument. Personally, I'd have structured the problem a little differently and placed the backup copies into subdirectories named '.1', '.2' and '.3' so I could say: rm .3/* mv .2/* .3 mv .1/* .2 ...that's elegant because you can have as many files as you like backed up in each directory and the 'aging' process happens in lockstep for all of them. 
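Steve's subdirectory layout sidesteps the wildcard problem entirely, since whole directories age rather than individual filenames. A minimal sketch of that scheme as a reusable bash function (the `rotate` name, the argument, and the '.1'/'.2'/'.3' slot names are illustrative assumptions, not anything posted to the list):

```shell
#!/bin/bash
# Sketch: age backup generations kept in subdirectories .1 (newest)
# through .3 (oldest) of the given archive directory.
rotate() (
    cd "$1" || exit 1
    mkdir -p .1 .2 .3        # create the slots on first run
    rm -f .3/*               # the oldest generation goes away
    mv .2/* .3/ 2>/dev/null  # 2 becomes 3
    mv .1/* .2/ 2>/dev/null  # 1 becomes 2; .1 is now free for new files
    true                     # an empty slot is not an error
)

# Example usage: age the generations, then drop the fresh snapshot into .1
# rotate /archive && cp domains.2002-01-04_050500 /archive/.1/
```

The subshell body (parentheses instead of braces) keeps the `cd` from leaking into the caller, and aging oldest-first avoids the clobbering that plagues the wildcard `mv` approach.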
----------------------------- Steve Baker ------------------------------- Mail : WorkMail: URLs : http://www.sjbaker.org http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net http://prettypoly.sf.net http://freeglut.sf.net http://toobular.sf.net http://lodestone.sf.net From pauldy at wantek.net Fri Jan 4 23:40:18 2002 From: pauldy at wantek.net (Paul Ingendorf) Date: Fri, 4 Jan 2002 23:40:18 -0600 Subject: [NTLUG:Discuss] Script question In-Reply-To: Message-ID: <000501c195ab$7611aa60$7464a8c0@wantek.net> hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. #!/bin/bash rm -f /archive/*.3 let x=3 for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] do mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x let x=$x-1 done mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 -- -->> mailto:pauldy at wantek.net -->> http://www.wantek.net/ Running ....... Cos anything else would be a waste... `:::' ....... ...... ::: * `::. ::' ::: .:: .:.::. .:: .:: `::. :' ::: :: :: :: :: :: :::. ::: .::. .:: ::. `::::. .:' ::. .:::.....................::' .::::.. -----Original Message----- From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf Of Rick Matthews Sent: Friday, January 04, 2002 8:18 AM To: Discuss at Ntlug. Org Subject: [NTLUG:Discuss] Script question Would someone please help me over another "hump"? I'm still learning... I maintain the current version and 3 previous versions of the file "domains". The files are all in the same directory and the file name includes a unique date/time stamp and a version number. 
For example: domains.2002-01-04_050500.0 domains.2002-01-02_052515.1 domains.2002-01-01_043025.2 domains.2001-12-30_050500.3 '0' is the current version and '3' is the oldest. When a new file is created I want to "age" the files and create an open slot for a new "0" file (3 goes away, 2 becomes 3, 1 becomes 2, and 0 becomes 1). I don't know the date/time stamps, so this is what I tried in my script: mv -f /archive/domains.*.2 /archive/domains.*.3 mv -f /archive/domains.*.1 /archive/domains.*.2 mv -f /archive/domains.*.0 /archive/domains.*.1 The asterisk apparently acts as a wildcard during the file selection part of the command, but it is acting as a literal in the renaming portion of the command (I end up with files named domains.*.3). How can I rewrite this to accomplish the task? Thanks in advance! Rick Matthews _______________________________________________ http://www.ntlug.org/mailman/listinfo/discuss From sjbaker1 at airmail.net Sat Jan 5 00:04:17 2002 From: sjbaker1 at airmail.net (Steve Baker) Date: Sat, 05 Jan 2002 00:04:17 -0600 Subject: [NTLUG:Discuss] Script question References: <000501c195ab$7611aa60$7464a8c0@wantek.net> Message-ID: <3C369761.3068058E@airmail.net> Paul Ingendorf wrote: > > hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. > The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. > > #!/bin/bash > rm -f /archive/*.3 > let x=3 > for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] > do > mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x > let x=$x-1 > done > mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 Just don't run this within a second or two of midnight!
I think the last line should be replaced by: FNAME=/archives/domains.`date +%Y-%m-%d` mv ${FNAME} ${FNAME}.0 I'll forgive you for your Y10K bug though! :-) ----------------------------- Steve Baker ------------------------------- Mail : WorkMail: URLs : http://www.sjbaker.org http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net http://prettypoly.sf.net http://freeglut.sf.net http://toobular.sf.net http://lodestone.sf.net From pauldy at wantek.net Sat Jan 5 06:12:40 2002 From: pauldy at wantek.net (Paul Ingendorf) Date: Sat, 5 Jan 2002 06:12:40 -0600 Subject: [NTLUG:Discuss] Script question In-Reply-To: <3C369761.3068058E@airmail.net> Message-ID: <000001c195e2$46195f60$7464a8c0@wantek.net> It could be done that way this was just the way I choose to do it. BTW it would have to be a lot closer to midnight than one or two seconds. Although I agree it is safer to assign it to a var and just use the var. The better deal would be to simply pass the file as an argument from whatever script creates it. -----Original Message----- From: discuss-admin at ntlug.org [mailto:discuss-admin at ntlug.org]On Behalf Of Steve Baker Sent: Saturday, January 05, 2002 12:04 AM To: discuss at ntlug.org Subject: Re: [NTLUG:Discuss] Script question Paul Ingendorf wrote: > > hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. > The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. > > #!/bin/bash > rm -f /archive/*.3 > let x=3 > for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] > do > mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x > let x=$x-1 > done > mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 Just don't run this within a second or two of midnight! 
I think the last line should be replaced by: FNAME=/archives/domains.`date +%Y-%m-%d` mv ${FNAME} ${FNAME}.0 I'll forgive you for your Y10K bug though! :-) ----------------------------- Steve Baker ------------------------------- Mail : WorkMail: URLs : http://www.sjbaker.org http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net http://prettypoly.sf.net http://freeglut.sf.net http://toobular.sf.net http://lodestone.sf.net _______________________________________________ http://www.ntlug.org/mailman/listinfo/discuss From sysmail at glade.net Sat Jan 5 14:02:29 2002 From: sysmail at glade.net (sysmail@glade.net) Date: Sat, 5 Jan 2002 14:02:29 -0600 (CST) Subject: [NTLUG:Discuss] Script question In-Reply-To: <000501c195ab$7611aa60$7464a8c0@wantek.net> Message-ID: Wouldn't logrotate do all this a lot easier? Just a thought, Carl On Fri, 4 Jan 2002, Paul Ingendorf wrote: > Date: Fri, 4 Jan 2002 23:40:18 -0600 > From: Paul Ingendorf > Reply-To: discuss at ntlug.org > To: discuss at ntlug.org > Subject: RE: [NTLUG:Discuss] Script question > > hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. > The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. 
> > #!/bin/bash > rm -f /archive/*.3 > let x=3 > for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] > do > mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x > let x=$x-1 > done > mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 > > From fredjame at concentric.net Sat Jan 5 21:13:35 2002 From: fredjame at concentric.net (Fred James) Date: Sat, 05 Jan 2002 21:13:35 -0600 Subject: [NTLUG:Discuss] Script question References: Message-ID: <3C37C0DF.4070009@concentric.net> Don't get me wrong - that is a very interesting little gizmo, and one which I may find very useful, too. The credit lines at the end of the man page read: "Author, Erik Troan " - I haven't found logrotate on any of the other UNIXs I work with, is it on distributions of Linux other than Red Hat? sysmail at glade.net wrote: > Wouldn't logrotate do all this a lot easier? > > Just a thought, > > Carl > > On Fri, 4 Jan 2002, Paul Ingendorf wrote: > > >>Date: Fri, 4 Jan 2002 23:40:18 -0600 >>From: Paul Ingendorf >>Reply-To: discuss at ntlug.org >>To: discuss at ntlug.org >>Subject: RE: [NTLUG:Discuss] Script question >> >>hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. >>The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. >> >>#!/bin/bash >>rm -f /archive/*.3 >>let x=3 >>for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] >> do >> mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x >> let x=$x-1 >> done >>mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 >> >> >> > > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss > > > -- ...make every program a filter... 
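For readers wondering what the logrotate equivalent of the scripts in this thread might look like, here is a hypothetical stanza (the path and the choice of options are illustrative, not from the original posts; logrotate itself names the aged copies domains.1, domains.2, domains.3):

```
/archive/domains {
    weekly
    rotate 3
    missingok
    nocompress
}
```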
From ghaass1 at airmail.net Sat Jan 5 21:38:16 2002 From: ghaass1 at airmail.net (GWH Technical Training) Date: Sat, 05 Jan 2002 21:38:16 -0600 Subject: [NTLUG:Discuss] Logrotate... References: <3C37C0DF.4070009@concentric.net> Message-ID: <3C37C6A8.A9E4182D@airmail.net> Logrotate, although not included with the primary system of Solaris, has been available for download at least back as far as Solaris 2.6. It should be able to be downloaded from sunfreeware.com. Pretty much the same functionality as what you have in RH. Hope this helps... Gary Fred James wrote: > Don't get me wrong - that is a very interesting little gizmo, and one > which I may find very useful, too. > > The credit lines at the end of the man page read: "Author, Erik Troan > " - I haven't found logrotate on any of the other UNIXs > I work with, is it on distributions of Linux other than Red Hat? > > sysmail at glade.net wrote: > > > Wouldn't logrotate do all this a lot easier? > > > > Just a thought, > > > > Carl > > > > On Fri, 4 Jan 2002, Paul Ingendorf wrote: > > > > > >>Date: Fri, 4 Jan 2002 23:40:18 -0600 > >>From: Paul Ingendorf > >>Reply-To: discuss at ntlug.org > >>To: discuss at ntlug.org > >>Subject: RE: [NTLUG:Discuss] Script question > >> > >>hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. > >>The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. 
> >> > >>#!/bin/bash > >>rm -f /archive/*.3 > >>let x=3 > >>for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] > >> do > >> mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x > >> let x=$x-1 > >> done > >>mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 > >> > >> > >> > > > > > > _______________________________________________ > > http://www.ntlug.org/mailman/listinfo/discuss > > > > > > > > -- > ...make every program a filter... > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss From Rick at Matthews.net Sat Jan 5 22:59:27 2002 From: Rick at Matthews.net (Rick Matthews) Date: Sat, 5 Jan 2002 22:59:27 -0600 Subject: [NTLUG:Discuss] Script question In-Reply-To: <3C35DB68.7080804@acm.org> Message-ID: Wow, thanks for all the answers! I'm just now getting a chance to work through them. Chris Cox wrote: > This is somewhat unique to your situtation, due to your file naming.. > but you can probably safely eval the statements prior to execution. > Don't quote the arguments to echo below.... > I can modify the naming convention, if that will help. How would you name them? Before you answer that question, you need to know a few additional facts: #- The create_domains_file job may not run for several days, or it may run more than once in a day. #- The use_domains_file job runs independently of the file creation job, and runs once or twice a day. It uses the "current" domains* file, selected by the '0'. #- Much of the data used in the create_domains_file job is received from outside sources. Occasionally, I find out that the data supplied by the outside sources was incorrectly selected (but it seems that is never discovered until after the use_domains_file job has run on the bad data). Today I delete the '0' file, roll back the '1' file to '0', and rerun the use_domains_file job. 
The current naming convention allows an easy roll back and easy identification of the creation date/time of the now-current file. I'll entertain any suggestions of alternate naming conventions. Thanks for your help! Rick From sjbaker1 at airmail.net Sat Jan 5 23:49:19 2002 From: sjbaker1 at airmail.net (Steve Baker) Date: Sat, 05 Jan 2002 23:49:19 -0600 Subject: [NTLUG:Discuss] Script question References: Message-ID: <3C37E55F.AE1C4BAF@airmail.net> Rick Matthews wrote: > I can modify the naming convention, if that will help. How would you > name them? Before you answer that question, you need to know a few > additional facts: Rather than rename the files, I'd create separate directories for each backup level, naming the directories with a leading '.' (as in '.1', '.2', etc.). This has several benefits. * During normal operations you don't even have to *see* the backups. The directories and the files inside them don't clutter normal operations but you can see them with 'ls -a' if you want to. * The whole backup 'aging' mechanism is very simple, extremely safe and it doesn't depend on peculiarities of how your shell is set up: rm .3/* mv .2/* .3 mv .1/* .2 mv * .1 * You can (probably) rerun your application on yesterday's data without upsetting today's data. eg: cd .1 run_my_application * Things don't screw up monumentally if you happen to somehow get more than one file with a '.1', '.2' or '.3' extension. Nearly every script offered in the process of this discussion has had that problem. * If you later change your application so that it generates multiple files each day, you don't have to re-think your entire process. * Programs that understand particular filename extensions will work OK with the backed up files. Also, I'd tend to let the OS's file time stamp do the job of keeping track of time rather than storing the date/time in the filename. Using 'mv' to move a file into a different directory doesn't alter the date/time associated with the file.
The benefit of that is: * You aren't 'fighting' Linux - you are letting its standard features do the work for you. Timestamps stored in the filename are meaningless to programs you don't create yourself - but relying on the creation date means that you can use programs like 'find' to do things like finding all files more recent than a particular date, and most tape backup programs can be told to only back up and restore files that were written within a certain time range, etc. * Having your file always have the same name (no matter what date it was written on) will tend to simplify admin scripts too. Having scripts that say things like: rm myfile_*.dat ...(because the script doesn't want to parse the date)...leads to problems. Suppose for some reason someone does something out of the ordinary like: cp myfile_01-05-2002.dat myfile_saved.dat ...he'll find to his surprise that the admin script removed the copy he saved. It's better if the script is completely precise about the file it removes: rm myfile.dat ...then there is absolutely no risk of problems. All of this is largely a matter of personal taste though. I bet you could come up with a list of reasons NOT to do what I suggest. ----------------------------- Steve Baker ------------------------------- Mail : WorkMail: URLs : http://www.sjbaker.org http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net http://prettypoly.sf.net http://freeglut.sf.net http://toobular.sf.net http://lodestone.sf.net From Rick at Matthews.net Sun Jan 6 00:28:25 2002 From: Rick at Matthews.net (Rick Matthews) Date: Sun, 6 Jan 2002 00:28:25 -0600 Subject: [NTLUG:Discuss] Script question In-Reply-To: <5.1.0.14.0.20020104115456.034bce30@pop.unspecific.com> Message-ID: MadHat wrote: > try something like this using awk > > for x do > n=`echo $x | awk -F . '{ $3++ }; { print $1"."$2"."$3 }'` > echo "mv -f $x $n" > done Neat!
For test purposes, I ran your script in a directory containing: domains.0000.0 domains.1111.1 domains.2222.2 domains.3333.3 The echoed output was: mv -f domains.0000.0 domains.0000.1 mv -f domains.1111.1 domains.1111.2 mv -f domains.2222.2 domains.2222.3 mv -f domains.3333.3 domains.3333.4 Great! Would you help me with one small detail? (I apologize that I don't know how to modify your script to accomplish this!) We need to make it run in inverse order i.e move 3 to 4, then move 2 to 3, then move 1 to 2, then move 0 to 1. (Otherwise 1, 2, 3 & 4 will all be copies of the same file.) Thanks for your help! Rick From cjcox at acm.org Sun Jan 6 02:17:53 2002 From: cjcox at acm.org (Chris & Angela Cox) Date: Sun, 06 Jan 2002 02:17:53 -0600 Subject: [NTLUG:Discuss] Script question References: Message-ID: <3C380831.2EDD67F4@acm.org> Rick Matthews wrote: > > Wow, thanks for all the answers! I'm just now getting a chance to work > through them. > > Chris Cox wrote: > > > This is somewhat unique to your situtation, due to your file naming.. > > but you can probably safely eval the statements prior to execution. > > Don't quote the arguments to echo below.... > > > > I can modify the naming convention, if that will help. How would you > name them? Before you answer that question, you need to know a few > additional facts: > Sorry... I wasn't clear... there's not a problem in your particular case because the extra level of evaluation done in the shell will not have adverse side-effects. > #- The create_domains_file job may not run for several days, or it may > run more than once in a day. > > #- The use_domains_file job runs independently of the file creation job, > and runs once or twice a day. It uses the "current" domains* file, > selected by the '0'. > > #- Much of the data used in the create_domains_file job is received from > outside sources. 
Occasionally, I find out that the data supplied by the > outside sources was incorrectly selected (but it seems that is never > discovered until after the use_domains_file job has run on the bad > data). Today I delete the '0' file, roll back the '1' file to '0', and > rerun the use_domains_file job. The current naming convention allows an > easy roll back and easy identification of the creation date/time of the > now-current file. > > I'll entertain any suggestions of alternate naming conventions. > > Thanks for your help! > > Rick > > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss From Rick at Matthews.net Sun Jan 6 07:07:42 2002 From: Rick at Matthews.net (Rick Matthews) Date: Sun, 6 Jan 2002 07:07:42 -0600 Subject: [NTLUG:Discuss] Script question In-Reply-To: <3C36209D.C2B94A51@airmail.net> Message-ID: Steve Baker wrote: > However, your shell is evidently set to treat unmatched wildcards > literally. I think that's a BAD way to set up your shell. If you > have files created with asterisks in their names, you can get into > all kinds of nasty problems...and nobody *ever* really wants it to > do that. > Most shell programs let you choose which of these behaviours you > want. Do a 'man bash' and look for 'nullglob' for an explanation. > There are lots of options for setting this behaviour in various shells > that Linux supports. Thanks for the information. I'm currently running RH 7.1 and I don't remember making a choice to operate in the way you describe (and I typically choose the default when I don't understand the options). I'm wading through the bash control files; thanks again for the tip. 
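Steve's point about unmatched wildcards can be seen in a few lines of bash. A hypothetical demo (filenames invented for illustration): by default an unmatched glob is passed through literally, which is exactly why a command like 'mv /archive/domains.*.2 /archive/domains.*.3' leaves you with a file literally named 'domains.*.3' when the destination pattern matches nothing.

```shell
#!/bin/bash
# Hypothetical demo of bash glob behavior (filenames invented).
cd "$(mktemp -d)"
touch domains.2002-01-04_050500.0    # only a ".0" file exists

matched=(domains.*.0)     # pattern matches: expands to the real filename
unmatched=(domains.*.9)   # no match: by default the pattern is kept literally

shopt -s nullglob         # with nullglob set, an unmatched glob vanishes
gone=(domains.*.9)

echo "${matched[0]} | ${unmatched[0]} | count=${#gone[@]}"
# prints: domains.2002-01-04_050500.0 | domains.*.9 | count=0
```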
Rick Matthews From Rick at Matthews.net Sun Jan 6 07:23:40 2002 From: Rick at Matthews.net (Rick Matthews) Date: Sun, 6 Jan 2002 07:23:40 -0600 Subject: [NTLUG:Discuss] Script question In-Reply-To: <3C37E55F.AE1C4BAF@airmail.net> Message-ID: Steve, I appreciate your taking the time to explain this method of archiving and its benefits. You build a convincing case! I'm going to review my processes with this in mind. Thanks! Rick _______________________________________________ http://www.ntlug.org/mailman/listinfo/discuss From fredjame at concentric.net Sun Jan 6 09:57:47 2002 From: fredjame at concentric.net (Fred James) Date: Sun, 06 Jan 2002 09:57:47 -0600 Subject: [NTLUG:Discuss] Logrotate...
References: <3C37C0DF.4070009@concentric.net> <3C37C6A8.A9E4182D@airmail.net> Message-ID: <3C3873FB.4050309@concentric.net> So, are you saying it is standard on Red Hat, and downloadable to other *nix's (other distributions of Linux, Solaris, and perhaps SVR4 and BSD derivatives)? GWH Technical Training wrote: > Logrotate, although not included with the primary system of Solaris, has been available for download at least back as far as Solaris 2.6. It should be able to be downloaded from sunfreeware.com. Pretty much the same > functionality as what you have in RH. > > Hope this helps... > > Gary > > > > > > > Fred James wrote: > > >>Don't get me wrong - that is a very interesting little gizmo, and one >>which I may find very useful, too. >> >>The credit lines at the end of the man page read: "Author, Erik Troan >>" - I haven't found logrotate on any of the other UNIXs >>I work with, is it on distributions of Linux other than Red Hat? >> >>sysmail at glade.net wrote: >> >> >>>Wouldn't logrotate do all this a lot easier? >>> >>>Just a thought, >>> >>>Carl >>> >>>On Fri, 4 Jan 2002, Paul Ingendorf wrote: >>> >>> >>> >>>>Date: Fri, 4 Jan 2002 23:40:18 -0600 >>>>From: Paul Ingendorf >>>>Reply-To: discuss at ntlug.org >>>>To: discuss at ntlug.org >>>>Subject: RE: [NTLUG:Discuss] Script question >>>> >>>>hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. >>>>The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. 
>>>> >>>>#!/bin/bash >>>>rm -f /archive/*.3 >>>>let x=3 >>>>for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] >>>> do >>>> mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x >>>> let x=$x-1 >>>> done >>>>mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 >>>> >>>> >>>> >>>> >>> >>>_______________________________________________ >>>http://www.ntlug.org/mailman/listinfo/discuss >>> >>> >>> >>> >>-- >>...make every program a filter... >> >>_______________________________________________ >>http://www.ntlug.org/mailman/listinfo/discuss >> -- ...make every program a filter... From ghaass1 at airmail.net Sun Jan 6 14:42:04 2002 From: ghaass1 at airmail.net (GWH Technical Training) Date: Sun, 06 Jan 2002 14:42:04 -0600 Subject: [NTLUG:Discuss] Logrotate... References: <3C37C0DF.4070009@concentric.net> <3C37C6A8.A9E4182D@airmail.net> <3C3873FB.4050309@concentric.net> Message-ID: <3C38B69C.16C2E37E@airmail.net> Correct... It lives under /usr/sbin/logrotate on the RedHat side. The source RPM can be found on Disk 2 of 2 of the RedHat 7.2 Source Code CD, and is listed as logrotate-3.5.9-1.src.rpm. From this, you should be able to compile it on whatever unix system you would like. I assume that FreeBSD and others have something similar. Gotta love opensource... g Fred James wrote: > So, are you saying it is standard on Red Hat, and downloadable to other > *nix's (other distributions of Linux, Solaris, and perhaps SVR4 and BSD > derivatives)? > > GWH Technical Training wrote: > > > Logrotate, although not included with the primary system of Solaris, has been available for download at least back as far as Solaris 2.6. It should be able to be downloaded from sunfreeware.com. Pretty much the same > > functionality as what you have in RH. > > > > Hope this helps...
> > > > Gary > > > > > > > > > > > > > > Fred James wrote: > > > > > >>Don't get me wrong - that is a very interesting little gizmo, and one > >>which I may find very useful, too. > >> > >>The credit lines at the end of the man page read: "Author, Erik Troan > >>" - I haven't found logrotate on any of the other UNIXs > >>I work with, is it on distributions of Linux other than Red Hat? > >> > >>sysmail at glade.net wrote: > >> > >> > >>>Wouldn't logrotate do all this a lot easier? > >>> > >>>Just a thought, > >>> > >>>Carl > >>> > >>>On Fri, 4 Jan 2002, Paul Ingendorf wrote: > >>> > >>> > >>> > >>>>Date: Fri, 4 Jan 2002 23:40:18 -0600 > >>>>From: Paul Ingendorf > >>>>Reply-To: discuss at ntlug.org > >>>>To: discuss at ntlug.org > >>>>Subject: RE: [NTLUG:Discuss] Script question > >>>> > >>>>hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. > >>>>The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. > >>>> > >>>>#!/bin/bash > >>>>rm -f /archive/*.3 > >>>>let x=3 > >>>>for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] > >>>> do > >>>> mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x > >>>> let x=$x-1 > >>>> done > >>>>mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 > >>>> > >>>> > >>>> > >>>> > >>> > >>>_______________________________________________ > >>>http://www.ntlug.org/mailman/listinfo/discuss > >>> > >>> > >>> > >>> > >>-- > >>...make every program a filter... > >> > > > > -- > ...make every program a filter...
> > _______________________________________________ > http://www.ntlug.org/mailman/listinfo/discuss From fredjame at concentric.net Sun Jan 6 17:16:09 2002 From: fredjame at concentric.net (Fred James) Date: Sun, 06 Jan 2002 17:16:09 -0600 Subject: [NTLUG:Discuss] Logrotate... References: <3C37C0DF.4070009@concentric.net> <3C37C6A8.A9E4182D@airmail.net> <3C3873FB.4050309@concentric.net> <3C38B69C.16C2E37E@airmail.net> Message-ID: <3C38DAB9.60402@concentric.net> Thank you for your reply. I run Red Hat, and so I was able to find "logrotate" and read all about it - and I think it is pretty nifty, if you are running Red Hat. The fourth tenet of "The UNIX Philosophy" (Mike Gancarz) is "Choose portability over efficiency", and I face a fair number of non Red Hat systems. I could spend my time making them all BASH (and perhaps Red Hat) compatible, but the owners would want justification, and if any of the critical software failed, the first thing support wants to know is what changed - I like "logrotate" but I don't think I'll port it. GWH Technical Training wrote: > Correct... > It lives under /usr/sbin/logrotate on the RedHat side. > The source RPM can be found on Disk 2 of 2 of the RedHat 7.2 Source Code CD, > and is listed as logrotate-3.5.9-1.src.rpm. From this , you should be > able to compile it on whatever unix system you would like. I assume > that FreeBSD and others have something similiar. > > Gotta love opensource... > > g > > > > > Fred James wrote: > > >>So, are you saying it is standard on Red Hat, and downloadable to other >>*nix's (other distributions of Linux, Solaris, and perhaps SVR4 and BSD >>derivatives)? >> >>GWH Technical Training wrote: >> >> >>>Logrotate, although not included with the primary system of Solaris, has been available for download at least back as far as Solaris 2.6. It should be able to be downloaded from sunfreeware.com. Pretty much the same >>>functionality as what you have in RH. >>> >>>Hope this helps... 
>>> >>>Gary >>> >>> >>> >>> >>> >>> >>>Fred James wrote: >>> >>> >>> >>>>Don't get me wrong - that is a very interesting little gizmo, and one >>>>which I may find very useful, too. >>>> >>>>The credit lines at the end of the man page read: "Author, Erik Troan >>>>" - I haven't found logrotate on any of the other UNIXs >>>>I work with, is it on distributions of Linux other than Red Hat? >>>> >>>>sysmail at glade.net wrote: >>>> >>>> >>>> >>>>>Wouldn't logrotate do all this a lot easier? >>>>> >>>>>Just a thought, >>>>> >>>>>Carl >>>>> >>>>>On Fri, 4 Jan 2002, Paul Ingendorf wrote: >>>>> >>>>> >>>>> >>>>> >>>>>>Date: Fri, 4 Jan 2002 23:40:18 -0600 >>>>>>From: Paul Ingendorf >>>>>>Reply-To: discuss at ntlug.org >>>>>>To: discuss at ntlug.org >>>>>>Subject: RE: [NTLUG:Discuss] Script question >>>>>> >>>>>>hmm shell scripts yum gotta throw in how I would do it even though I see a few fine examples have already been posted. >>>>>>The following would also move the files appropriately. Just copy and paste this in a file chmod +x it and remember to create the new backup file as /archive/domains.`date +%Y-%m-%d` and it should work properly every time. >>>>>> >>>>>>#!/bin/bash >>>>>>rm -f /archive/*.3 >>>>>>let x=3 >>>>>>for file in /archive/domains.[0-9][0-9][0-9][0-9]-[0-1][0-9]-[0-3][0-9].[0-3] >>>>>> do >>>>>> mv $file /archive/`echo $file | sed -e "s/\.[0-9]^//g"`.$x >>>>>> let x=$x-1 >>>>>> done >>>>>>mv /archive/domains.`date +%Y-%m-%d` /archive/`date +%Y-%m-%d`.0 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>_______________________________________________ >>>>>http://www.ntlug.org/mailman/listinfo/discuss >>>>> >>>>> >>>>> >>>>> >>>>> >>>>-- >>>>...make every program a filter... >>>> >>>>_______________________________________________ >>>>http://www.ntlug.org/mailman/listinfo/discuss >>>> >>>> >>>+?i??0?{e? >>>+f?s(S(Ys(Y"?b???~??S(?.ss== >>> >>> >>-- >>...make every program a filter... 
>> >>_______________________________________________ >>http://www.ntlug.org/mailman/listinfo/discuss >> -- ...make every program a filter... From brian at pongonova.net Sun Jan 6 20:45:55 2002 From: brian at pongonova.net (brian@pongonova.net) Date: Sun, 6 Jan 2002 20:45:55 -0600 Subject: [NTLUG:Discuss] Logrotate... In-Reply-To: <3C38DAB9.60402@concentric.net> References: <3C37C0DF.4070009@concentric.net> <3C37C6A8.A9E4182D@airmail.net> <3C3873FB.4050309@concentric.net> <3C38B69C.16C2E37E@airmail.net> <3C38DAB9.60402@concentric.net> Message-ID: <20020106204555.A4299@turquoise.pongonova.net> On Sun, Jan 06, 2002 at 05:16:09PM -0600, Fred James wrote: > The fourth tenet of "The UNIX Philosophy" (Mike Gancarz) is "Choose > portability over efficiency", and I face a fair number of non Red Hat > systems. I could spend my time making them all BASH (and perhaps Red > Hat) compatible, but the owners would want justification, and if any of > the critical software failed, the first thing support wants to know is > what changed - I like "logrotate" but I don't think I'll port it. Maybe newsyslog, then? It's been around since the 80's, probably can find the source code. I know it's part of the OpenBSD distro... --Brian