[NTLUG:Discuss] OT Perl vs C question

Chris J Albertson alb at chrisalbertson.com
Tue Apr 26 19:01:03 CDT 2005


It seems to me that it would be easier, more reliable, and save on disk
space to access the SQL server directly from Perl instead of copying the
SQL result sets into temporary local files. When you read in the files,
you have to parse the data, eating up CPU cycles. The SQL result sets
are already parsed into columns by the nature of the RDBMS. Perl's DBI
module (with the appropriate DBD driver for your database) comes to mind
here as a solution.
Flat files are evil. :)
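
Something along these lines (an untested sketch; the DSN, table, and
column names are placeholders you would swap for your own) pulls the rows
straight out of the database, already split into columns, and totals them
on the fly:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Connect to the database -- substitute your driver, DSN, user, password
    my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 1 });

    # The RDBMS hands back each row already parsed into columns,
    # so there is no flat-file splitting to burn CPU cycles on
    my $sth = $dbh->prepare('SELECT acct, region, amount FROM sales');
    $sth->execute;

    my %total;
    while ( my ($acct, $region, $amount) = $sth->fetchrow_array ) {
        $total{$region} += $amount;    # consolidate and total on the fly
    }

    $sth->finish;
    $dbh->disconnect;

    # quick formatted report, one line per region
    printf "%-12s %12.2f\n", $_, $total{$_} for sort keys %total;

Note that %total simply grows as new keys show up -- no declaring sizes or
allocating space ahead of time, which speaks to the question about
"unlimited arrays" quoted below.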

Chris


> # man perlcc
>
> Produces huge binaries, but you might save that second.  :-)
>
>
> On 4/26/05, Albert Modderkolk <mdd at advwebsys.com> wrote:
>> Probably Perl will produce these reports a month earlier but a second
>> later...
>>
>> Ciao, Albert
>>
>> Fred James wrote:
>> > All
>> > I have a multi-part data-to-report processing project
>> > (1) get some data (flat files produced by SQL queries)
>> > (2) process the data (dump it all into array(s), consolidating and
>> > totaling on the fly)
>> > (3) produce reports (back through the array(s), a few final
>> > calculations, formatting, and output formatted report(s))
>> >
>> > I am considering Perl because it seems to have the nice feature of
>> > unlimited arrays without declaring them or pre-allocating space (is that
>> > true?)
>> >
>> > So, my question:  In a moderate-volume data processing project, say
>> > reading 7 flat files of 3 to 7 fields each, and more than 500,000
>> > records each, and doing something like steps 2 and 3 (above), how does
>> > Perl compare to C in terms of speed?
>> >
>> > Thank you in advance for any help you may be able to offer
>> > Regards
>> > Fred James
>> >
>>
