Hi all.
We're currently using BC v3 to compare our current program output against the previous version's output, to make sure that only expected changes are present.
However, we have a few CSV files that are very large (300+ megs) with 60+ columns of data. Since these are CSVs and we need to ignore expected differences in specific columns, we have to use the Data Compare rather than the Text Compare.
The problem I'm having is how long it takes to process this data. I'm nearly an hour into my first comparison (these files are 200+ megs but only 3 or 4 columns wide) and I'm wondering if there's a way to speed it up. If it takes this long for 4 columns, I'm toast when I get to the files with 60+ columns.
The Text Compare runs much faster, but we've added a few columns in the current program version, and since the Text Compare matches whole lines, the new columns make every line show as different, so it's not really a benefit.
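For context, what I'm effectively after is a comparison that streams both files and only looks at the columns the two versions share, skipping the newly added ones. A minimal sketch of that idea outside of BC (hypothetical helper, not anything BC provides) might look like:

```python
import csv

def csv_rows_differ(old_path, new_path, ignore_cols=()):
    """Stream two CSVs and yield the line numbers whose shared columns differ.

    Columns named in ignore_cols are skipped, and columns that exist only in
    the new file are ignored automatically, so added columns don't flag
    every single line the way a plain text compare does.
    """
    with open(old_path, newline="") as f_old, open(new_path, newline="") as f_new:
        old_reader = csv.DictReader(f_old)
        new_reader = csv.DictReader(f_new)
        # Compare only the columns both files share, minus the ignored ones.
        shared = [c for c in old_reader.fieldnames
                  if c in new_reader.fieldnames and c not in ignore_cols]
        # start=2 because line 1 is the header row.
        for line_no, (old_row, new_row) in enumerate(
                zip(old_reader, new_reader), start=2):
            if any(old_row[c] != new_row[c] for c in shared):
                yield line_no
```

Because it reads both files row by row rather than loading and aligning them, something like this stays roughly linear in file size, which is the kind of speed I was hoping the Data Compare could get closer to.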
Any thoughts or suggestions would be appreciated.
Thanks,
Frank