Rank: Newbie
Joined: 2/28/2008 Posts: 1 Location: UK
|
Hi, I'm doing an eval of DiffDog and I'm running into a hard "Out of memory" error when opening a 2nd XML file. Both files are approximately 105 MB. Is there any way around this? TIA
|
Rank: Advanced Member
Joined: 12/27/2006 Posts: 31 Location: Vienna
|
Hello Martin,
Due to the way DiffDog analyzes and processes the files, the only way around this would be to significantly increase the available hardware memory and processor speed.
Christophe
Support Engineer
|
Rank: Newbie
Joined: 6/27/2011 Posts: 1 Location: Longview, TX
|
Since this is a topic of interest to me, I'll add a few numbers simply to give a data point of success with large files. On a 3 GHz Xeon processor with Windows XP and 2 GB of RAM, it was possible to load two ~55 MB XML data files and diff them as XML. When diffed as text, the comparison showed 1800+ differences; when diffed as XML, only about 450.
In my case, with a single processor, the CPU is pegged at 100% while the diff runs and the system is basically unusable... though this might be due to a nearly full system disk, which could leave the page file badly fragmented. It might be worth moving the page file to a clean, unfragmented disk.
I could retain a modicum of usability by taking the priority of the DiffDog process down a notch, but it is still very sluggish.
During the text diff, the page file appeared to grow to about 1.1 GB.
I'm not exactly sure how much it matters under the hood, but the native encoding of the files being diffed was UTF-16LE. I converted them to UTF-8 before diffing, which halved the files from over 100 MB down to about 55 MB each.
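For anyone wanting to try the same trick, here is a minimal sketch of the UTF-16 to UTF-8 conversion in Python. The function name and file paths are just examples; it streams the file in chunks so the conversion itself stays memory-friendly even for ~100 MB inputs.

```python
def convert_utf16_to_utf8(src_path, dst_path, chunk_chars=1 << 20):
    """Re-encode a UTF-16 text file (BOM-aware) as UTF-8, streaming in chunks."""
    with open(src_path, "r", encoding="utf-16") as src, \
         open(dst_path, "w", encoding="utf-8", newline="") as dst:
        while True:
            chunk = src.read(chunk_chars)  # read up to chunk_chars characters
            if not chunk:
                break
            dst.write(chunk)

# Example (hypothetical file names):
# convert_utf16_to_utf8("data_utf16.xml", "data_utf8.xml")
```

One caveat: if the XML declaration says `encoding="UTF-16"`, you may want to update it to `UTF-8` (or drop it) after converting, since some parsers trust the declaration over the actual bytes.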
|