
Temp file handling with local vs. network destinations



  • In PA2010B3, when using “use current folder as temp” (UCFAT) in PA Backup, I noticed two things: one curious, the other an opportunity for enhancement.

    1. With UCFAT on, if the destination for the archive is a local hard drive, the .TMP file appears there when the backup starts, but its size stays at 0. However, checking the available space on the destination drive during the backup shows it going down as expected. At the very end, the .TMP file disappears and is instantly replaced by the archive, which has the expected size.

    This is fine and optimal AFAIK.

    2. With UCFAT on, if the destination for the archive is a network hard drive, the .TMP file also appears there when the backup starts, but its size grows continuously during the backup. When the green indicator shows that all files have been processed, the file stops growing. Then the archive file appears in the same folder and grows from 0 up to the size of the .TMP file. Then the .TMP disappears, and voilà, “Backup Done”.

    3. With UCFAT off, and the destination again a network drive, PA creates the .TMP file locally and then copies it, which is the only possible approach in that case.

    Would it be possible, in the second case, to simply rename the .TMP file and save time?
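    To illustrate the suggestion: when the .TMP file already sits on the destination volume, an atomic rename could replace the copy-then-delete pass observed in case 2. This is a minimal sketch in Python (not PowerArchiver's actual code; `finalize_archive` and the paths are hypothetical), showing that a same-volume rename is a metadata operation whose cost does not depend on archive size:

    ```python
    import os
    import tempfile

    def finalize_archive(tmp_path: str, final_path: str) -> None:
        # os.replace() renames atomically when source and destination are on
        # the same filesystem; no data is re-copied, unlike a copy + delete.
        os.replace(tmp_path, final_path)

    # Demo: write a fake temp archive, then "finalize" it in place.
    workdir = tempfile.mkdtemp()
    tmp = os.path.join(workdir, "backup.zip.tmp")
    final = os.path.join(workdir, "backup.zip")
    with open(tmp, "wb") as f:
        f.write(b"archive bytes")
    finalize_archive(tmp, final)
    print(os.path.exists(final), os.path.exists(tmp))  # True False
    ```

    The one caveat is that a rename is only atomic and cheap on the same volume; if the temp file and the final archive were on different volumes, the copy in case 3 really is unavoidable.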

    I ran tests doing exactly the same backup to a network drive, format=zip, method=optimal, compression=ultra, with the only difference being UCFAT either on or off. With it off, it took 20% less time.

    If you choose settings that make compression faster and therefore produce a larger archive (depending on content), the file-copy time goes up too. The share of total time spent copying rather than compressing then rises even faster, making the potential savings from rename vs. copy even higher.

