• Also enhance ZIPX compression support (for JPEG and MP3)

    Unsolved
    Hello! While I think the PA is a great format, I have the problem that most of my contacts use WinZIP and can therefore only process ZIPX when exchanging JPEG and MP3 files. PowerArchiver supports extraction of all ZIPX archives, but when compressing to ZIPX, the really useful JPEG, MP3, and … algorithms are unfortunately not supported. Please consider adding these ZIPX compression capabilities for better interoperability. Thanks!
• Show estimated memory usage in Add window (.pa)

    Solved
    spwolf
    @werve @Mili
  • Add Data Integrity Support (e.g. Parchive) to .pa Format

    Moved Unsolved
    BinTech
    It would be very nice to see some integrated parity/recovery-record options for the .pa format. History-wise, I think only WinACE and WinRAR have had that feature; ACE is dead, and RAR is not my cup of tea overall. PowerArchiver has grown a lot, and the PA format is a nice development. It would really be great to see some sort of recovery-data option: either add such data directly to the archive, or create PAR2/PAR3 data for the archive in one seamless process.
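To make the request above concrete, here is a toy sketch of how a recovery record works in principle: split the data into equal-size blocks, store one extra XOR parity block, and rebuild any single lost block from the survivors. This is only an illustration of the idea; real PAR2/PAR3 uses Reed-Solomon codes and can survive multiple missing blocks.

```python
BLOCK = 4  # toy block size in bytes

def make_parity(blocks):
    """XOR all data blocks together into one parity block."""
    parity = bytearray(BLOCK)
    for b in blocks:
        for i, byte in enumerate(b):
            parity[i] ^= byte
    return bytes(parity)

def recover(blocks, parity, lost_index):
    """Rebuild the block at lost_index from the surviving blocks plus parity."""
    rebuilt = bytearray(parity)
    for j, b in enumerate(blocks):
        if j == lost_index:
            continue
        for i, byte in enumerate(b):
            rebuilt[i] ^= byte
    return bytes(rebuilt)

data = b"ABCDEFGHIJKL"  # 12 bytes -> three 4-byte blocks
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
parity = make_parity(blocks)

# Pretend block 1 (b"EFGH") was lost and reconstruct it.
restored = recover(blocks, parity, lost_index=1)
assert restored == blocks[1]
```

The "seamless process" requested above would amount to the archiver computing such parity data at archive-creation time and storing it next to (or inside) the .pa file.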
  •
    spwolf
    @tokiwarthoot @eugene would know more about why exactly it does not work that way. The real benefit of fma-rep is that it works across the whole data set but uses a lot less memory. For instance, if we quickly compress the MS Office 2016 ISO (conveniently picked because it is around 2 GB), we can see the real benefit of fma-rep over LZ coders with a 64 MB dictionary: [image: 1485808490005-upload-2c8d7666-f844-4420-94cd-e9cdb2d9e801-resized.png] Most of the difference is due to fma-rep alone, and some is due to lzmarec. So the improvement in the new fma-rep2 will help not only speed but also these kinds of backups.
  • JPEG codec

    Unsolved
    eugene
    Many popular formats (.jpg, .pdf, .docx) use outdated methods of data compression, so it started making sense to remove that outdated compression and apply a better one, then do it backwards to restore the file. It usually only makes sense to do this losslessly, because otherwise it is impossible to automate: some other file can always contain a hash of this one, or something similar. That is what we call "recompression", while a lossy transformation would be called "re-encoding".
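A minimal sketch of the recompression round trip described above, using zlib as a stand-in for the "outdated" coder and LZMA as the better one: strip the old compression, store the payload with the stronger coder, then reverse the process and verify the restored bytes are bit-identical, so any external hash of the original still matches. (A real JPEG/PDF recompressor is much harder, since it must reproduce the original encoder's exact bitstream, not just equivalent data.)

```python
import hashlib
import lzma
import zlib

# Stand-in for an input file whose internal stream uses a weaker coder.
original = zlib.compress(b"some text " * 1000)

# Recompress: undo the old coder, store the payload with a stronger one.
payload = zlib.decompress(original)
stored = lzma.compress(payload)

# Restore: undo the stronger coder, re-apply the original coder with the
# same (default) settings so the output is byte-for-byte identical.
restored = zlib.compress(lzma.decompress(stored))

assert restored == original
assert hashlib.sha256(restored).digest() == hashlib.sha256(original).digest()
```

The lossless requirement from the post is exactly the two assertions at the end: if they hold, any other file that stores a hash of the original will still validate after the round trip.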