    Support for mobile devices and android?
    Chris Steinbeck Bell
    Hello. I would like to know if there is some way to open .pa files on mobile devices, such as ones running Android. I wanted to use it on my Samsung phone, but I could not find an app that would let me open those files. Does somebody know how to open them, or is there a PowerArchiver version for mobile? Thanks in advance.
    General (Testing, Performance, Usage, Questions)
    Make .PA open to other programs?
    A
    Good morning, I bought PowerArchiver Toolbox because I find the .pa format brilliant! The problem with this format is that only PowerArchiver can open it, so it won’t become as famous as .rar or .7z! In my opinion it is fine to keep exclusivity over creating the format, but not over extraction; otherwise it will never be used by anyone, because few people have PowerArchiver! I hope that one day other extractors will be able to extract the .pa format, so I can create archives and send them to whomever I want!
    General (Testing, Performance, Usage, Questions)
    PowerArchiver 2019 Toolbox International with Advanced Codec Pack
    F
    Hi, I have a problem. On 03.09.2020 I bought this program and received the registration data, which I entered into the application (copy and paste); the application restarted and again reported that it is not registered. I tried several times, online and offline; I logged in to the web account, sent the activation email again, and entered its data, but I still can’t register it. So I wrote to support, and on September 3, 2020, three hours later, they sent me a screenshot showing that they had managed to activate the program. I thought it might have something to do with Windows, so I reinstalled. But that didn’t solve the problem. It cannot be activated in either version 19.00.59 or version 20.00.53. When I wrote to them that I still can’t do it, I sent them both screenshots and a video, even after reinstallation, and I even attached a dxdiag file - now they don’t answer me anymore and I don’t know what to do :-( It’s been a week since I sent it. Any suggestions please? Thanks.
    General (Testing, Performance, Usage, Questions)
    Poor compression of >20GB exe/msi/cab sample
    N
    A poor PA compression ratio was reported:
    Uncompressed: 21,657,900,590 bytes
    7-Zip (your package): 2,662,732,158 bytes
    PowerArchiver 17.00.90 (Optimize Strong): 3,398,179,937 bytes
    Find the package at: https://mega.nz/#!0aRDiAKQ!lrwtC64jnkk4d0ZKjcVGgLKPCcOqyUSyAQ62JJtQZOM
    General (Testing, Performance, Usage, Questions)
    Advanced Codec Pack - engine list of changes
    spwolf
    Here is the list of changes for the Advanced Codec Pack engine:

13-12-2016 09:56 vE4
- zstd2 enc. progress reporting fix
- plzma4 progress fix
- plzma4 buffering changes

21-12-2016 00:10 vE5
- initial version of x64flt4

25-12-2016 06:39 vE6
- x64flt4 update: single compressed stream instead of 3; rc speed opt; compression improvement; rc flush after 256k bytes without any addr output
- x64comp filter, for addr stream compression with bcj2/x64flt3

25-01-2017 10:11 vE7
- mpzapi filter: potential support for external executables as .pa filters; potential support for executables that don’t work with stdin/stdout (via winapi hooks)

26-01-2017 13:18 vE7
- lepton filter, same exewrap lib
- bugfix: “Data error” with mpzapi when extracting non-solid archive

27-01-2017 07:48 vE7
- lepton fallback: files are now stored if lepton quits without writing anything
- bugfix: lepton inputs that can’t be correctly restored are now reported during compression, not decompression
- bugfix: removed mpzapi.exe crashes during extraction

31-01-2017 07:44 vE7
- packmp3 support

10-02-2017 20:00 vE7
- plzma MT bugfix

12-02-2017 22:16 vE8
- lepton fix to use two output streams; s0=fallback, s1=lepton
- lepton fix to use chunksize param for fallback; lepton:c=400M uses 400M inpbuf

13-02-2017 08:47 vE8
- bsc support added (as “bsc3”), with :c#M, :x0,3-6, :a1/2 as params

13-02-2017 23:39 vE8
- bsc3: added lc param - lc0-lc2 means cf/cp/ca, lc4 means -r (lc6 = -ca -r)

20-02-2017 12:19 vE8
- plzma4: 32-bit outpos bugfix
- plzma4: loop_enc EOF check fix

26-02-2017 19:51 vE8
- bwt1/qlfc filters added

27-02-2017 05:30 vE8
- divsufsort.dll rebuilt with gcc

28-02-2017 04:46 vE8
- bwt1: chunksize bugfix
- bwt1/qlfc: disable chunksize alignment to 1M

01-03-2017 22:26 vE8
- added qlfc2:mt=#:c=# - qlfc with integrated MT wrapper

03-03-2017 14:58 vE9
- updated qlfc2/MTwrap
- added bwt2:mt=#:c=#

08-03-2017 06:56 vE9
- BUG: plzma4 decoder memory leak
- BUG: workaround for divsufsort’s inverse_bw_transform doing nothing for n=1

09-03-2017 09:31 vE9
- BUG: 7z function CHandler::IsFolderEncrypted is buggy (outdated)
- update bwt1 to 5N version (was 6N)
- x64flt3: remove zero padding at the end (left from debug)

10-03-2017 07:52 vE9
- reflate update to ver 1l2 (bugfix)

10-03-2017 13:41 vE9
- BUG: ppmd_sh incorrectly parses memory size
- BUG: ppmd_sh UpdateModel bugfix

13-03-2017 12:08 vE9
- added coro_init() call to deltb::Init()

14-03-2017 17:25 vE9
- BUG: all x64flt filters got stuck on files shorter than 8 bytes

15-03-2017 05:57 vE9
- 7z k_Scan_NumCodersStreams_in_Folder_MAX limit increased to 512

17-03-2017 15:45 vE9
- reflate speed optimization (23% faster on x64, 8% on x86)

20-03-2017 03:30 vE9
- BUG: lepton failed during encoding of some files; added exitcode check

25-03-2017 04:32 vF0
- switched default encryption to winaes

30-03-2017 09:28 vF0
- BUG: sometimes there’s not enough memory for winaes decrypting

04-04-2017 19:15 vF0
- added MTwrap-based MT zstd as zstd3 - seems incompatible with zstd2 for some reason

06-04-2017 14:57 vF0
- zstd3: update to 1.1.4 library

06-04-2017 15:31 vF0
- zstd3: fall back to zstd 1.1.0 - 1.1.4 is slower

22-04-2017 17:42 vF1
- plzma (plain single-threaded one)
- bwts, bwtsh, bwtsl, bwt1h, bwt1l, cdm, cdm2

24-04-2017 04:49 vF1
- bwt2h, bwt2l
- mtwrap min_chunk workaround

06-06-2017 02:57 vF1
- BUG: bwt2/bwt blklen=2 incorrect handling
- mtwrap decoder buffer increased to 2*chunksize
- mtwrap/bufring anti-MT updates
- IC17->IC18 for x64 build

08-06-2017 20:18 vF2
- rep2 = rep1 + MTwrap // :c -> :d
- added PPMD codec from original 7z (vH)

14-06-2017 13:07 vF2
- BUG: zstd cQuit called instead of dQuit

15-06-2017 13:27 vF2
- BUG: mtwrap used memcpy on overlapped memory
- BUG: mtwrap had duplicate memcpys
- partial buffer flush at the end of BWT2 block
- updated version_info

16-06-2017 09:33 vF2
- !!! all mtwrap codecs lost compatibility (rep2, cdm2, zstd3, bwt2, bwt2l, bwt2h, qlfc2) !!!
- BUG: mtwrap handling of l=0xFFFE blocks
- BUG: mtwrap handling of l=0x0001 blocks

17-06-2017 12:20 vF2
- restored vF0-compatible zstd3, bwt2, qlfc2; test scripts included
- BUG: freezing bug is finally solved by adding dynamic buffering to the MTwrap decoder
- mtwrap decoder input buffer reduced from 2C to 1C
- following codecs use MTwrap_v3: BWT3, BWT3H, BWT3L, QLFC3, cdm2, rep2, zstd4

18-06-2017 23:00 vF2
- BUG: another mtwrap freezing bug - mtwrap didn’t notice when a thread with empty input quit without outputting anything
- a 32-bit variable was used for thread EOF flags, so max mt32 was supported; updated to 64

19-06-2017 20:09 vF2
- BUG: freezing/data error in zstd4

21-06-2017 16:01 vF2
- BUG: data errors in bwt3
- refactored mtwrap/loop_dec

21-06-2017 22:54 vF3
- archive cleanup
- modded packmp3 for mp3 compression

25-06-2017 21:11 vF3
- ppmd_sh2 added (dX can be used instead of mem=X)
- ppmd_sh reverted back into enc/dec template

01-07-2017 13:02 vF3
- alpha version of mp3det+packmp3b combo (x86 and x64 are incompatible)

02-07-2017 21:14 vF3
- packmp3b updated with mtwrap

03-07-2017 06:41 vF3
- BUG: x86 version of packmp3b crashes on decoding (problem with IC and floats; /fp:strict fixed it)
- BUG: memory leak in packmp3b
- BUG: crash on decoding of test4a.mp3

04-07-2017 18:14 vF4
- source cleanup; removed some experimental codecs etc.

06-07-2017 11:31 vF4
- BUG: packmp3b formats created by 32-bit and 64-bit 7z.dll are different
- packmp3b compression slightly retuned towards 320kbit

16-08-2017 22:53 vF5
- reflate2 = reflate/mtwrap; e.g. reflate2:x9:c10M

27-08-2017 16:23 vF5
- dropped packmp3/packmp3a codecs (and corresponding .exe)
- added lepton2, aka lepton-slow-best-ratio

11-09-2017 07:10 vF5
- added precomp as precomp:mt4:c64M

12-09-2017 08:20 vF5
- BUG: fixed precomp to not use the same tempfile names in all instances
- disabled console input in precomp on file overwrite
- enabled ZIP/PNG/PDF/GZIP parsing in precomp
- updated precomp handler

13-09-2017 12:30 vF5
- added jojpeg for jpeg compression (solid and with detection, but slow); s0=bin, s1=compressed

14-09-2017 15:23 vF5
- added packmp3c (2x slower than packmp3b, 1-2% better compression)

15-09-2017 15:03 vF5
- packmp3c bugfix (scfsi flags), slightly worse compression

19-09-2017 16:00 vF5
- BUG: forgot coro_init for jojpeg

19-09-2017 17:12 vF5
- updated precomp to 0.4.6

20-09-2017 01:37 vF5
- jojpeg switched to gcc dlls (35% faster)
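Many of the entries above concern MTwrap, a generic wrapper that adds chunked multithreading to an otherwise single-threaded codec. A minimal sketch of that idea, using zlib as a stand-in codec (the length-prefixed framing here is illustrative, not the actual MTwrap format):

```python
import struct
import zlib
from concurrent.futures import ThreadPoolExecutor

def mt_compress(data: bytes, chunk_size: int = 1 << 20, threads: int = 4) -> bytes:
    """Split the input into fixed-size chunks and compress them in parallel.
    Each compressed chunk is framed with its length so a decoder could also
    process chunks independently."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        packed = list(pool.map(zlib.compress, chunks))
    return b"".join(struct.pack("<I", len(p)) + p for p in packed)

def mt_decompress(blob: bytes) -> bytes:
    """Walk the length-prefixed frames and inflate each chunk
    (done sequentially here for simplicity)."""
    out, pos = [], 0
    while pos < len(blob):
        (n,) = struct.unpack_from("<I", blob, pos)
        pos += 4
        out.append(zlib.decompress(blob[pos:pos + n]))
        pos += n
    return b"".join(out)
```

Chunk splitting costs some ratio, since matches cannot cross chunk boundaries - the same single-thread vs. MT trade-off discussed elsewhere in these threads.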
    General (Testing, Performance, Usage, Questions)
    Some test results of mp3 to .pa (.pa experimental)
    pirrbe
    General (Testing, Performance, Usage, Questions)
    Experimental Codecs - info, updates
    spwolf
    This is a thread about the Experimental Codecs used with PowerArchiver when the Experimental Codecs option is checked. Currently used Experimental Codecs (from PA 17.00.81): mp3det filter + packmp3b codec = mp3 codec, currently around 2.5% better compression than WinZip ZIPX and 3x faster on 8t cpus. *** Please note, experimental codecs are for testing purposes only and will be used only when the experimental checkbox is checked. Quite likely there will be no backwards compatibility with the finished versions of the codecs, so please use them only for testing.
    General (Testing, Performance, Usage, Questions)
    settings for wav/sf2 files (from 17.00.68)
    spwolf
    Thread for discussion of delta/plzma4:a0 settings, moved from: https://forums.powerarchiver.com/topic/5747/fast-ring-powerarchiver-2017-17-00-67-68-69 since on a version-specific thread it would get pushed down the thread list quite fast.
    General (Testing, Performance, Usage, Questions)
    Releasing unpacking library
    J
    Do you plan to release unpacking library, so 3rd party software can extract PA format as well? It would be great and certainly would expand the format.
    General (Testing, Performance, Usage, Questions)
    brilliant format
    D
    This format compresses Word files better than rar, zip, and 7zip - thanks, PowerArchiver!
    General (Testing, Performance, Usage, Questions)
    Optimized Strong, initial tests speed/compression
    spwolf
    Hello @Alpha-Tester. Let's test the Optimized Strong methods a bit and see what works and what can be improved. The relationship between codec and filter parameters, as well as the number of threads, is a complicated one, and while we have tried to automate it in the best possible way, improvements are still possible. @skypx has a nice cpu for testing 16t performance, for instance. It would be interesting to see what maximizes performance for the Optimized Strong Maximum and Ultra options, because they use different entropy models (a0 lzma, a1 or lzmarec) which provide different performance - lzmarec is much stronger but also slower to extract, which is where our parallel decode helps. Debug mode can help log all of this.
    General (Testing, Performance, Usage, Questions)
    Filters: Reflate - (pdf/docx recompression)
    spwolf
    Filter: Reflate

What is it?
Reflate is an advanced deflate recompression filter designed to improve the compression of files containing deflate streams. Obvious examples are pdf, docx, xlsx, swf, and png, but deflate streams can be found in many other files, usually in the form of png images.

Where to use it?
Optimized Strong mode - PowerArchiver will automatically compress all pdf, docx/xlsx, png, swf, etc. files with the Reflate filter.
PLZMA4 codec - you can enable the reflate filter manually.

Advantages: Much better compression of PDF, DOCX, and other files with deflate streams - between 30%-50% on average (vs 5% for regular archivers). PDFs that are mostly big pictures won't compress as well (especially if they are jpegs), but results will still be substantially better than with regular codecs.

Disadvantage: Slower speed.

Examples:

FY17_Proposed_Budget_Vol_1.pdf (Austin Texas Budget 2016/2017) - 20,157 kb
PowerArchiver (Extreme): 9,994 kb
WinRar (best): 18,356 kb
7zip (Ultra): 18,336 kb
WinZip (Zipx/Lzma): 18,411 kb

oig-work-plan-2016.pdf (Office of Inspector General plan 2016) - 4,165 kb
PowerArchiver (Extreme): 1,346 kb
WinRar (best): 3,790 kb
7zip (Ultra): 3,791 kb
WinZip (Zipx/Lzma): 3,784 kb

Analysis: Good-case scenario. The images are likely pngs, plus there is a lot of text that compresses very well.
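The core trick behind deflate recompression can be sketched with Python's zlib: inflate the stream, then verify that some deflate setting reproduces the original bytes exactly, so extraction stays lossless. This is a simplified illustration of the general idea, not PowerArchiver's actual reflate implementation (which also handles streams that no setting reproduces, via correction data):

```python
import zlib

def try_recompress(deflate_blob: bytes):
    """Inflate a zlib stream, then check whether some compression level
    reproduces the original bytes exactly. If yes, an archiver could store
    the raw data (far more compressible by a strong codec) plus the level;
    if no, a real filter like reflate stores a correction diff instead."""
    raw = zlib.decompress(deflate_blob)
    for level in range(10):
        if zlib.compress(raw, level) == deflate_blob:
            return raw, level  # bit-exact roundtrip possible
    return None, None  # would need a correction stream

# A stream we produced ourselves roundtrips at its original level:
blob = zlib.compress(b"text with deflate-friendly repetition " * 40, 6)
raw, level = try_recompress(blob)
```

Real-world streams (from pdf producers, old zlib builds, etc.) often don't roundtrip at any level, which is exactly why reflate needs the diff mechanism and why it is slower than plain codecs.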
    General (Testing, Performance, Usage, Questions)
    FMA-REP - info and test results (.pa)
    spwolf
    (This article is a work in progress.)

What is fma-rep?
A deduplication filter based on anchor hashing. Technically LZ77, but it has no entropy coding, and only longer matches have a chance to be replaced with a reference. It has much lower memory requirements than lzma, so it can be used to compensate for lzma's smaller window/dictionary size.

Examples: Official ISOs from Microsoft for Windows 10 Pro and Office 2016. Due to the large file sizes, and the fact that fma-rep takes much less memory than plzma4, it is very useful for large software installation DVDs that already contain a lot of compressed data. The best approach is to use a large fma-rep1 window with a fast codec, to achieve good compression at very fast speed.

Tests (AMD FX8320 with 16GB RAM and SSD):

Office 2016 Pro ISO - 1,985,392 kB
.pa (Zstandard2, x64flt, bcj2, fma-rep1): 36s encode, 37s decode - 1,551,741 kB
.rar (Normal): 128s encode, 13s decode - 1,892,471 kB

Windows 10 Pro ISO
.pa (Zstandard2, x64flt, bcj2, fma-rep1): 87s encode, 77s decode - 3,577,849 kB
.rar (Normal): 314s encode, 27s decode - 3,838,188 kB

Sharepoint Server 2013
.rar (Normal): 369s encode, 15s decode - 2,269,782 kB
.zip (WZ 21 Normal): 47s encode, 13s decode - 2,305,755 kB
.pa (Zstandard2, x64flt, bcj2, fma-rep1): 61s encode, 41s decode - 1,955,468 kB
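The anchor-hashing idea can be sketched with generic content-defined chunking. This is a toy illustration (a rolling window-sum hash with invented parameters), not fma-rep's actual algorithm:

```python
import hashlib

def cdc_chunks(data: bytes, win: int = 16, mask: int = 255):
    """Content-defined chunking with a rolling window sum as a toy anchor
    hash: declare a chunk boundary wherever the sum of the last `win` bytes
    has its low bits all zero. Identical content then produces identical
    boundaries regardless of file offset, which is what lets a dedup
    filter find repeats."""
    chunks, start, s = [], 0, 0
    for i, b in enumerate(data):
        s += b
        if i >= win:
            s -= data[i - win]  # slide the window
        if i + 1 - start >= win and (s & mask) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup(data: bytes):
    """Store each distinct chunk once; repeats become back-references -
    roughly what an LZ77-style dedup pass does for long matches."""
    seen, stored, refs = {}, [], []
    for c in cdc_chunks(data):
        key = hashlib.sha256(c).digest()
        if key not in seen:
            seen[key] = len(stored)
            stored.append(c)
        refs.append(seen[key])
    return stored, refs
```

Because boundaries depend only on the last few bytes of content, a repeated region in the middle of an ISO yields the same chunks wherever it occurs, so repeats collapse to references even when they are far apart - without lzma-sized memory.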
    General (Testing, Performance, Usage, Questions)

    Poor compression of >20GB exe/msi/cab sample

    General (Testing, Performance, Usage, Questions)
    52 Posts, 6 Posters, 42.9k Views
    diskzip

      I can test locally here, but I am having a hard time configuring PA for testing at maximum compression settings. Where and how can I configure PA for the best available compression settings?

    diskzip @eugene

        @eugene said in Poor compression support:

        See diskzip’s previous post about this - https://encode.ru/threads/?p=53578&pp=1
        PA can compress this set better, just not with default GUI settings or some such.
        My current theory is that diskzip compares single-threaded vs MT results here,
        where MT works by blockwise splitting of data.
        Basically, its not enough to compare archive size here, paq8px would also compress better, so what.

        DiskZIP’s compression here uses two threads precisely.

    diskzip @spwolf

          @spwolf said in Poor compression support:

          @diskzip … shelwien on encode.ru.

          Seems like some multimedia data that might like some multimedia filter applied… I will take a look sometimes next week to see where is the difference. Thanks.

          Not really multimedia at all. Primarily application binaries and pre-compressed application runtimes.

    spwolf (conexware) @diskzip

    @diskzip a lot of repeat data that works well with a 1.5GB dictionary… you can set plzma to 2000M in settings, and mt to 1, and let's see how it works.

    I tested with a 720m dictionary and that got it down another 400MB. But that's about the limit of my 12GB laptop.

    When it comes to testing, having a test case that requires 18GB-25GB of free RAM is just too hard and obscure to test. It would be better to have some sample that uses proper multithreading and a reasonable dictionary that users will actually end up using - for instance 128m and 8t.

    When it comes to comparing lzma2 to our plzma with the lzmarec entropy coder, you should see around a 2%-4% improvement, all other things being equal.

    For us, while lzmarec is nice and can always show an improvement over lzma2 at the same settings, it is not the main point of the PA format… more important are all these other codecs - mp3, lepton/jpeg, reflate for pdf/docx/deflate, bwt for text, mt ppmd for some multimedia files, a deduplication filter for everything that runs at 50-60MB/s, etc. - and how it all works automatically and multithreaded.

    [attached: two screenshots of the settings dialog]

    diskzip @spwolf

              @spwolf Is it necessary to restrict PA to only one thread? DiskZIP obtained this result on two threads, not one.

              How about other filters - do I need to override any of those settings as well, especially in light of the large number of binaries included in my distribution?

    diskzip @spwolf

                @spwolf said in Poor compression support:

    @diskzip a lot of repeat data that works well with 1.5GB dictionary… you can set plzma to 2000M in settings, and mt to 1, and lets see how it works. […]

    So, following the order of these instructions, the custom dictionary setting was lost. I had to repeat that step - glad I double-checked. Not the most intuitive UI, if you are open to a bit of negative feedback.

    Another negative tidbit: it took about 1 minute for the operation to initiate (for the compressing-files window to appear) after I clicked the Finish button.

                Not the best user experience really, but I am excited to see what actual compression savings will result.

    spwolf (conexware) @diskzip

    @diskzip yeah, I noticed I posted the wrong order, but I figured you would figure it out… we have to reset the settings so that users who enter wrong ones can go back to defaults, but otherwise users can easily save a profile with those settings and then always use that profile.

    spwolf (conexware) @diskzip

                    @diskzip said in Poor compression support:

                    @spwolf Is it necessary to restrict PA to only one thread? DiskZIP obtained this result on two threads, not one.

                    How about other filters - do I need to override any of those settings as well, especially in light of the large number of binaries included in my distribution?

    No, you don't need to do anything else… you are actually using 7z.exe and lzma2, right? lzma2 uses 2 threads per dictionary when it comes to memory - so in this case it is 11.5 x 1536M. Plzma is different not only due to the different entropy coder, but also because it is a parallel version of lzma, so multiple threads are used for both compression and extraction. It also has a larger maximum dictionary, at 2000M.

    Of course, even with mt1, there are multiple threads being used, depending on files, size, and extension - for instance, the lzmarec entropy coder uses more than 1 thread anyway, and we also always use some extra filters.

    In any case, what is the maximum dictionary you use in your product? I am sure it is not 1.5G, since that's 18GB of RAM usage?

    diskzip @spwolf

                      @spwolf said in Poor compression support:

    @diskzip said in Poor compression support: […]

    no, you dont need to do anything else… […] In any case, what is maximum dictionary you use in your product? I am sure it is not 1.5G since thats 18GB of ram usage?

    DiskZIP doesn't invoke 7z.exe; we have our own low-level wrapper around 7-Zip. Unlike PowerArchiver, though, we don't actually implement our own custom algorithm(s) or change the default 7-Zip compression in any way (other than exposing 7-Zip functionality in a nice, structured API with callbacks, etc.) - we also license this 7-Zip library to third parties for their use.

    The result with PA using your exact settings is 2.86 GB; I am at a loss to understand why PA has performed so poorly on this data set.

    Our dictionary is indeed exactly 1.5 GB - this is the 7-Zip maximum at present (and even this already presents some problems with extraction on 32-bit systems, due to memory fragmentation). It is LZMA2, of course, and with 2 threads.

    I may have misreported the memory requirements - but don't blame me, blame the Windows Task Manager! I see it going up to 17.x GB (so cap it at 18 GB) with the 1.5 GB dictionary. With a 1 GB dictionary, it goes up to 10 GB (give or take a gigabyte).

    spwolf (conexware) @diskzip

    @diskzip interesting, I got 2.83G with a 720m dictionary… It just has a lot of similar files, so a large dictionary with lzma does wonders there. Doesn't seem like there is anything else to it.

    Memory usage for lzma2 is 11.5x the dictionary size for each 2-thread group in the mt setting.

    But how many users have the >=24GB required for such a setting, though?
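The 11.5x figure above can be sanity-checked against the Task Manager numbers diskzip reported. Note the multiplier is a rule of thumb from this thread, not an official LZMA2 formula:

```python
def lzma2_mem_estimate(dict_bytes: int, factor: float = 11.5) -> float:
    """Rough encoder memory for one 2-thread LZMA2 group, using the
    ~11.5x-dictionary rule of thumb quoted in this thread."""
    return dict_bytes * factor

GIB = 1024 ** 3
# 1536M dictionary -> 17.25 GiB, matching the reported "17.x GB"
print(lzma2_mem_estimate(1536 * 1024 ** 2) / GIB)
# 1 GiB dictionary -> 11.5 GiB, in the same ballpark as the reported ~10 GB
print(lzma2_mem_estimate(1 * GIB) / GIB)
```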

    nikkho (Alpha Testers)

                          @nikkho said in Poor compression support:

                          PowerArchiver 17.00.90 (Optimize Strong): 3,398,179,937 bytes

    I used a 1GB dictionary on PA, and the size went down to 2.79 GB:

                          • PowerArchiver 17.00.91 (Optimize Strong 1GB): 3,004,242,466 bytes
                          • PowerArchiver 17.00.90 (Optimize Strong): 3,398,179,937 bytes
    spwolf (conexware) @nikkho

                            @nikkho said in Poor compression support:

    I used 1GB dictionary on PA, and things went reduced to 2,79GB:

                            • PowerArchiver 17.00.91 (Optimize Strong 1GB): 3,004,242,466 bytes
                            • PowerArchiver 17.00.90 (Optimize Strong): 3,398,179,937 bytes

    I tried both 7zip Ultra and PA Strong at 128m, and 7z was 4.59GB while PA was 3.17GB.

    This large difference is likely due to rep working on the similar files. But rep has a limit of 2GB, so it likely misses a lot on 20GB samples. Still, it sure is nice to work at mt8 and have it done in 3x less time :)

    diskzip

    Well, our target goal here is at least 2.48 GB, which is what DiskZIP is able to achieve with an out-of-the-box 7-Zip compression engine under the hood.

    I was hoping for PA to reduce that further, into the neighborhood of even 2 GB, or at least a symbolic reduction over the “raw” upload size.

    It is great to see my own product outperforming all else, but in the interest of advancing the state of the art in compression, I would hope for more third-party competition :)

    eugene (conexware) @diskzip

                                It should be possible to reach a better result with PA.

                                1. plzma should support a larger window than 1536M
                                2. a1/lzmarec mode might provide a few % better compression than a0/lzma
                                3. rep1 dedup filter has parameters that can be tweaked too.
                                  Or it might be better to disable it instead, when p/lzma with huge window is used.
                                4. reflate might work on some files.
                                5. x64flt3 exe filter should be better than bcj2
                                6. deltb filter should have some effect on exes too
                                7. we can tweak file ordering
                                  Atm we don’t have a PC with >20GB of memory around, so we can’t do these experiments.
                                  And anyway, I’d not expect that much gain here, because we don’t have LZX recompression atm,
                                  which is what is necessary for many of these cab/msi files.
                                  As to .7z files, I guess I can integrate my existing lzma recompressor easily enough, but it won’t have that much effect.
    diskzip @eugene

                                  @eugene said in Poor compression support:

    It should be possible to reach a better result with PA.

    1. plzma should support a larger window than 1536M
    2. a1/lzmarec mode might provide a few % better compression than a0/lzma
    3. rep1 dedup filter has parameters that can be tweaked too.
    4. reflate might work on some files.
    5. x64flt3 exe filter should be better than bcj2
    6. deltb filter should have some effect on exes too
    7. we can tweak file ordering

    […]

    Interesting thoughts. I have myself lost access to the 32 GB RAM machine for the next 10 days or so, but I will be glad to retest as soon as I have that access again. In the meanwhile, I have a 16 GB RAM machine which I will try to retest on.

                                  Some thoughts:

    1. I tried with 2 GB per the instructions.
    2. How to configure these?
    3. I was counting on dedup for huge savings. Would it conflict with LZMA, or would it be best to enable it?
    4. I don't think there are many ZIP streams in the dataset.
    5. That sounds very exciting. Is it a custom PA filter? Is it for 64-bit binaries only, or does it also cover 32-bit binaries?
    6. Same as #5.
    7. This must be tweaked; even DiskZIP cannot compress well unless the file ordering is sorted instead of “random”.

    For LZX recompression, you probably won't be hampered by digital signatures (for when you end up having it), right?

    On that note - some of the LZXs may contain Microsoft's delta repacks, which may be more problematic than ordinary LZX decompression.

    The one piece of good news for the .7z files is that they are all stored uncompressed/raw - so the only benefit lost is proper file sorting across the bigger data set.

                                  • spwolfS Offline
                                    spwolf conexware @diskzip
                                    last edited by

                                    @diskzip do you plan to add full MT support for 7z? I think that is a must-have if you want people to use your tool over 7z. Otherwise, it is much easier to test 7z just by using 7zFM, since we can use 8-thread CPUs properly and it cuts testing time on 20 GB files by a significant margin (35 min vs 140 min for this test on my computer).

                                    Or does DiskZIP do anything else for 7-Zip that affects compression? Are results different between 7z and DiskZIP using 7z?

                                    • eugeneE Offline
                                      eugene conexware @diskzip
                                      last edited by eugene

                                      In the meanwhile, I have a 16 GB RAM machine which I will try to retest on.

                                      You should be able to use a 1G dictionary there, at least.

                                      I tried with 2 GB per the instructions.

                                      2GB is wrong - I suggested 2000M; 2GB is 2048M.
                                      The problem is that the bt4 match finder uses 32-bit indexes, and there’s a dual buffer for the window
                                      (to avoid special handling for wrap-around).
                                      Then there are also some special margins, so using precisely 2^32/2 for the window size
                                      is also impossible.
                                      I’m not sure about the precise maximum for the window size, so you could start with 1536M
                                      and try increasing it, I guess.
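
                                      To make the 2000M-vs-2GB distinction concrete, here is a quick illustrative calculation (the exact margins are implementation details of the match finder, so treat the numbers as an upper bound rather than the real limit):

```python
# Illustrative only: why a full 2 GiB LZMA window is out of reach when the
# match finder uses 32-bit byte indexes and keeps a dual buffer for the window.
ADDRESS_SPACE = 2**32          # bt4 match-finder indexes are 32-bit

def max_window_bytes(margin_mb):
    # Dual buffering means the window is effectively held twice, so the usable
    # window is at most half the index space, minus some implementation margin.
    return ADDRESS_SPACE // 2 - margin_mb * 2**20

for margin in (0, 64, 128):
    print(margin, "MiB margin ->", max_window_bytes(margin) // 2**20, "MiB max window")
```

                                      With any nonzero margin the ceiling drops below 2048 MiB, which is why 2000M works but “2GB” does not, and why 1536M is a safe starting point.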

                                      How to configure these?

                                      Try looking around in all options windows/tabs?
                                      Otherwise, just try testing with 7zdll/7zcmd, you should have the links?

                                      I was counting on dedup for huge savings.
                                      Would it conflict with LZMA or would it be best to enable it?

                                      The current dedup filter (rep1) only supports up to a 2000M window too,
                                      due to the same issues as LZMA, so with the same window it would only
                                      hurt LZMA compression.

                                      Compression-wise, srep should be better atm, but it’s also slower
                                      and relies on temp files too much.
                                      And in any case, you should understand that we can’t just use Bulat’s
                                      tools in a commercial app.

                                      In fact, a dedup filter improvement is planned - I’m just busy
                                      with other codecs atm.
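
                                      For readers unfamiliar with what a long-range dedup filter actually does, here is a toy sketch of the idea (fixed-size blocks and a plain hash table; real filters like rep1 or srep use rolling hashes, content-defined matching, and far larger windows):

```python
# Toy long-range dedup in the spirit of a "rep"-style filter: replace repeats
# of fixed-size blocks within a limited window by (offset, length) references.
import hashlib

BLOCK = 16

def dedup(data, window=1 << 20):
    seen = {}                       # block hash -> earliest position
    out = []                        # ('raw', bytes) or ('ref', offset, length)
    i = 0
    while i + BLOCK <= len(data):
        h = hashlib.blake2b(data[i:i + BLOCK], digest_size=8).digest()
        j = seen.get(h)
        if j is not None and i - j <= window and data[j:j + BLOCK] == data[i:i + BLOCK]:
            out.append(('ref', j, BLOCK))   # repeat found: emit a back-reference
            i += BLOCK
        else:
            seen.setdefault(h, i)
            out.append(('raw', data[i:i + 1]))
            i += 1
    out.append(('raw', data[i:]))           # trailing partial block
    return out

def rebuild(items):
    # Inverse transform: back-references copy from already-rebuilt output.
    buf = bytearray()
    for item in items:
        if item[0] == 'raw':
            buf += item[1]
        else:
            _, off, length = item
            buf += buf[off:off + length]
    return bytes(buf)
```

                                      The window limit in the sketch is the same constraint being discussed: a reference can only reach back as far as the filter’s window, which is why a dedup window no larger than LZMA’s own adds nothing.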

                                      I don’t think there’s many ZIP streams in the dataset.

                                      There are plenty of cab archives with MSZip compression, though.
                                      Like all .msu files, for example.

                                      That sounds very exciting. Is it a custom PA filter?
                                      Is it for 64-bit binaries only, or does it also cover 32-bit binaries?

                                      It does more or less the same as bcj2 for 32-bit binaries (hopefully better),
                                      and it also adds support for RIP addressing in x64 binaries.
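
                                      For context, the classic trick that such executable filters build on can be sketched like this (this is the generic BCJ idea, not PA’s actual filter; an x64 version would additionally rewrite RIP-relative displacements):

```python
# Sketch of the BCJ idea: x86 CALL (0xE8) instructions carry 32-bit *relative*
# targets, so two calls to the same function from different places have
# different operand bytes. Converting targets to *absolute* addresses makes
# repeated calls byte-identical, which LZ compressors can then match.
import struct

def bcj_encode(code: bytes) -> bytes:
    buf = bytearray(code)
    i = 0
    while i + 5 <= len(buf):
        if buf[i] == 0xE8:                                  # CALL rel32
            rel = struct.unpack_from('<i', buf, i + 1)[0]
            absolute = (rel + i + 5) & 0xFFFFFFFF           # target vs. stream start
            struct.pack_into('<I', buf, i + 1, absolute)
            i += 5
        else:
            i += 1
    return bytes(buf)

def bcj_decode(code: bytes) -> bytes:
    buf = bytearray(code)
    i = 0
    while i + 5 <= len(buf):
        if buf[i] == 0xE8:
            absolute = struct.unpack_from('<I', buf, i + 1)[0]
            rel = (absolute - i - 5) & 0xFFFFFFFF           # restore relative form
            struct.pack_into('<I', buf, i + 1, rel)
            i += 5
        else:
            i += 1
    return bytes(buf)
```

                                      bcj2 refines this by separating the converted addresses into their own stream; the filter described above presumably does something similar for RIP-relative x64 code.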

                                      Same as #5.

                                      Yes. The normal delta filter in 7z simply subtracts bytes at a given step.
                                      (For example, it would be delta:4 for 16-bit stereo WAVs.)

                                      deltb, meanwhile, is an adaptive delta filter which tries to detect binary tables
                                      in the data. It’s not very good for multimedia, but can be quite helpful for exes.
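
                                      As an illustration of what delta:4 does (a minimal sketch, not 7-Zip’s actual code):

```python
# Fixed-step delta filter like 7-Zip's "delta:N": subtract the byte N positions
# back from each byte. For 16-bit stereo PCM a step of 4 aligns each byte with
# the same byte of the previous sample frame, turning smooth audio into small
# residuals that compress far better.
def delta_encode(data: bytes, step: int = 4) -> bytes:
    out = bytearray(data)
    for i in range(len(out) - 1, step - 1, -1):     # backwards, so sources stay original
        out[i] = (out[i] - out[i - step]) & 0xFF
    return bytes(out)

def delta_decode(data: bytes, step: int = 4) -> bytes:
    out = bytearray(data)
    for i in range(step, len(out)):                 # forwards, rebuilding as we go
        out[i] = (out[i] + out[i - step]) & 0xFF
    return bytes(out)
```

                                      On a slowly rising signal the encoded output collapses to a run of near-constant bytes, which is exactly what the downstream compressor wants to see.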

                                      For LZX recompression, you probably won’t be hampered by digital signatures
                                      (for when you end up having it), right?

                                      All our recompression is lossless, so hashes/CRCs/signatures should still match
                                      on decoding, because the extracted archive will be exactly the same.

                                      There’s a bigger problem with LZX though - it supports window sizes up to 2M
                                      and does optimal parsing, so a reflate equivalent for LZX might turn out too slow,
                                      or would generate too much recovery data (if optimal parsing is not reproduced in the recompressor).

                                      But at least for LZX it might still be possible, while for LZMA it likely isn’t.
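
                                      The “recovery data” scheme can be illustrated with deflate, since Python ships zlib (a concept sketch only; the real reflate works at the token level and stores a small diff rather than falling back to the whole stream):

```python
# Concept sketch of lossless recompression (the reflate idea): store the
# *decompressed* data plus whatever recovery info is needed to rebuild the
# original compressed bytes bit-exactly, so signatures over the archive
# still verify after the round trip.
import zlib

def analyze(original_stream: bytes):
    raw = zlib.decompress(original_stream)
    # Try to guess the encoder settings; if one reproduces the stream exactly,
    # no recovery data is needed at all.
    for level in range(9, -1, -1):
        if zlib.compress(raw, level) == original_stream:
            return raw, ('level', level)
    # Otherwise fall back to keeping the full stream as recovery data
    # (a real recompressor would store a compact diff instead).
    return raw, ('verbatim', original_stream)

def restore(raw: bytes, recovery):
    kind, val = recovery
    if kind == 'level':
        return zlib.compress(raw, val)
    return val
```

                                      The point made above falls out of this sketch: the harder the encoder’s parsing is to reproduce (LZX’s optimal parsing, or LZMA), the more often the guess fails and the larger the recovery data grows.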

                                      On that note - some of the LZX’s may have Microsoft’s delta repacks, which
                                      may be more problematic than just ordinary LZX decompression.

                                      Yes, and there’s also LZMS, a newer LZX upgrade with support for >2M windows,
                                      x64 code preprocessing, etc.
                                      And then MS also uses quite a few other compression algorithms (XPRESS, Quantum, LZSS, …).
                                      But it’s a lot of work to write a recompressor even for a single format,
                                      so we don’t have any plans for these atm.

                                      It’s much more interesting to look into direct applications of what we already have first,
                                      like reflate-based recompression for png/zip/pdf, adding a level/winsize detector to reflate, etc.

                                      The one piece of good news about the .7z files is that they are all stored
                                      uncompressed/raw - so the only benefit lost is proper file sorting across
                                      the bigger data set.

                                      Yes, it could be a good idea to write recompressors for popular archive formats,
                                      even without support for their codecs - just turn the archive into a folder
                                      and extract the raw data corresponding to the files named in the archive.
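
                                      For ZIP with stored members, that idea can be sketched in a few lines (illustrative only; a full container recompressor would also record the archive’s metadata so the original file can be rebuilt bit-exactly):

```python
# Sketch of "recompress the container, not the codec": split an archive into
# its members so the outer compressor sees the raw file data and can sort and
# dedup across it. Shown here for ZIP entries that are *stored* (uncompressed),
# where the raw bytes are identical to the member's file content.
import zipfile

def explode(path):
    members = {}
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            if info.compress_type == zipfile.ZIP_STORED:
                members[info.filename] = zf.read(info.filename)
    return members
```

                                      Since the .7z files in the test set are stored raw, this kind of container handling would recover the cross-archive sorting benefit without needing any codec recompression at all.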

                                      • D Offline
                                        diskzip @spwolf
                                        last edited by

                                        @spwolf said in Poor compression support:

                                        @diskzip do you plan to add full MT support for 7z? I think that is a must-have if you want people to use your tool over 7z. Otherwise, it is much easier to test 7z just by using 7zFM, since we can use 8-thread CPUs properly and it cuts testing time on 20 GB files by a significant margin (35 min vs 140 min for this test on my computer).

                                        Or does DiskZIP do anything else for 7-Zip that affects compression? Are results different between 7z and DiskZIP using 7z?

                                        DiskZIP is fully multi-threaded, but the default compression profiles all favor smaller archive size over processing speed, so you would need to edit your compression settings in the DiskZIP GUI to spread usage over more cores. I am escalating this request internally to see where the magic happens here.

                                        Note that with standard 7-Zip (or DiskZIP, which consumes standard 7-Zip through a structured DLL interface), you need to limit thread counts to two to obtain the best results. While LZMA2 has been optimized to spread the workload across multiple threads, doing so always does substantial harm to the compression savings realized.

                                        DiskZIP does not do anything that affects compression, so results should be 100% identical between 7-Zip and DiskZIP.

                                        • spwolfS Offline
                                          spwolf conexware @diskzip
                                          last edited by

                                          @diskzip said in Poor compression support:

                                          @spwolf said in Poor compression support:

                                          @diskzip do you plan to add full MT support for 7z? I think that is a must-have if you want people to use your tool over 7z. Otherwise, it is much easier to test 7z just by using 7zFM, since we can use 8-thread CPUs properly and it cuts testing time on 20 GB files by a significant margin (35 min vs 140 min for this test on my computer).

                                          DiskZIP is fully multi-threaded, but the default compression profiles all favor smaller archive size over processing speed, so you would need to edit your compression settings in the DiskZIP GUI to spread usage over more cores. I am escalating this request internally to see where the magic happens here.

                                          Note that with standard 7-Zip (or DiskZIP, which consumes standard 7-Zip through a structured DLL interface), you need to limit thread counts to two to obtain the best results. While LZMA2 has been optimized to spread the workload across multiple threads, doing so always does substantial harm to the compression savings realized.

                                          DiskZIP does not do anything that affects compression, so results should be 100% identical between 7-Zip and DiskZIP.

                                          I could not find anything. Even for smaller sets that need less memory, it only uses around 20% of my CPU (an 8-thread CPU), while with the same files and settings 7z uses up to 100%.

                                          With only the dictionary changed to d128M, I get a 60 MB smaller file using 7-Zip than using DiskZIP. Something you can try on your end as well - maybe some other setting needs to be changed?

                                          At that point, PA is smaller by 960 MB… with a big LZMA dictionary, it is basically being used as dedup. I tested d768m with 7z and the difference went down to 30-40 MB.

                                          It will be interesting to see more results on my test computers once I am back from vacation, in some 15 days. I will be able to test with various settings at that point, while right now I can only use a laptop and a run takes more than 2 hrs.

                                          • D Offline
                                            diskzip @spwolf
                                            last edited by

                                            @spwolf said in Poor compression support:

                                            end

                                            OK, DiskZIP uses all available CPU cores with a 16 MB dictionary or smaller, and a maximum of 3 CPU cores with a 32 MB dictionary. A 64 MB dictionary or larger results in a core limit of 2.

                                            Apparently these numbers are heuristic limits from a long time ago. Do you think we should move up the dictionary limits somewhat?
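
                                            Spelled out as code, the heuristic described above looks roughly like this (the function shape and the behavior between the stated sizes are assumptions; only the 16/32/64 MB breakpoints come from the post):

```python
# The dictionary-size -> thread-count heuristic: the allowed thread count
# shrinks as the dictionary grows, since each extra LZMA2 thread effectively
# splits the data and costs compression ratio.
MB = 1 << 20

def max_threads(dict_size: int, cores: int) -> int:
    if dict_size <= 16 * MB:
        return cores            # small dictionary: use every available core
    if dict_size <= 32 * MB:
        return min(cores, 3)    # 32 MB dictionary: at most three cores
    return min(cores, 2)        # 64 MB and up: cap at two threads
```

                                            This would explain the ~20% CPU usage seen earlier: with a large dictionary the cap of two threads dominates on an 8-thread machine, regardless of how many cores are free.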
