• Experimental Codecs - info, updates

    Pinned
    21
    0 Votes
    21 Posts
    18k Views
    spwolfS
    @nikkho said in Experimental Codecs - info, updates: @spwolf I will look for a 320 Kbps test set. It is strange, but most of my downloaded music is VBR, not CBR. Any kind of samples will be interesting for sure…
  • Advanced Codec Pack - engine list of changes

    Pinned
    7
    2 Votes
    7 Posts
    11k Views
    spwolfS
    @nikkho we are not using jojpeg yet… we will see if we end up using it; it is still too slow for something like PA.
  • Filters: Reflate - (pdf/docx recompression)

    Pinned Moved
    2
    0 Votes
    2 Posts
    5k Views
    A
    OK, this sounds awesome. Can’t wait to test this out.
  • FMA-REP - info and test results (.pa)

    Pinned Moved
    1
    0 Votes
    1 Posts
    4k Views
    spwolfS
    (This article is a work in progress.)
    What is fma-rep? A deduplication filter based on anchor hashing. Technically it is LZ77, but it has no entropy coding, and only longer matches have a chance to be replaced with a reference. It has much lower memory requirements than lzma, so it can be used to compensate for lzma's smaller window/dictionary size.
    Examples: official ISOs from Microsoft for Windows 10 Pro and Office 2016. Because of the large file sizes, and the fact that fma-rep takes a lot less memory than plzma4, it is very useful for large software installation DVDs that already contain a lot of compressed data. The best approach is to use a large fma-rep1 window with a fast codec, which achieves good compression at very high speed.
    Tests (AMD FX8320 with 16 GB RAM and SSD):
    Office 2016 Pro ISO - 1,985,392 kB
    - .pa (Zstandard2, x64flt, bcj2, fma-rep1): 36s encode, 37s decode - 1,551,741 kB
    - .rar (Normal): 128s encode, 13s decode - 1,892,471 kB
    Windows 10 Pro ISO
    - .pa (Zstandard2, x64flt, bcj2, fma-rep1): 87s encode, 77s decode - 3,577,849 kB
    - .rar (Normal): 314s encode, 27s decode - 3,838,188 kB
    SharePoint Server 2013
    - .rar (Normal): 369s encode, 15s decode - 2,269,782 kB
    - .zip (WZ 21 Normal): 47s encode, 13s decode - 2,305,755 kB
    - .pa (Zstandard2, x64flt, bcj2, fma-rep1): 61s encode, 41s decode - 1,955,468 kB
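    For readers curious how such a filter can work, below is a minimal, hypothetical Python sketch of an anchor-hash deduplication pass in the spirit described above: content-defined anchors put block hashes into a table, and only sufficiently long, byte-verified matches become back-references, with no entropy coding at all. The constants, names, and token format are illustrative assumptions, not the actual fma-rep implementation.

        # Hypothetical sketch of anchor-hash deduplication (not the real fma-rep code).
        ANCHOR_MASK = (1 << 12) - 1   # roughly one anchor every ~4 KB of input
        HASH_SPAN   = 16              # bytes hashed at each candidate position
        MIN_MATCH   = 64              # only longer matches become references

        def find_references(data: bytes):
            """Return (position, back_offset, length) tokens for long repeats."""
            table = {}                                # anchor hash -> earlier position
            tokens = []
            i = 0
            while i + MIN_MATCH <= len(data):
                h = hash(data[i:i + HASH_SPAN])       # cheap stand-in for a rolling hash
                if (h & ANCHOR_MASK) == 0:            # content-defined anchor point
                    if h in table:
                        j = table[h]
                        n = 0                         # verify and extend the match
                        while i + n < len(data) and data[j + n] == data[i + n]:
                            n += 1
                        if n >= MIN_MATCH:            # short matches stay as literals
                            tokens.append((i, i - j, n))
                            i += n
                            continue
                    table[h] = i
                i += 1
            return tokens

    Because the table keeps only one position per anchor hash, memory use stays far below an lzma-style match finder, which is the property the post relies on for multi-GB ISOs.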
  • Support for mobile devices and Android?

    Unsolved android mobile devices cloud
    4
    0 Votes
    4 Posts
    7k Views
    A
    @Mili 6 years later I see no support for .PA on Android :) Currently I use ZArchiver on Android. It does the job, but .PA usually gets a better compression ratio. Are there any plans to release an “Unrar”-like component for other archivers to use?
  • Make .PA open to other programs?

    1
    0 Votes
    1 Posts
    904 Views
    A
    Good morning, I bought PowerArchiver Toolbox because I find the .pa format brilliant! The problem with this format is that only PowerArchiver can open it, so it will never become as famous as .rar or .7z! In my opinion it is fine to keep the creation of the format exclusive, but not the extraction; otherwise it will never be used by anyone, because few people have PowerArchiver! I hope that one day other extractors will be able to extract the .pa format, so that I can create archives and send them to whoever I want!
  • PowerArchiver 2019 Toolbox International with Advanced Codec Pack

    Solved
    14
    0 Votes
    14 Posts
    4k Views
    spwolfS
    Worked fine with new codes, closing this one… thank you!
  • Poor compression of >20GB exe/msi/cab sample

    Moved Solved
    52
    0 Votes
    52 Posts
    43k Views
    D
    @nikkho said in Poor compression of >20GB exe/msi/cab sample: New record with Razor: 2,413,444 kB https://encode.ru/threads/130-PowerArchiver?p=54599&viewfull=1#post54599 Yes, while we have been bickering here about DLLs and EXEs vs native implementations, someone has actually improved something :)
  • Some test results of mp3 to .pa

    Unsolved .pa experimental
    6
    1 Votes
    6 Posts
    9k Views
    spwolfS
    @pirrbe yes, keep in mind that this works best with at least a 4-thread CPU and 64-bit, and it scales well to 8 threads too. In your case, for instance, speed could still be improved in the optimized fast modes, where we can gain a lot of speed and be on par with zstd fast. Right now, though, most of the effort goes into optimizing the stronger modes, since that is where the compression gains are. With a modern i7/Ryzen CPU, people can easily get 8 MB/s and >20% savings on 320 kbps MP3s. I need to test it on our dual-core x64 machine; my i7 limited to 2 threads still does 5 MB/s, while your dual core does 1.1 MB/s. I am sure we can optimize it with some settings.
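    As a quick sanity check on those figures, here is a back-of-the-envelope estimate in Python; only the throughput and savings numbers come from the post above, while the library size is an assumed input.

        # Rough estimate from the quoted figures: ~8 MB/s on a modern i7/Ryzen,
        # ~20% savings on 320 kbps MP3s. The 50 GB library size is hypothetical.
        library_gb = 50
        throughput_mb_s = 8.0
        savings = 0.20

        size_mb = library_gb * 1024
        hours = size_mb / throughput_mb_s / 3600
        freed_gb = library_gb * savings
        print(f"~{hours:.1f} h to recompress, ~{freed_gb:.0f} GB freed")
        # At the quoted 2-thread rate of 5 MB/s, the same job takes about 60% longer.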
  • settings for wav/sf2 files (from 17.00.68)

    6
    0 Votes
    6 Posts
    10k Views
    spwolfS
    .70 adds larger dictionary and chunk sizes in the stronger options for wav mode, and sf2 has been added to that mode as well. https://forums.powerarchiver.com/topic/5747/fast-ring-powerarchiver-2017-17-00-67-68-69-70
  • Releasing unpacking library

    Unsolved
    2
    2 Votes
    2 Posts
    6k Views
    spwolfS
    @joakim_46 said in Releasing unpacking library: Do you plan to release unpacking library, so 3rd party software can extract PA format as well? It would be great and certainly would expand the format. yes, but only after we are finished with 1.0… it is not done yet; we plan to add more codecs to it in the upcoming months, as well as optimize the current ones. Thanks!
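    No such library had been published at the time of this exchange, but to illustrate the kind of interface third-party tools usually consume (the "unrar-like" component asked about above), here is a purely hypothetical Python sketch. Every name in it is invented for illustration and is not an actual PowerArchiver API.

        # Hypothetical shape of an extraction-only library for .pa archives.
        # None of these classes or methods exist; they only show the usual
        # open / list / extract surface other archivers expect.
        from dataclasses import dataclass
        from typing import Iterator, Optional

        @dataclass
        class Entry:
            path: str          # member name inside the archive
            size: int          # uncompressed size in bytes
            packed_size: int   # compressed size in bytes

        class PaArchive:
            def __init__(self, filename: str, password: Optional[str] = None):
                self.filename = filename
                self.password = password

            def entries(self) -> Iterator[Entry]:
                """List members without extracting them."""
                raise NotImplementedError  # placeholder in this sketch

            def extract(self, entry: Entry, dest_dir: str) -> None:
                """Decompress a single member into dest_dir."""
                raise NotImplementedError  # placeholder in this sketch

        # A file manager or another archiver would then call roughly:
        #   pa = PaArchive("backup.pa")
        #   for e in pa.entries():
        #       pa.extract(e, "out/")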
  • brilliant format

    5
    2 Votes
    5 Posts
    9k Views
    M
    Nice! Thanks @davidsplash!!
  • Optimized Strong, initial tests speed/compression

    1
    1 Votes
    1 Posts
    4k Views
    spwolfS
    Hello @Alpha-Tester. Let's test the Optimized Strong methods a bit and see what works and what can be improved. The relationship between codec and filter parameters, as well as the number of threads, is a complicated one, and while we have tried to automate it in the best possible way, improvements are still possible. @skypx has a nice CPU for testing 16-thread performance, for instance. It would be interesting to see what the maximum performance is for the Optimized Strong Maximum and Ultra options, because they use different entropy models (a0 lzma, a1 or lzmarec) which provide different performance - lzmarec is much stronger but also slower to extract, which is where our parallel decode helps. Debug mode can help to log all of this.
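    For testers who want comparable numbers, a small timing harness keeps the runs consistent; the sketch below just shells out to whatever compression command line you already use, and the command entries are placeholders to fill in, not real PowerArchiver CLI syntax.

        # Minimal timing harness for comparing settings; the commands below are
        # placeholders, not actual PowerArchiver CLI flags.
        import os, subprocess, time

        RUNS = {
            "optimized-strong-maximum": ["<archiver>", "<flags for Maximum>", "testset/"],
            "optimized-strong-ultra":   ["<archiver>", "<flags for Ultra>", "testset/"],
        }

        for name, cmd in RUNS.items():
            out = f"{name}.pa"
            start = time.perf_counter()
            subprocess.run(cmd + [out], check=True)     # encode step
            elapsed = time.perf_counter() - start
            size_kb = os.path.getsize(out) // 1024
            print(f"{name}: {elapsed:.0f}s encode, {size_kb:,} kB")

    Reporting encode time together with the resulting size matches the format used in the test posts above, which makes results from different CPUs (8t vs 16t) easy to compare.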