@nikkho said in Experimental Codecs - info, updates:
@spwolf I will look for a 320 kbps test set. It is strange, but most of my downloaded music is VBR, not CBR.
Any kind of samples would be interesting, for sure…
(This article is a work in progress.)
What is fma-rep?
A deduplication filter based on anchor hashing. Technically it is LZ77, but without entropy coding, and only longer matches have a chance to be replaced with a reference.
It has much lower memory requirements than lzma, so it can be used to compensate for lzma's smaller window/dictionary size.
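For anyone curious how an anchor-hashing deduplication pass works in principle, here is a minimal sketch in Python. It is written purely from the description above, not from fma-rep's actual source; the block size, minimum match length, and token format are hypothetical values chosen for illustration.

```python
import zlib

BLOCK = 64        # hypothetical anchor/block granularity
MIN_MATCH = 512   # hypothetical: only long matches become references

def rep_encode(data: bytes):
    """Split data into ('lit', bytes) and ('ref', distance, length) tokens."""
    table = {}      # block hash -> earliest position where that block was seen
    tokens = []
    lit_start = 0   # start of the pending literal run
    i = 0
    while i + BLOCK <= len(data):
        h = zlib.crc32(data[i:i + BLOCK])
        cand = table.setdefault(h, i)
        if cand < i and data[cand:cand + BLOCK] == data[i:i + BLOCK]:
            # Verify the candidate (guards against hash collisions) and
            # extend the match as far as it goes.
            length = BLOCK
            while i + length < len(data) and data[cand + length] == data[i + length]:
                length += 1
            if length >= MIN_MATCH:
                if lit_start < i:
                    tokens.append(('lit', data[lit_start:i]))
                tokens.append(('ref', i - cand, length))  # back-reference
                i += length
                lit_start = i
                continue
        i += BLOCK
    if lit_start < len(data):
        tokens.append(('lit', data[lit_start:]))
    return tokens

def rep_decode(tokens) -> bytes:
    """Invert rep_encode: references copy bytes already written to the output."""
    out = bytearray()
    for tok in tokens:
        if tok[0] == 'lit':
            out += tok[1]
        else:
            _, dist, length = tok
            start = len(out) - dist
            for k in range(length):  # byte-wise copy handles overlapping matches
                out.append(out[start + k])
    return bytes(out)

if __name__ == '__main__':
    sample = b'HEADER' + b'X' * 10_000 + b'MIDDLE' + b'X' * 10_000 + b'END'
    assert rep_decode(rep_encode(sample)) == sample
```

Because the output is just literals plus occasional long-range references, there is no entropy-coding step; the filtered stream is handed to the actual codec afterwards, which is how it compensates for the codec's smaller dictionary.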
Examples: Official ISOs from Microsoft for Windows 10 Pro and Office 2016:
Due to the large file sizes, and the fact that fma-rep takes a lot less memory than plzma4, it is very useful for large software installation DVDs that already contain a lot of compressed data. The best approach is to use a large fma-rep1 window together with a fast codec, to achieve good compression at very fast speed.
AMD FX8320 with 16GB RAM and SSD
Office 2016 Pro ISO - 1,985,392 kB
.pa (Zstandard2, x64flt, bcj2, fma-rep1) - 36s encode, 37s decode - 1,551,741 kB
.rar (Normal) - 128s encode, 13s decode - 1,892,471 kB

Windows 10 Pro ISO
.pa (Zstandard2, x64flt, bcj2, fma-rep1) - 87s encode, 77s decode - 3,577,849 kB
.rar (Normal) - 314s encode, 27s decode - 3,838,188 kB

SharePoint Server 2013
.rar (Normal) - 369s encode, 15s decode - 2,269,782 kB
.zip (WZ 21 Normal) - 47s encode, 13s decode - 2,305,755 kB
.pa (Zstandard2, x64flt, bcj2, fma-rep1) - 61s encode, 41s decode - 1,955,468 kB
@nikkho said in Poor compression of >20GB exe/msi/cab sample:
New record with Razor: 2,413,444 kB
Yes, while we have been bickering here about DLLs and EXEs vs. native implementations, someone has actually improved something.
It is nice to know that it will be implemented in the future. I hope that does not mean a long wait. In the meantime, it seems I should keep my files stored in the cloud, open them on the computer, and keep a backup copy of them in another format that PowerArchiver can create, such as 7-Zip, since that one happens to have apps that can open it on Android devices. Having two sets of compressed files in my cloud takes up some of the space. It will be of much help when it is supported on mobile devices. Thanks anyway.
@pirrbe Yes, keep in mind that this works best with at least 4-thread CPUs and 64-bit, but it scales well to 8 threads too. In your case, for instance, speed can still be improved in the Optimized Fast modes, where we can gain a lot and match zstd's fast speed.
Right now, a lot of effort is going into optimizing the stronger modes, since that is where the compression gains are.
With a modern i7/Ryzen CPU, people can easily get 8 MB/s and >20% compression on 320 kbps MP3s.
I need to test it on our x64 dual-core. My i7 limited to 2 threads still does 5 MB/s, while your dual-core does 1.1 MB/s. I am sure we can optimize it with some settings.
@joakim_46 said in Releasing unpacking library:
Do you plan to release an unpacking library, so 3rd-party software can extract the PA format as well? It would be great and would certainly expand the format's reach.
Yes, but only after we are finished with 1.0… It is not done yet; we plan to add more codecs to it in the upcoming months, as well as optimize the current ones. Thanks!
Hello @Alpha-Tester. Let's test the Optimized Strong methods a bit and see what works and what can be improved. The relationship between codec and filter parameters, as well as the number of threads, is a complicated one, and while we have tried to automate it in the best possible way, improvements are still possible.
@skypx has a nice CPU for testing 16-thread performance, for instance. It would be interesting to see what the maximum performance is for the Optimized Strong Maximum and Ultra options, because they use different entropy models (a0 lzma, a1 or lzmarec) that provide different performance: lzmarec is much stronger but also slower to extract, which is where our parallel decode helps (see the sketch below).
Debug mode can help to log all of this.
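To illustrate why parallel decode can offset a slower entropy coder, here is a rough, hypothetical sketch of block-parallel decompression; it is not PA's actual implementation. It assumes an archive stored as independently compressed blocks, with zlib standing in for the real codec; zlib releases the GIL during decompression, so the threads genuinely overlap.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 4 * 1024 * 1024  # hypothetical: 4 MiB per independently coded block

def compress_blocks(data: bytes):
    """Compress each block on its own, so every block can be decoded independently."""
    return [zlib.compress(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]

def parallel_decompress(blocks, threads: int = 8) -> bytes:
    """Decode all blocks concurrently; zlib drops the GIL, so threads really overlap."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return b''.join(pool.map(zlib.decompress, blocks))

if __name__ == '__main__':
    payload = bytes(range(256)) * 100_000   # ~25 MB of sample data
    blocks = compress_blocks(payload)
    assert parallel_decompress(blocks, threads=8) == payload
```

With independent blocks, decode time scales roughly with the number of blocks divided by the number of threads, which is why a stronger but slower-to-extract coder can still feel fast on many-core machines.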