Compression through lookups


This topic contains 8 replies, has 2 voices, and was last updated by Brad G. 1 year, 3 months ago.

Viewing 9 posts - 1 through 9 (of 9 total)
  • #5978 Reply

    Jeremy Swartwood (Kaltkinrikl)

    I don’t know enough about your algorithm, its abilities, or the length of a book reference.

    Could something like this be used to create a compression format?

    Existing systems use lookups to compress commonly used words/phrases in a file.

    To compress:

    1. Take 3200 characters of a file.
    2. Find the shelf/book/page/etc. that it is on.
    3. Create a new file with the reference data, separated by some character.

    To decompress:

    1. Read the file and find the real 3200-character output.
    2. Create a file with the real characters.

    But if your shelf/book/page reference is more than 3200 characters, there would be no compression.
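    Jeremy's round trip can be sketched with a toy model. The assumption here (mine, not the site's real API) is that the library is just a bijection between page texts and base-29 indices, so the "address" of a chunk is simply the chunk read as a base-29 number:

    ```python
    ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # the library's 29 symbols

    def text_to_index(text):
        """Compress step 2: find the 'address' (index) of a chunk of text."""
        n = 0
        for ch in text:
            n = n * 29 + ALPHABET.index(ch)
        return n

    def index_to_text(n, length):
        """Decompress steps 1-2: recover the characters from the index."""
        chars = []
        for _ in range(length):
            n, digit = divmod(n, 29)
            chars.append(ALPHABET[digit])
        return "".join(reversed(chars))

    chunk = "hello world"  # stand-in for a 3200-character chunk
    index = text_to_index(chunk)
    assert index_to_text(index, len(chunk)) == chunk  # lossless round trip
    # The catch: the index needs about one base-29 digit per character of
    # text, so the "reference" is no shorter than the chunk it names.
    ```

    The round trip is lossless, but as the thread goes on to explain, the reference never comes out shorter than the data on average.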

    #5979 Reply


    The reference data has many more characters than 3200, so this is not a way to compress data but to inflate it.

    #5982 Reply

    Jonathan Basile

    Yeah, it definitely doesn’t perform any compression. But – if you were to use something like this to save the URLs…it wouldn’t be performing any compression, but someone else would be doing your storage for you.

    #14158 Reply

    Gertjan van den Broek

    I had a very similar idea to Jeremy, and I’ve been thinking about it for a very long time now.

    I agree it’s not really compression; it’s more like storing all your information in a generated sequence, where all you need is the address of the first and last character in that sequence.

    It might inflate your data when working with 3200 characters.
    But if you ramp up the sequence to, say, something the size of a few gigabytes (10^9 characters),
    then wouldn’t addresses start becoming smaller than the data they represent?

    I’d love to give this thing a shot, but sadly I don’t know what kind of algorithms I’d need to use to reverse-find a gigabyte worth of data in an infinite pool of data.

    #14189 Reply


    Your address only links to the content of one single page, which has 3200 characters. That means that if you want to link multiple gigabytes, it will require lots of different pages, and so lots of different addresses, all of which are over 3200 characters in length.

    If, on the other hand, you had a version of the library where every page is gigabytes long, you would have way more pages than before. (Every character of length you add to the pages multiplies the number of possible pages by the number of characters in the alphabet.) That means the number of pages grows higher and higher, and thus you will need longer and longer addresses to map all of those pages.

    You will find that the shortest possible address format that uniquely identifies a page of 3200 bytes among all others is 3200 bytes in length. If the page size is 10 gigabytes, your shortest possible address format will be… 10 gigabytes. Do you see the problem here?
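    This pigeonhole argument is easy to check on a toy library (the 3-symbol alphabet and 4-character pages are just illustrative numbers):

    ```python
    def min_addr_len(alphabet_size, page_len):
        """Shortest address length (in the same alphabet) that can name
        every one of the alphabet_size**page_len distinct pages."""
        num_pages = alphabet_size ** page_len
        addr_len = 0
        while alphabet_size ** addr_len < num_pages:
            addr_len += 1
        return addr_len

    # 3**4 = 81 distinct pages need 4-symbol addresses -- never fewer:
    assert min_addr_len(3, 4) == 4
    # Scaling the page up just scales the address up with it:
    assert min_addr_len(29, 3200) == 3200
    ```

    Whatever the alphabet and page size, the shortest complete address format always matches the page length, which is exactly the problem described above.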

    #14549 Reply


    It’s such an intriguing idea though. Suppose an algorithm existed that could shuffle the 29^3200 values losslessly, so that the resulting set is still exactly 29^3200 in size. For any given string there would then be a chance that it gets compressed by up to 99.9%. But that probably breaks logic or something. Of course, on the other hand, shorter messages would occasionally get inflated.

    #14552 Reply


    Well, there is certainly some strings that can be “compressed” like this in the library, for example 0-w1-s1-v01:1 is way shorter than the string it represents. This is essentially because leading zeros are emitted. So, to get a sort-of compression going, you’d need to generate a special library of babel like formula, that would have this string early on.

    This may be possible, but it is implausible, because the formula would probably be bigger than the string in the first place, which is a dealbreaker since you’d have to store/send the formula along with the “compressed” data, and generating a special formula for every string you want to compress sounds like a lot of work. Conventional compression algorithms are faster, more reliable, and, presumably, more effective.
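    The leading-zeros effect can be seen in a toy base-29 model (treating the library's 'a' as digit 0 is my simplification, not the site's actual encoding): only strings that happen to start with the zero symbol get a shorter index.

    ```python
    ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 'a' plays the role of digit 0

    def index_digits(text):
        """Base-29 digits needed for text's index once leading zeros drop."""
        n = 0
        for ch in text:
            n = n * 29 + ALPHABET.index(ch)
        digits = 0
        while n:
            n //= 29
            digits += 1
        return max(digits, 1)

    # Leading 'a's vanish from the index, so this string "compresses":
    assert index_digits("aaaaahello") < len("aaaaahello")
    # Move the 'a's to the end and the saving disappears:
    assert index_digits("hello aaaa") == len("hello aaaa")
    ```

    Only the tiny fraction of strings that begin with runs of the zero symbol benefit, which is why a fixed formula can't compress in general.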

    #14607 Reply


    >So, to get a sort-of compression going, you’d need to generate a special library of babel like formula, that would have this string early on.

    Like you say, that would defeat the whole purpose. The idea is that the magic formula doesn’t have to change. But then counting gets in the way: for 3200-character strings there are at most 29^1600 keys shorter than 1600 characters, so only a vanishing fraction of strings could ever get one.

    Probably impossible. Still fun to think about. Of course most things can already be compressed greatly with existing algorithms, because data has patterns.

    #21629 Reply

    Brad G.

    This was one of my very first thoughts when I saw this tool. I don’t see any insurmountable problems with using this as a compression tool.

    If you made a new library algorithm that contained all ASCII characters, then you could compress ANY file that can be expressed in ASCII (anything that isn’t some sort of machine code, I should think) into some large number of strings, each about 15 characters long, comprising a number for the starting point on the page (124, for example), the page address (fde, 1r5, 5va, O3f, for example), and the length of the included text (1000, for example). Once you have finished your first round of compression, you can then compress your chain of compressed text, since these are all stored as ASCII strings, into smaller and smaller chains, depending on the number of compression levels involved.

    I think I’m going to try doing this manually, to see where it takes me.
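    Before trying it manually, the 15-character references can be checked against the same counting argument as earlier in the thread (the 64-symbol reference alphabet below is an assumption; the real address alphabet may differ):

    ```python
    import math

    # A 15-character reference over a 64-symbol alphabet can distinguish
    # at most 64**15 chunks, but 1000 characters of ASCII allow 128**1000
    # distinct chunks.
    references = 64 ** 15
    chunks = 128 ** 1000

    # Compare orders of magnitude (math.log10 accepts big integers):
    deficit = math.log10(chunks) - math.log10(references)
    assert deficit > 2000  # chunks outnumber references by over 10**2000
    ```

    Almost no 1000-character chunk can have a 15-character reference, so the short addresses seen on example pages cannot cover arbitrary files, and repeated rounds inherit the same limit.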
