Babel Image Archives


This topic contains 51 replies, has 2 voices, and was last updated 1 month, 1 week ago.

Viewing 15 posts - 16 through 30 (of 52 total)
  • Author
  • #1891 Reply


    I wonder if, with the downloadable version, you would be able to run it through software that would specifically look for “stuff” rather than nonsense. Somewhat like facial recognition software, but looking for faces, shapes, and other things that aren’t just white noise. Maybe running it much, much faster than it does now, with a program that would automatically screenshot and save images, could yield some sort of result. Hopefully I can put something into motion and see what kind of images I can find! I think you’ve just inadvertently consumed all my free/bored time!
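    For anyone who wants to try this, one crude way to screen out white noise (a hypothetical stdlib-Python sketch, not anything the site provides) is to measure how much neighboring pixels differ; uniform noise has almost no spatial structure, while images with shapes in them do:

```python
import random

def mean_neighbor_diff(pixels, width):
    """Mean absolute difference between horizontally adjacent pixels.
    Uniform noise averages about a third of the value range; images
    with smooth regions or shapes score far lower."""
    diffs = [abs(pixels[i] - pixels[i + 1])
             for i in range(len(pixels) - 1)
             if (i + 1) % width != 0]  # skip pairs that wrap across rows
    return sum(diffs) / len(diffs)

def looks_like_noise(pixels, width, levels=4096, threshold=0.2):
    # Normalize by the color depth; 0.2 is an arbitrary illustrative cutoff.
    return mean_neighbor_diff(pixels, width) / levels > threshold

random.seed(0)
W, H = 64, 64
noise = [random.randrange(4096) for _ in range(W * H)]
gradient = [(x * 4096) // W for y in range(H) for x in range(W)]  # smooth ramp

print(looks_like_noise(noise, W))     # True: reject as white noise
print(looks_like_noise(gradient, W))  # False: structured, worth saving
```

    The catch is that almost every image in the archive *is* noise, so even a fast filter like this would run essentially forever before stumbling on anything recognizable.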

    #2472 Reply


    Is there any way to put a number as high as 10^961755 into perspective??? …it’s just crazy!
    And is it possible to download the pictures as PNGs?
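    For what it’s worth, the exponent itself can at least be reproduced. A quick back-of-the-envelope check, assuming the archive’s parameters are 416×640 pixels at 4096 colors (my reading of the site, so treat it as an assumption), shows where 10^961755 comes from:

```python
import math

# Assumed archive parameters: every image is 416 x 640 pixels,
# each pixel one of 4096 colors.
PIXELS = 416 * 640   # 266,240 pixels
COLORS = 4096

# Total distinct images = COLORS ** PIXELS. Count its decimal digits
# without materializing the nearly-million-digit integer.
digits = math.floor(PIXELS * math.log10(COLORS)) + 1
print(digits)  # 961755
```

    For scale, the number of atoms in the observable universe is usually put around 10^80, so no physical comparison even gets close; the exponent is really the only graspable part.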

    #2558 Reply

    Jonathan Basile

    Hey Lolo,

    Right now I don’t have a PNG option, unfortunately. I made something like that when I was testing the site – but simplicity won out. Maybe I’ll add that to a future update.

    I don’t think there is any way to put the number in perspective – like you say, it’s just too large!

    #2576 Reply


    I really like the idea behind this site: that all information in the universe pre-exists like this in some kind of virtual form. The concept is old (like everything existing in the decimals of pi), but the way you make it searchable makes it easier for people to understand. Very good work! I have one question though. How does the bookmark feature work for images? Do you store the image itself, or the input data for your algorithm to find that exact image? If you store the input data, does that take up more space than the image itself, and does it always require the same amount of data? For example: could a 1MB image be found using a 500kb string if you’re lucky? If that’s the case, then you have just invented the most incredible compression algorithm in the world..

    #2577 Reply

    Jonathan Basile

    Only if you’re veeeeeeerrrrryyyyyyy lucky – it’s possible but the chances are infinitesimal. The input data is almost always greater than the size of the image

    #2582 Reply


    You use a base-10 number to address your images. What happens if you use base-16 (hexadecimal)? Or why not base-2048, which would be the whole UTF-8 table? How much smaller would that string be? According to my test your images are about 200kb and the string about 400kb zipped. If your string size were just 1/3 of the original, your algorithm would be more efficient than JPEG. I’m probably missing something; it would be too easy..

    #2588 Reply


    Regarding my previous post: maybe ASCII is better, because it has 256 characters (one byte per char). UTF-8 is two bytes I think. So base-256 instead of base-10 should compress the index a bit.
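    The effect of the base is easy to quantify (a purely illustrative sketch, nothing to do with the site’s actual code): writing the same number in base 256 instead of base 10 shrinks it by a constant factor of log(256)/log(10) ≈ 2.4, but the index can never get shorter than the information the image itself carries:

```python
def digits_in_base(n, base):
    """Exact count of base-`base` digits needed to write n."""
    count = 0
    while n:
        n //= base
        count += 1
    return count

n = 4096 ** 1000               # stand-in for a large image index (12,000 bits)
d10 = digits_in_base(n, 10)    # 3613 decimal digits
d256 = digits_in_base(n, 256)  # 1501 base-256 digits (i.e. bytes)
print(d10 / d256)              # roughly 2.4 = log(256) / log(10)
```

    So base-256 would indeed make the printed index about 2.4 times shorter than decimal, but that is a fixed change of notation, not compression: the index still needs roughly as many bytes as the raw pixel data.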

    #2609 Reply


    I see a lot of people talking about using such an algorithm for data compression, but this would be completely infeasible. For any given amount of data, the index for the location of that data will average out to about the same size. Sure, a few files might have an index smaller than the data, but this is counteracted by those with indexes larger than the data. In the end you get a net gain of zero, and use a lot of processing power to get there. In practice this is nothing but an encoding scheme, like base64.
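    A small counting sketch (hypothetical, stdlib Python) makes the pigeonhole point concrete: there simply aren’t enough short bit strings to serve as distinct indexes for all the longer files:

```python
# Pigeonhole sketch: of the 2**n possible n-bit files, how many could
# even in principle receive a distinct index shorter than n bits?
def shorter_strings(n_bits):
    # count of all bit strings of length 0 through n_bits - 1
    return sum(2 ** k for k in range(n_bits))  # equals 2**n_bits - 1

for n in (8, 16, 24):
    # At least one n-bit file is always left without a shorter index,
    # and that's the absurd best case where every shorter string is used.
    print(n, 2 ** n - shorter_strings(n))
```

    In any realistic scheme most files end up with indexes longer than themselves, so the average saving is zero at best, exactly as described above.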

    #2780 Reply

    Chris A

    Hello Mr. Basile!

    To what extent does the number of possible files your algorithm generates affect the processing time that it takes to fetch the file location from a search request?

    #2784 Reply

    Jonathan Basile

    That’s a major factor in setting the limits I did – having every possible page is pretty fast, as you can see, whereas every possible book is just too slow for the internet. The Image Archives are already just a little bit sluggish, and they are about half the size of the full-book library, which is why I’m going to make that downloadable. The letters/pixels are created by a function that is essentially a base-conversion, so every one you add adds an operation.

    #2808 Reply


    So let me get this straight, you’ve got an image library that contains
    -every picture anyone has or will ever draw?
    -every person in the world in every possible situation at every possible location?
    -every alien species that exists in the universe?
    -every top secret document, even the ones that were destroyed?

    This has pretty much limitless potential. There’s now no reason to have any internet image libraries at all, for instance, when all you need to do is reference an image location in the Babel archives. But more importantly, if this can be hooked up to image recognition software, then rather than searching the library with an image and getting that exact image back, you might be able to search for a typed term. That would be handy, since we wouldn’t need to create images anymore, just find where the masterpiece you were about to create is located.

    #2809 Reply


    You should really take this idea to the patent office and then go to a major corp. like Microsoft, Apple or Google.

    I haven’t really read into the specs, and you have probably heard this before.

    This technology has the possibility of virtually unlimited picture storage. If the core program can be expanded to accommodate larger pictures and a deeper color depth, this would be a revolutionary technology.

    Even at the cost of a GB for the core program, you would still only have to save a bookmark of about a KB.

    Best regards

    #2810 Reply


    Oh yeah, google deep dream art processed.…62328493

    #2813 Reply

    Andrew Gibson

    I think there is a lot of confusion regarding how this actually works. For example, some people seem to think that the bookmark you generate for a file isn’t actually stored on a server somewhere. Clearly it must be, as all of the images I have matched in the library have an index of millions of digits.

    The issue, as I see it at the moment, is that the index data for the location of an image or file in the soup will nearly always be larger than the data set it points to. Therefore, there can never be any magical storage in the near-infinite soup. The index can be seen as a hash for the actual data, image or book.

    So what the Library of Babel actually is, is a hashing algorithm dressed up in philosophical clothes. This doesn’t detract from the beauty of the idea or the skill of its implementation. It just means that nothing truly useful can ever be done with it. For example, it would not be possible to interrogate the soup for anything meaningful, or to use it as an infinite storage system for MP3 files or Blu-ray discs, simply because there would be no benefit in terms of file sizes.

    #2821 Reply

    Jonathan Basile

    Hey Andrew,

    I’ve certainly never tried to claim that the library is a compression algorithm. We have a tendency to only recognize things as having a purpose in our society if they perform some work or make some money – that is, if they’re practical or profitable. This project interests me (along with many other programming projects like it, that tend to classify themselves as generative art or something like that) because it subverts these expectations. It certainly won’t perform work in these practical senses, but whether or not there is use to objects of art or contemplation is still an open question.
