How to Make a Backup of a FFFFOUND Account

Andy Baio:

Next month, two seminal image-sharing communities, FFFFOUND! and MLKSHK, will close their doors within a week of each other.

[…]

These two communities shared a lot in common. Both were very creative, focused on curating imagery, but how they’re shutting down is very, very different — how it was communicated, the tools for saving your contributions, and the future of the community.

FFFFOUND provides no export or backup tools. A handful of user-created scraping scripts exist for those tech-savvy enough to use them, but in general, most users will be unable to preserve their contributions.

More upsetting is the fact that FFFFOUND only allows Google, Bing, and Yahoo to crawl their archives in their robots.txt file, which outlines which crawlers can access their site and how frequently.
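For context, a robots.txt that admits only those three crawlers looks something like this. To be clear, this is an illustrative sketch of the policy Baio describes, not a copy of FFFFOUND’s actual file:

```
# Illustrative robots.txt that admits only the big three search crawlers
User-agent: Googlebot
Disallow:

User-agent: bingbot
Disallow:

# Slurp is Yahoo's crawler
User-agent: Slurp
Disallow:

# Everyone else, archival crawlers included, is locked out
User-agent: *
Disallow: /
```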

I frequently used FFFFOUND between 2008 and 2013, bookmarking nearly two thousand images in that time. I somehow accumulated 703 followers on the site, and I loved its close-knit communal feeling. It was a really cool little service — like Pinterest without the commercial focus. I know a lot of photographers, designers, and other creative types who used it for collecting inspiration wherever they found it on the web. So you can imagine how much it stings not to have an export feature.

I was determined to create a backup of my collection tonight. I tried fiddling with wget first, but the site is built in such a way that scraping it is beyond my expertise — though, much to my amazement, it doesn’t appear to be against the site’s terms of service. I really didn’t want to manually create a webarchive file of every page in my profile.
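For the curious, a recursive mirror along these lines was my starting point. The flags are standard wget options, but it didn’t get me a usable archive, and I can’t promise it will work any better for you; USERNAME is obviously a placeholder:

```
# Attempt a polite recursive mirror of a profile
wget --mirror --page-requisites --convert-links --wait=2 \
    "http://ffffound.com/home/USERNAME/found/"
```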

Thankfully, Baio sent me links to a few scripts for saving FFFFOUND profiles. Because I’m a complete idiot when it comes to command-line software that requires a bunch of dependencies, I’ve been struggling with this all evening.

But, at last, I think I found a relatively straightforward way to archive the images in your FFFFOUND profile on MacOS (steps 3 through 7 are also condensed into a single terminal session after the list):

  1. Open Safari and copy Aaron Hildebrandt’s excellent ffffind.py script.

  2. Open your favourite plain text editor and paste it into a new file. Save it as ffffind.py in the directory of your choice. I went with a new “ffffound” directory in my Pictures folder.

  3. Open a Terminal window. You’re going to download and install a copy of the Python virtualenv package by running the command sudo pip install virtualenv. You’ll need to type your administrator-level password to do this.

    I’ve found installing it at the system level is more reliable than installing it at the user level, likely because of SIP in recent versions of MacOS. You can try installing it at the user level by omitting sudo, however.

  4. Once that’s installed, navigate to the folder you created earlier for this project. That’s cd ~/Pictures/ffffound for me.

  5. We’re going to set up a virtual environment. First, run virtualenv ffffind to get the basics set up. Next, type source ./ffffind/bin/activate and press return to enter the virtual environment. The prompt should now begin with (ffffind).

  6. Next, within this virtual environment, we’ll need to install the latest release of Beautiful Soup, an HTML parsing library. To install it, just run pip install BeautifulSoup and wait until it confirms that it has been installed.

  7. Now, just run python ffffind.py USERNAME, substituting your own FFFFOUND username. Sit back, because this is going to take a while.
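If it helps, here are steps 3 through 7 condensed into a single terminal session; USERNAME again stands in for your own FFFFOUND username, and the directory is the one I chose in step 2:

```
sudo pip install virtualenv       # step 3: system-level install; asks for your password
cd ~/Pictures/ffffound            # step 4: wherever you saved ffffind.py
virtualenv ffffind                # step 5: create the virtual environment...
source ./ffffind/bin/activate     # ...and activate it; the prompt gains a (ffffind) prefix
pip install BeautifulSoup         # step 6: the Beautiful Soup release the script expects
python ffffind.py USERNAME        # step 7: download everything; this takes a while
```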

There are, of course, a few caveats with this script. First, while I don’t believe it violates FFFFOUND’s terms of service, please don’t get annoyed at me if that changes. Besides, they’re the ones who didn’t provide an export function.

Second, while this will give you a copy of every image you saved to FFFFOUND, it won’t preserve page numbers or creation dates. If the order in which you saved the images is important to you, you’ll have to try to get ffffexport to work for you. It only downloaded my most recent 32 images, and I’m not sure why.

Third, ffffind doesn’t work perfectly. I saved a few images from a museum’s search engine, and their URLs included a .exe in the string, which confused ffffind so much that it stopped working. The easiest way to resolve this is to open the FFFFOUND page it stalled on, save that image manually, and then delete it from your FFFFOUND. Unfortunately, the ffffind script has no provision for restarting on a specific page, so you’ll have to run it again from the start.
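If you’re comfortable poking at the script itself, one workaround is to make the per-image download fault-tolerant, so a bad URL gets logged and skipped, and a rerun picks up where it left off instead of starting over. To be clear, this is only a sketch of the shape such a patch might take, not Hildebrandt’s actual code; the save_image function and its arguments are hypothetical stand-ins for whatever ffffind.py really calls them:

```python
# Hypothetical, fault-tolerant stand-in for the script's download step.
# save_image() is not a real ffffind.py function; it just shows the idea.
# Python 2 syntax, to match the era of the script and BeautifulSoup.
import os
import urllib

def save_image(url, folder):
    filename = os.path.join(folder, url.split("/")[-1])
    if os.path.exists(filename):
        # Already fetched on a previous run, so a rerun skips it.
        print "skipping %s" % filename
        return
    try:
        urllib.urlretrieve(url, filename)
    except Exception as e:
        # A malformed URL, like the .exe ones that tripped me up,
        # gets logged and skipped instead of halting the whole run.
        print "failed on %s: %s" % (url, e)
```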

I hope this guide is helpful if you’re a FFFFOUND member hoping to save your bookmarks from annihilation. Many, many thanks to Andy Baio and Aaron Hildebrandt.

Update: I tried to make a Workflow for this but couldn’t. Dean Young made one quickly, though, and it seems to work really well. It only scrapes the FFFFOUND-cached versions of the images, and you may wish to adjust the /post/ part of the URL to /found/ for a more complete archive, but if you don’t want to mess around with the Python nonsense above, it’s a terrific option.