Via Ajaxian, just saw an announcement for Persevere, a network-centric, JSON-based generic storage engine. It features:
- A REST-based interface over regular old HTTP
- JSON as the native data format going in and out, including circular references and such
- Search interface based around JSONPath
- RPC interface based on JSON-RPC
- Seemingly buzzword compliant across the board
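To make those buzzwords a little more concrete, here's a rough sketch (in Python, with made-up resource paths and method names — this is the general shape of REST + JSON-RPC, not Persevere's actual API) of what talking to a store like this might look like:

```python
import json

# Hypothetical resource path -- not necessarily Persevere's real URL scheme.
resource = "/Object/hours"

# The REST side: an update is just an HTTP PUT of a JSON body to that URL.
put_body = json.dumps({"monday": "8am-midnight", "sunday": "noon-midnight"})

# The RPC side: a method call wrapped in a JSON-RPC 1.0 envelope,
# POSTed to the same resource. "reopen" is a made-up method name.
rpc_call = json.dumps({
    "id": 1,                   # lets the client match responses to requests
    "method": "reopen",
    "params": ["2008-03-17"],  # positional arguments, per JSON-RPC 1.0
})

print(put_body)
print(rpc_call)
```

The appeal is that both halves are just JSON over plain HTTP, so anything that can make an HTTP request — a browser, curl, a cron job — is a full-fledged client.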
I’ve been thinking about these sorts of servers a lot lately (CouchDB and StrokeDB are two others) in the context of the “not-the-catalog” data we track here at the library.
For some stuff, clearly we need the power and speed of a real database. That power and speed isn’t free, though — you have to set up the tables, map relationships, build an interface on top of it, etc. While it’s not rocket science by any stretch of the imagination, it’s a lot of screwing around, involves a few layers of security, and has a friendly red sign on the door that reads “Programmers only, please.”
For other data, though, a structured or semi-structured data store based on a plain text format like JSON would be great. Since everything is a URL, we can handle security at the HTTP-auth/authz level. Library hours, lists of databases we subscribe to, staff directory data — these are data that could, if we wanted, be moved into a generic store like this.
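And handling security at the HTTP layer really is that cheap — it’s just standard HTTP Basic auth (or whatever your web server already speaks), nothing specific to the store itself. A minimal sketch of what the client sends:

```python
import base64

# Standard HTTP Basic auth: base64("user:password") goes in the
# Authorization header. "staff"/"sekrit" are placeholder credentials.
credentials = base64.b64encode(b"staff:sekrit").decode("ascii")
auth_header = "Authorization: Basic " + credentials

# The web server (or a proxy sitting in front of the store) can then
# allow or deny each request per-URL -- no application-level security
# code needed in the data store at all.
print(auth_header)
```

Since every record is a URL, “who can see the staff directory” becomes a one-line rule in the server config rather than code somebody has to write and maintain.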
This is the flip-side of my last post. We’re not talking about hard-core, multiply-linked, core-business metadata. For that, we need ridiculously smart people figuring out how to best leverage the, say, 8 million MARC records we’ve got lying around. But for other stuff…this seems really, really cool.