A memcached client library.
A client library for a Memcached cluster. Memcached is an in-memory key-value store typically used as a distributed, shared cache. Clients connect to a group of Memcached servers and perform out-of-band caching of things like SQL results, rendered pages, or third-party API responses.
It supports the binary Memcached protocol and SASL authentication. No support for the ASCII protocol is provided. It supports connecting to a single Memcached server or to a cluster of servers; when connecting to a cluster, consistent hashing is used to route requests to the appropriate server. Timeouts, retrying failed operations, and failover to a different server are all supported.
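To make the routing idea concrete, here is a toy sketch of consistent hashing (not the scheme this library actually uses): servers are placed on a hash ring using a simple FNV-1a hash, and a key is routed to the first server at or after its own hash point, wrapping around the ring.

import qualified Data.Map.Strict as Map
import           Data.Bits       (xor)
import           Data.Char       (ord)
import           Data.Word       (Word64)

-- Toy 64-bit FNV-1a hash over a String; a real client hashes the raw key bytes.
fnv1a :: String -> Word64
fnv1a = foldl step 14695981039346656037
  where step h c = (h `xor` fromIntegral (ord c)) * 1099511628211

-- The ring maps hash points to server names. Real implementations place
-- many virtual nodes per server for a more even key distribution.
type Ring = Map.Map Word64 String

mkRing :: [String] -> Ring
mkRing servers = Map.fromList [ (fnv1a s, s) | s <- servers ]

-- Route a key to the first server at or after its hash, wrapping around.
route :: Ring -> String -> String
route ring key =
    case Map.lookupGE (fnv1a key) ring of
        Just (_, server) -> server
        Nothing          -> snd (Map.findMin ring)

main :: IO ()
main = do
    let ring = mkRing ["mc1:11211", "mc2:11211", "mc3:11211"]
    mapM_ (\k -> putStrLn (k ++ " -> " ++ route ring k))
          ["user:1", "user:2", "session:abc"]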
Complete coverage of the Memcached protocol is provided except for multi-get and other pipelined operations.
Basic usage is:
import qualified Database.Memcache.Client as M
mc <- M.newClient [M.ServerSpec "localhost" "11211" M.NoAuth] M.def
M.set mc "key" "value" 0 0
v <- M.get mc "key"
You should only need to import Database.Memcache.Client, but for now other modules are exposed.
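A slightly fuller, hypothetical example in the same style, connecting to a cluster of several servers: the host names are made up, and get is assumed to return Maybe (Value, Flags, Version). Keys and values are ByteStrings, so OverloadedStrings keeps the literals readable.

{-# LANGUAGE OverloadedStrings #-}

import qualified Database.Memcache.Client as M

main :: IO ()
main = do
    -- Requests to this client are spread across the listed servers.
    mc <- M.newClient
            [ M.ServerSpec "mc1.example.com" "11211" M.NoAuth  -- hypothetical hosts
            , M.ServerSpec "mc2.example.com" "11211" M.NoAuth
            ] M.def

    -- set key value flags expiration (0 = never expire)
    _ <- M.set mc "user:42" "Alice" 0 0

    -- A get is either a miss or the value with its flags and version.
    mv <- M.get mc "user:42"
    case mv of
        Nothing            -> putStrLn "miss"
        Just (value, _, _) -> print value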
memcache: Haskell Memcached Client
Licensing
This library is BSD-licensed.
Tools
This library also includes a few tools for manipulating and experimenting with memcached servers.
OpGen
-- A load generator for memcached. It doesn't collect timing statistics; other tools like mutilate already do that very well. This tool is useful in conjunction with mutilate.
Loader
-- A tool to load random data of a certain size into a memcached server. Useful for priming a server for testing.
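As a rough illustration of the kind of thing Loader does (this is not the tool's actual code), the sketch below primes a local server with fixed-size filler values through the client API shown earlier; the key pattern, item count, and value size are arbitrary.

{-# LANGUAGE OverloadedStrings #-}

import           Control.Monad            (forM_, void)
import qualified Data.ByteString.Char8    as B8
import qualified Database.Memcache.Client as M

main :: IO ()
main = do
    mc <- M.newClient [M.ServerSpec "localhost" "11211" M.NoAuth] M.def
    let valueSize = 1024                       -- bytes per value (arbitrary)
        payload   = B8.replicate valueSize 'x' -- filler data, not random
    forM_ [1 .. 10000 :: Int] $ \i ->
        -- made-up key pattern; flags 0, never expire
        void (M.set mc (B8.pack ("load:" ++ show i)) payload 0 0)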
Architecture Notes
We're relying on Data.Pool for thread safety right now, which is fine, but it is a blocking API: when we grab a socket (withResource), we block any other requests being sent over that connection until we get a response. That is, we can't pipeline.
Using multiple connections through the pool abstraction is an easy way to solve this, and perhaps the right approach. But we could also implement our own pool abstraction that allows pipelining. This wouldn't be a pool abstraction so much as round-robining over multiple connections for performance.
Either way, a pool is fine for now.
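The round-robin alternative mentioned above might look roughly like the following generic sketch (illustration only, not code from this library): keep several connections, each guarded by its own MVar, and hand them out in turn.

import Control.Concurrent.MVar (MVar, newMVar, withMVar)
import Data.IORef              (IORef, newIORef, atomicModifyIORef')

-- A set of connections used in round-robin order; works for any
-- connection type. Assumes at least one connection.
data RoundRobin conn = RoundRobin
    { rrConns :: [MVar conn]  -- one lock per connection
    , rrNext  :: IORef Int    -- index of the connection to hand out next
    }

newRoundRobin :: [conn] -> IO (RoundRobin conn)
newRoundRobin conns = do
    locks <- mapM newMVar conns
    next  <- newIORef 0
    return (RoundRobin locks next)

-- Run an action with the next connection in turn, holding its lock.
-- Each request still blocks its own connection (no pipelining), but
-- concurrent requests spread over the connections instead of queueing
-- on a single one.
withConn :: RoundRobin conn -> (conn -> IO a) -> IO a
withConn rr action = do
    let n = length (rrConns rr)
    i <- atomicModifyIORef' (rrNext rr) (\j -> ((j + 1) `mod` n, j))
    withMVar (rrConns rr !! i) action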
Other clients
Get involved!
We are happy to receive bug reports, fixes, documentation enhancements, and other improvements.
Please report bugs via the GitHub issue tracker.
Master git repository:
git clone https://github.com/dterei/memcache-hs.git
Authors
This library is written and maintained by David Terei ([email protected]).
Contributions have been made by the following great people:
- Alfredo Di Napoli ([email protected])
- Amit Levy
- Steven Leiva