LDCache implements a modular architecture that allows different kinds of backends to be used for storing cache entries. Some of these backends are purely in-memory (i.e. they don’t survive a restart and might even expire when memory runs out), others are file-based or even database-backed. In principle, an LDCache backend needs to be able to store the following two kinds of data: the caching metadata for each resource (e.g. when it was last retrieved and when the cache entry expires), and the cached triples retrieved for that resource.
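To illustrate what this means, the following sketch shows the shape of such a cache record for a single resource. It is a hypothetical simplification for illustration only, not an actual LDCache class:

// Hypothetical sketch, not part of the LDCache API: the two kinds of data
// a backend has to keep for every cached resource.
class ExampleCacheRecord {
    // 1. caching metadata for the resource
    java.util.Date lastRetrieved;     // when the resource was last fetched from the Web
    java.util.Date expiryDate;        // when the cached copy should be refreshed
    // 2. the cached triples retrieved for the resource
    org.openrdf.model.Model triples;  // an in-memory RDF model
}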
The following sections describe the LDCache backends that are currently available or will be available in the near future.
The KiWi backend for LDCache relies on an underlying KiWi triple store to store caching information. It uses the KiWi store’s JDBC connection to add additional tables and information to the database. The two kinds of caching data are stored as follows: the caching metadata is kept in additional database tables created alongside the KiWi store’s own tables, while the cached triples are stored directly in the triple store under a dedicated cache context (named graph).
The KiWi backend is the backend you should choose when you are using a KiWi triple store, unless you want to keep your local data and your cached data completely separate. You can include the KiWi backend in your project using the following Maven artifact:
<dependency>
  <groupId>org.apache.marmotta</groupId>
  <artifactId>ldcache-backend-kiwi</artifactId>
  <version>3.3.0</version>
</dependency>
Setting up an LDCache instance with a KiWi backend requires the following configuration steps:
// create the KiWi store that will also hold the cached data
KiWiStore store = new KiWiStore("test", jdbcUrl, jdbcUser, jdbcPass, dialect,
                                "http://localhost/context/default",
                                "http://localhost/context/inferred");
Repository repository = new SailRepository(store);
repository.initialize();

// attach the LDCache KiWi backend to the (already initialized) store
LDCachingBackend backend = new LDCachingKiWiBackend(store, CACHE_CONTEXT);
backend.initialize();

// create the LDCache instance with a default cache configuration
LDCache ldcache = new LDCache(new CacheConfiguration(), backend);
Note that the underlying KiWi repository must be initialized before using it in the LDCachingKiWiBackend, because otherwise the necessary database tables might not be present. The argument CACHE_CONTEXT is the URI of the resource to use as context (named graph) for storing and accessing cached triples.
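For illustration purposes, and assuming that the cached triples are visible through the plain repository connection, the content of the cache context can be inspected with the standard Sesame Repository API (RepositoryConnection, RepositoryResult, Statement and URI from org.openrdf). The snippet continues the setup code above:

// Illustration only: list the triples LDCache has stored under the cache context.
// CACHE_CONTEXT is the same URI that was passed to the LDCachingKiWiBackend above.
RepositoryConnection con = repository.getConnection();
try {
    URI cacheContext = con.getValueFactory().createURI(CACHE_CONTEXT);
    RepositoryResult<Statement> cached =
            con.getStatements(null, null, null, true, cacheContext);
    try {
        while (cached.hasNext()) {
            System.out.println(cached.next());
        }
    } finally {
        cached.close();
    }
} finally {
    con.close();
}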
The EHCache backend for LDCache relies on an EHCache caching infrastructure for storing caching information. When using the Open Source version of EHCache, this usually means in-memory caching only. However, for enterprise systems it is also possible to build a high-performance caching cluster that can be used by the LDCache backend (please refer to the EHCache documentation on how to set this up).
In the EHCache backend, both the caching metadata and the cached triples for a resource are stored in the same cache entry. To allow EHCache to serialize cache entries for distribution over the cluster or for swapping to disk, the triples are represented in a serializable in-memory representation.
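As a sketch of the general idea (not necessarily the representation the backend actually uses), a set of triples can be held in a serializable in-memory model using the standard Sesame API, e.g. a LinkedHashModel, which implements java.io.Serializable and can therefore be swapped to disk or shipped across a cluster:

// Sketch only: a serializable in-memory representation of a resource's triples.
import org.openrdf.model.Model;
import org.openrdf.model.ValueFactory;
import org.openrdf.model.impl.LinkedHashModel;
import org.openrdf.model.impl.ValueFactoryImpl;
import org.openrdf.model.vocabulary.RDFS;

public class SerializableTriplesExample {
    public static void main(String[] args) {
        ValueFactory vf = ValueFactoryImpl.getInstance();
        Model triples = new LinkedHashModel();
        triples.add(vf.createURI("http://example.org/resource/1"),
                    RDFS.LABEL,
                    vf.createLiteral("example resource"));
        // the whole model could now be written with a java.io.ObjectOutputStream
    }
}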
Note: the EHCache backend is currently still under development. We therefore don’t publish Maven artifacts for it yet.
The MapDB backend uses the embedded NoSQL database MapDB (formerly known as JDBM) for storing cache information in a persistent disk-based hash map.
In the MapDB backend, both the caching metadata and the cached triples for a resource are stored in the same cache entry. To allow MapDB to serialize cache entries when persisting the hash map to disk, the triples are represented in a serializable in-memory representation.
Note: the MapDB backend is currently still under development. We therefore don’t publish Maven artifacts for it yet.