Cacheonix in Public Cloud Environments without Multicast

Posted: July 2nd, 2016 | Author: | Filed under: Distributed Caching

Multicast is an amazing protocol when used wisely because it provides a medium for delivering messages to a group with minimal network overhead. On an ideal network, a single packet may be enough to deliver a message to thousands of hosts.

Multicast has its disadvantages. First, packet delivery is not guaranteed; Cacheonix addresses this by building a reliable multicast protocol on top of the standard, unreliable one. Second, multicast has a potential for abuse because it's easy to write a program that floods the network with UDP packets and even makes it impossible to communicate over the network. Unlike TCP/IP, nothing in standard multicast prevents this: TCP/IP has built-in collaborative features such as backing off in case of errors, while multicast, being a lower-level protocol, has none. It is this second disadvantage that most likely forces some network administrators, and all public cloud providers such as Amazon with its AWS and Google with its GAE, to disable multicast support on their networks.

The good news is that Cacheonix takes into account the lack of multicast support in public clouds and in some LAN environments. To address this issue we developed a multicast-over-TCP/IP protocol. What's cool about it is that it lets you designate some hosts as "known addresses": whenever a new host joins a Cacheonix cluster, or members of the cluster need to send a message to the group, the protocol uses those known-address hosts as broadcasters for the cluster. To ensure resiliency of the infrastructure it makes sense to designate more than one known-address host; that way, if a broadcaster fails, the other broadcasters can continue to support normal cluster operation. You can learn about configuring multicast-over-TCP/IP here.
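For illustration only, a configuration along these lines would designate two known-address hosts. The element and attribute names below are a hypothetical sketch, not the authoritative schema; see the Cacheonix documentation linked above for the exact format:

```xml
<?xml version="1.0"?>
<cacheonix>
   <server>
      <cluster>
         <!-- Hypothetical sketch: list more than one broadcaster
              so the cluster survives the loss of a single host -->
         <knownAddresses>
            <socketAddress host="10.0.0.10" port="8879"/>
            <socketAddress host="10.0.0.11" port="8879"/>
         </knownAddresses>
      </cluster>
   </server>
</cacheonix>
```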

See also:

  • Multicast Standard

Using Distributed ConcurrentHashMap

Posted: March 21st, 2016 | Author: | Filed under: Distributed Caching

Before, if you wanted to make atomic modifications to the distributed hash table provided by Cacheonix, you had to use distributed locks, as in the example below:


      Cacheonix cacheonix = Cacheonix.getInstance();
      ReadWriteLock readWriteLock = cacheonix.getCluster().getReadWriteLock();
      Map map = cacheonix.getCache(mapName);

      // Hold a cluster-wide write lock to make the check-then-put atomic
      Lock lock = readWriteLock.writeLock();
      lock.lock();
      try {
         if (!map.containsKey(key)) {
            // Key is absent: store the value and return the previous mapping (null)
            return map.put(key, value);
         } else {
            // Key is present: return the existing value
            return map.get(key);
         }
      } finally {
         lock.unlock();
      }

That approach worked but it required multiple network round trips to get things done. The latest Cacheonix release adds a distributed implementation of ConcurrentHashMap. Now you can modify distributed hash maps atomically, with minimal latency, by using the new JDK-compatible distributed ConcurrentMap API:


     Cacheonix cacheonix = Cacheonix.getInstance();
     java.util.concurrent.ConcurrentMap concurrentMap = cacheonix.getCache(mapName);
     return concurrentMap.putIfAbsent(key, value);


The distributed ConcurrentMap in Cacheonix supports all four methods of the ConcurrentMap interface:

  • V putIfAbsent(K key, V value);
  • boolean remove(Object key, Object value);
  • boolean replace(K key, V oldValue, V newValue);
  • V replace(K key, V value);
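Because the API is JDK-compatible, the contracts of these four methods are the same as in java.util.concurrent.ConcurrentHashMap. The standalone sketch below illustrates them using the JDK map, so it runs without a Cacheonix cluster:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapContracts {

   public static void main(String[] args) {
      ConcurrentMap<String, String> map = new ConcurrentHashMap<>();

      // putIfAbsent() returns null when the key was absent and the value was stored...
      System.out.println(map.putIfAbsent("invoice:1", "draft"));      // null
      // ...and returns the existing value when the key was already present
      System.out.println(map.putIfAbsent("invoice:1", "paid"));       // draft

      // replace(key, oldValue, newValue) succeeds only if the current value matches
      System.out.println(map.replace("invoice:1", "paid", "void"));   // false
      System.out.println(map.replace("invoice:1", "draft", "paid"));  // true

      // replace(key, value) replaces only if the key is mapped, returning the old value
      System.out.println(map.replace("invoice:1", "archived"));       // paid

      // remove(key, value) removes only if the current value matches
      System.out.println(map.remove("invoice:1", "paid"));            // false
      System.out.println(map.remove("invoice:1", "archived"));        // true
   }
}
```

With a Cacheonix-backed map, each of these calls completes in a single network round trip instead of the lock-acquire/check/put/unlock sequence shown earlier.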

Hope this helps.

Regards,

Slava Imeshev
Cacheonix Open Source Java Cache


Cacheonix as Distributed DataNucleus L2 Cache

Posted: September 18th, 2011 | Author: | Filed under: Distributed Caching

DataNucleus can now use Cacheonix as a distributed L2 Cache.

DataNucleus Access Platform is an open source, standards-based JPA implementation that provides persistence and query services for a wide range of datastores. Andy Jefferson, the project lead at DataNucleus.org, let me know today that DataNucleus Access Platform has added support for Cacheonix. Now DataNucleus applications can scale out with ease by using Cacheonix as a distributed L2 cache.
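Enabling an L2 cache plugin in DataNucleus is done through persistence properties. The property name datanucleus.cache.level2.type is standard DataNucleus; the value "cacheonix" below is an assumption about how the new plugin is registered, so check the DataNucleus documentation for the exact name:

```xml
<persistence-unit name="MyUnit">
   <properties>
      <!-- Select Cacheonix as the L2 cache (plugin value assumed) -->
      <property name="datanucleus.cache.level2.type" value="cacheonix"/>
   </properties>
</persistence-unit>
```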

It’s great to see growing adoption of Cacheonix by persistence frameworks and we are looking forward to further collaboration with DataNucleus.


Regards,

Slava Imeshev


Per-select Caching for MyBatis

Posted: August 21st, 2011 | Author: | Filed under: Distributed Caching

Good news: we've just added support for per-select caching for MyBatis to our open source Java cache Cacheonix, which addresses major problems with default MyBatis caching.

Problems With Default MyBatis Caching

There are a few problems with the way caching works in MyBatis:

  • MyBatis keeps the results of all selects in a single, namespace-wide cache. This means that any invalidation caused by inserts or updates flushes the cached results of all selects, including those that are not actually affected by the changes.
  • It is impossible to ignore invalidation completely and to have pure time-only expiration for some selects.
  • It is impossible to have different cache configurations for particular selects.

Benefits of Cacheonix Adapter for MyBatis

Cacheonix addresses these concerns with the following features:

  1. Per-select MyBatis caches
  2. Ability to turn off invalidation
  3. Ability to turn off namespace cache
  4. Cache templates for select caches

For more information and examples, see the Cacheonix documentation: Configuring MyBatis Cache Adapter.
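As a rough illustration, a mapper using the adapter might look like the sketch below. The `<cache type="..."/>` element is standard MyBatis; the Cacheonix class name is a hypothetical placeholder, and the real name, along with per-select and template configuration, is in the documentation referenced above:

```xml
<mapper namespace="com.example.InvoiceMapper">

   <!-- Namespace cache backed by Cacheonix (class name is a placeholder) -->
   <cache type="org.cacheonix.plugin.mybatis.MyBatisCache"/>

   <!-- With per-select caching, results of this select can get their own cache,
        configured, for example, with time-only expiration and no invalidation -->
   <select id="selectInvoice" resultType="Invoice">
      SELECT * FROM INVOICE WHERE ID = #{id}
   </select>
</mapper>
```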

Enjoy!

Regards,

Slava Imeshev


Testing Keys and Values for Distributed Java Caching

Posted: July 23rd, 2011 | Author: | Filed under: Distributed Caching

In distributed caching, cache keys and cached values routinely travel across the network. That's why it is critical to write proper unit tests for keys and values in order to avoid unpleasant production surprises. This post shows how to test keys and values for distributed caching.

Distributed caching imposes an additional requirement on unit tests for cache keys and cached values. In particular, you must ensure that the object received at the other end is the object that was sent. Here is how to test it:

First, serialize the object to a byte array and then deserialize the byte array back into an object:

      // Serialize the object
      ByteArrayOutputStream baos = new ByteArrayOutputStream(100);
      ObjectOutputStream oos = new ObjectOutputStream(baos);
      oos.writeObject(originalInvoiceKey);
      oos.close();

      // Deserialize the byte array back into an object
      final byte[] serializedInvoiceKey = baos.toByteArray();
      ByteArrayInputStream bais = new ByteArrayInputStream(serializedInvoiceKey);
      ObjectInputStream ois = new ObjectInputStream(bais);
      final InvoiceKey deserializedInvoiceKey = (InvoiceKey) ois.readObject();
      ois.close();

Second, assert that the deserialized object and the original object are equal:

      assertEquals(originalInvoiceKey, deserializedInvoiceKey);
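Putting the two steps together, a complete round-trip check might look like the self-contained sketch below. InvoiceKey here is a minimal stand-in with value-based equals() and hashCode(), not the class from a real project, and plain assertions replace the JUnit assertEquals():

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationRoundTripTest {

   // Minimal stand-in for a cache key; a real key would live in your project
   static final class InvoiceKey implements Serializable {

      private static final long serialVersionUID = 1L;

      private final int invoiceID;

      InvoiceKey(int invoiceID) {
         this.invoiceID = invoiceID;
      }

      @Override
      public boolean equals(Object o) {
         return o instanceof InvoiceKey && ((InvoiceKey) o).invoiceID == invoiceID;
      }

      @Override
      public int hashCode() {
         return invoiceID;
      }
   }

   public static void main(String[] args) throws Exception {
      InvoiceKey originalInvoiceKey = new InvoiceKey(42);

      // Serialize the key to a byte array
      ByteArrayOutputStream baos = new ByteArrayOutputStream(100);
      ObjectOutputStream oos = new ObjectOutputStream(baos);
      oos.writeObject(originalInvoiceKey);
      oos.close();

      // Deserialize the byte array back into an object
      ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
      ObjectInputStream ois = new ObjectInputStream(bais);
      InvoiceKey deserializedInvoiceKey = (InvoiceKey) ois.readObject();
      ois.close();

      // The key that came back must equal the key that was sent
      if (!originalInvoiceKey.equals(deserializedInvoiceKey)) {
         throw new AssertionError("Round trip broke key equality");
      }
      System.out.println("Round trip OK");
   }
}
```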

While you have probably already implemented equals() and hashCode() for the key, cached values often need these methods added. For value objects, equals() and hashCode() must satisfy a requirement of 'literal equality'. In other words, to test that a value object can be serialized and deserialized, its equals() and hashCode() should include all non-transient fields:

public final class Invoice implements Externalizable {

   private int invoiceID;
   private Date invoiceDate = null;
   private int invoiceNumber;
   private int customerID;

...

   public boolean equals(final Object o) {
      if (this == o) return true;
      if (o == null || getClass() != o.getClass()) return false;
      final Invoice invoice = (Invoice) o;
      if (invoiceID != invoice.invoiceID) return false;
      if (invoiceNumber != invoice.invoiceNumber) return false;
      if (customerID != invoice.customerID) return false;
      if (invoiceDate != null ? !invoiceDate.equals(invoice.invoiceDate) : invoice.invoiceDate != null) return false;
      return true;
   }

   public int hashCode() {
      int result = invoiceID;
      result = 31 * result + (invoiceDate != null ? invoiceDate.hashCode() : 0);
      result = 31 * result + invoiceNumber;
      result = 31 * result + customerID;
      return result;
   }
}

If adding equals() and hashCode() to the cached value is impossible, use explicit assertEquals() on the object's fields:

      assertEquals(originalInvoice.getInvoiceID(), deserializedInvoice.getInvoiceID());

Regards,

Slava Imeshev
