
Zookeeper, Netflix Curator and ACLs

If you have one or more "multi-tenant" Zookeeper clusters, you may want to protect znodes against unwanted modifications.
Here is a very simple and short introduction to the ACL and custom authentication features.
This post is not intended to give you best practices about Zookeeper security; its only goal is to give you a complete example of a custom authentication handler.

The complete source code, with a JUnit test, is available here:
https://github.com/barkbay/zookeeper-acl-sample/

Use case

Let's say that your Zookeeper cluster is used by several users. In order to restrict user actions, you have decided that each user must prefix all paths with the first letter of their name:
  • User foo is only allowed to create, read, delete and update znodes under the /f znode.
  • User bar is only allowed to create, read, delete and update znodes under the /b znode.

Get client authentication data on the server side

Zookeeper client authentication can easily be customized. All you have to do is create a class that implements the org.apache.zookeeper.server.auth.AuthenticationProvider interface:
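Here is a minimal sketch of such a provider. The class name, the "sample" scheme and the way the user name is extracted from the authentication data are only assumptions made for this post; the real implementation is in the repository linked above.

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.ServerCnxn;
import org.apache.zookeeper.server.auth.AuthenticationProvider;

public class SampleAuthenticationProvider implements AuthenticationProvider {

    // Scheme name used by clients when they send their authentication data
    @Override
    public String getScheme() {
        return "sample";
    }

    // Called on the server side when a client sends authentication data
    @Override
    public KeeperException.Code handleAuthentication(ServerCnxn cnxn, byte[] authData) {
        if (authData == null || authData.length == 0) {
            return KeeperException.Code.AUTHFAILED;
        }
        String user = new String(authData);          // e.g. "foo" or "bar"
        // Attach the authenticated id to the connection; it will be checked
        // against the ACLs of every znode the client touches.
        cnxn.addAuthInfo(new Id(getScheme(), user));
        return KeeperException.Code.OK;
    }

    // Called when a znode ACL uses this scheme: "id" is the authenticated id,
    // "aclExpr" is the expression stored in the ACL.
    @Override
    public boolean matches(String id, String aclExpr) {
        return id.equals(aclExpr);
    }

    // This scheme really authenticates clients
    @Override
    public boolean isAuthenticated() {
        return true;
    }

    // Called when an ACL with this scheme is set on a znode, to validate the id
    @Override
    public boolean isValid(String id) {
        return id != null && !id.isEmpty();
    }
}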
Once that is done, you must tell the Zookeeper server to use it. This is done by setting the Java system property zookeeper.authProvider.x (where x is an integer).
You could do this in a startup script, but here we use a JUnit test:
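For example (the provider class name is the illustrative one from the sketch above):

import org.junit.BeforeClass;

public class CustomAuthenticationTest {

    @BeforeClass
    public static void registerAuthenticationProvider() {
        // Equivalent to -Dzookeeper.authProvider.1=... in a startup script.
        // The property must be set before the (testing) Zookeeper server starts.
        System.setProperty("zookeeper.authProvider.1",
                SampleAuthenticationProvider.class.getName());
    }
}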

Let the Zookeeper client automatically set ACLs with Curator

Curator can automatically set the right ACLs on znodes. This is done by providing an ACLProvider implementation:
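A minimal sketch, assuming the Netflix Curator packages (com.netflix.curator.*; with Apache Curator the interface lives in org.apache.curator.framework.api) and the "sample" scheme used above:

import java.util.Collections;
import java.util.List;

import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

import com.netflix.curator.framework.api.ACLProvider;

public class FirstLetterAclProvider implements ACLProvider {

    private final String user;   // e.g. "foo"

    public FirstLetterAclProvider(String user) {
        this.user = user;
    }

    // ACL used when getAclForPath() does not return anything for a path
    @Override
    public List<ACL> getDefaultAcl() {
        return ZooDefs.Ids.OPEN_ACL_UNSAFE;
    }

    // Every znode created by this client is only accessible by the
    // authenticated user ("sample" scheme).
    @Override
    public List<ACL> getAclForPath(String path) {
        return Collections.singletonList(
                new ACL(ZooDefs.Perms.ALL, new Id("sample", user)));
    }
}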
Register your ACL provider using the Curator client builder:
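Something like this (the connect string and retry policy are just placeholders):

import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.CuratorFrameworkFactory;
import com.netflix.curator.retry.RetryOneTime;

public class ClientFactory {

    public static CuratorFramework newClient(String user) {
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("localhost:2181")               // placeholder
                .retryPolicy(new RetryOneTime(1000))
                .aclProvider(new FirstLetterAclProvider(user)) // sets ACLs on created znodes
                .build();
        client.start();
        return client;
    }
}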
Add client authentication data using the same builder (a complete JUnit test running against a Zookeeper testing server with the Curator framework is available in the repository linked above):
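With the authentication data added, user foo can only work under /f, and every znode it creates gets the restrictive ACL from the provider. Again, the "sample" scheme and the helper classes are the illustrative ones from above:

import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.CuratorFrameworkFactory;
import com.netflix.curator.retry.RetryOneTime;

public class FooClientExample {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("localhost:2181")                 // placeholder
                .retryPolicy(new RetryOneTime(1000))
                .aclProvider(new FirstLetterAclProvider("foo"))
                .authorization("sample", "foo".getBytes())       // handled by the custom provider
                .build();
        client.start();

        // foo works under /f; the new znode is stamped with the ACL
        // returned by the provider.
        client.create().creatingParentsIfNeeded().forPath("/f/app1", new byte[0]);

        client.close();
    }
}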
