Recently we encountered a problem with duplicate time UUIDs while loading a lot of data into Cassandra. Duplicates are not normally a problem with UUIDs, but occasionally you need to generate time UUIDs from a low-resolution clock and/or load a lot of data really fast. In these situations you can overwhelm the ability of the clock to produce unique timestamps, and the resulting UUIDs collide.
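The failure mode is easy to reproduce with a sketch. Below is a hypothetical naive version-1-style generator that derives its timestamp from a millisecond-resolution clock and keeps the clock sequence and node fixed; the constant, function name, and node id are illustrative assumptions, not Cassandra's actual code.

```python
import time
import uuid

# 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
UUID_EPOCH_OFFSET = 0x01B21DD213814000

def naive_time_uuid(ms_clock: int) -> uuid.UUID:
    # A millisecond clock spans 10,000 distinct 100-ns values per tick,
    # but this naive generator never fills in the low bits or bumps the
    # clock sequence, so two calls in the same tick produce the same UUID.
    timestamp = ms_clock * 10_000 + UUID_EPOCH_OFFSET
    time_low = timestamp & 0xFFFFFFFF
    time_mid = (timestamp >> 32) & 0xFFFF
    time_hi_version = ((timestamp >> 48) & 0x0FFF) | 0x1000  # version 1
    clock_seq_hi_variant = 0x80  # RFC 4122 variant, fixed clock sequence
    clock_seq_low = 0x00
    node = 0x123456789ABC  # fixed node id (hypothetical)
    return uuid.UUID(fields=(time_low, time_mid, time_hi_version,
                             clock_seq_hi_variant, clock_seq_low, node))

# Every UUID generated against the same millisecond tick is identical.
now_ms = int(time.time() * 1000)
uuids = [naive_time_uuid(now_ms) for _ in range(1000)]
```

Real generators avoid this by incrementing the sub-millisecond bits or the clock sequence when the timestamp repeats; the sketch above omits exactly that step to show why a low-resolution clock plus high write rates produces duplicates.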
The data modeling training at #CassandraSummit validated most of our choices. Not sure if that makes me happy or sad.
My first commit to Cassandra. Not a big change, but it is important if you are using Spark.