While taking a quick look at the newly released docs for Falcon, the new storage engine for the MySQL database from MySQL AB (and this is important, since both InnoDB and the now-ditched BDB storage engines for MySQL belong to Oracle), I noticed two "strange" things, at least to someone thinking of large databases running on dedicated hardware.
First of all, data storage. The chapter "Falcon data file and data structures" reads:
A single Falcon database file stores all record data, indexes, database structure and other information. ...
This doesn't sound like a good idea to me, as it's better to keep indexes and data separated on different disks. Something like Oracle's implementation of tablespaces, with the ability to assign indexes and data to different tablespaces hosted on different disks, is important: in some cases up to 40% of the size of my data warehouse is made up of indexes.
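For comparison, here is a minimal sketch of the Oracle-style separation I mean (tablespace, file, table, and index names are hypothetical): data and indexes are placed in tablespaces whose datafiles sit on different physical disks.

```sql
-- Hypothetical Oracle-style DDL: table data and index data go to
-- tablespaces backed by datafiles on different physical disks.
CREATE TABLESPACE data_ts
  DATAFILE '/disk1/oradata/data01.dbf' SIZE 10G;

CREATE TABLESPACE idx_ts
  DATAFILE '/disk2/oradata/idx01.dbf' SIZE 4G;

CREATE TABLE sales (
  sale_id   NUMBER PRIMARY KEY,
  sale_date DATE,
  amount    NUMBER(12,2)
) TABLESPACE data_ts;

-- Index maintenance I/O now lands on a different spindle than
-- full scans of the table itself.
CREATE INDEX sales_date_idx ON sales (sale_date) TABLESPACE idx_ts;
```

With a single Falcon database file there is no equivalent knob: record data and index pages share the same file, and therefore the same disk.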
Then, data compression. The chapter "Data Compression" reads:
Data stored in the Falcon tablespace is compressed on disk, but is stored in an uncompressed format in memory. Compression occurs automatically when data is committed to disk.
Won't this compress/uncompress loop badly affect bulk load speed and concurrency (forced writes ...)?
I hope the engine will be made available as binaries soon, and that proper benchmarks will clear up any doubts.