Saturday, January 27, 2007

Powerline, the good, the bad and the ugly ...

After much hesitation I decided to jump on the Powerline (a.k.a. Homeplug) bandwagon; that is, Ethernet over the power lines!
At first it sounds great: no cabling, no pesky radiation floating around as with WiFi, and so on ... an easy way of sharing an internet connection between my two PCs at home.
I had tested WiFi 802.11b before and wasn't impressed with its performance, so I reached for my wallet and got a pair of Powerline adaptors, plugged them in and ... voilà, instant networking [the good].
This looked extremely good, easier than WiFi, but after some use I noticed the connection was quite slow. So, unfortunately, I took a peek at the manual and discovered that the maximum theoretical speed (including channel control data) is 85 Mbit/s, while the effective maximum network speed can't exceed 25 Mbit/s :-( [the bad (1)], and that you need to install extra software/drivers to add security to your network, much like WEP for WiFi, which adds complication and is likely to subtract network speed [the bad (2)].
Network speed was unimpressive, so I ran a quick test with JGaa's NetCPS; here are the results, as seen from both sides:


C:\>netcps -s

NetCPS 1.0 - Entering server mode. Press ^C to quit

Waiting for new connection...

Client connected from 192.168.1.9

---> CPS 1519616.00 KPS: 1484.00 MPS: 1.45

Avrg CPS 1515546.00 KPS: 1480.03 MPS: 1.45

Peek CPS 1542144.00 KPS: 1506.00 MPS: 1.47

Client disconnected. 104857600 Kb transferred in 69.19 seconds.



C:\>netcps 192.168.1.5

NetCPS 1.0 - Entering client mode. Press ^C to quit

Connecting to 192.168.1.5 port 4455... Connected!

---> CPS 1518097.88 KPS: 1482.52 MPS: 1.45

Avrg CPS 1515524.13 KPS: 1480.00 MPS: 1.45

Peek CPS 1553373.25 KPS: 1516.97 MPS: 1.48

Done. 104857600 Kb transferred in 69.19 seconds.


I'd not call that very fast ...
You've read about [the good] and [the bad], and here is [the ugly]:
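To put the NetCPS figures in perspective: the tool reports characters (bytes) per second, so the "Avrg CPS" value above converts to line speed with a little arithmetic. A quick sketch, assuming those CPS figures are plain bytes per second:

```python
# Convert NetCPS "Avrg CPS" (bytes per second) to megabits per second.
avg_cps = 1515546.00  # bytes/s, from the server-side run above

mbit_per_s = avg_cps * 8 / 1_000_000
print(f"{mbit_per_s:.1f} Mbit/s")  # prints: 12.1 Mbit/s
```

So the link delivered roughly 12 Mbit/s in practice, well short of even the 25 Mbit/s effective maximum the manual admits to.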



(when I'm able to retrieve a photo from my mobile) OK, I've managed it:

The plug is very light but quite big, or at least bigger than I expected.
All this could read like a bashing post, so I'm adding the results of the same test over 802.11b:

C:\>netcps -s

NetCPS 1.0 - Entering server mode. Press ^C to quit

Waiting for new connection...

Client connected from 192.168.1.10

---> CPS 495616.00 KPS: 484.00 MPS: 0.47

Avrg CPS 487532.91 KPS: 476.11 MPS: 0.46

Peek CPS 526336.00 KPS: 514.00 MPS: 0.50

Client disconnected. 104857600 Kb transferred in 215.08 seconds.

As you can see, Homeplug is much faster than 802.11b; unfortunately I don't have a more recent 802.11x device to compare against :-(
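The two runs moved the same payload, so the elapsed times alone tell the story. A quick comparison, using the figures from the transcripts above:

```python
# Same payload over both links; compare elapsed times from the two runs above.
payload_bytes = 104857600  # transferred in each test
powerline_s = 69.19        # Powerline run
wifi_b_s = 215.08          # 802.11b run

print(f"speedup: {wifi_b_s / powerline_s:.1f}x")  # prints: speedup: 3.1x
avg_mbit = payload_bytes / wifi_b_s * 8 / 1_000_000
print(f"802.11b average: {avg_mbit:.1f} Mbit/s")  # prints: 802.11b average: 3.9 Mbit/s
```

So Powerline came out about three times faster than 802.11b on this test, for all its other shortcomings.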

Monday, January 22, 2007

Signs of MySQL usage at BBC.CO.UK

A very interesting document from the BBC's Standards and Guidelines (Database Design and Development Standards v1.00) mentions only two databases; guess which? Oracle and MySQL!!!
By the way, the document is also a source of good design tips, like:

Normally, use only alphanumeric characters plus the underscore (_) character for table names.

MySQL
Table names are case sensitive under UNIX (or Linux). To avoid confusion, define/declare table names in upper case in any database creation SQL scripts.

No SQL language reserved words should be used as column names

Identify primary, unique, and foreign keys (to enforce important or critical data integrity) as part of the initial design process

To guard against slow queries degrading performance on the shared database infrastructure, the most commonly performed SQL queries should be analysed with an explain plan. This should use sufficient data to generate representative query execution plans

Indexes should be used where appropriate, use explain/explain_plan to demonstrate

All database APIs must define error handling, and success/failure notification

Designs should be as RDBMS agnostic as possible: for example, minimize use of sets and abstract data types, unless these are key to the design. It is accepted that there will inevitably be some RDBMS-specific data typing

MySQL offers a number of storage types. The only ones that should be used are:
MyISAM – for general (especially read only) data.
InnoDB – for transactional data, or data with important referential integrity.


Any database data loading processes MUST create a summary log, an error file (in the event of any errors), and a bad file (for records that failed to load).

This reminds me of a feature request I made for a bad file option for LOAD DATA (bug 11480).

Database users should only be granted sufficient privileges to enable them to perform the tasks that they need to

... and many more.
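Several of these rules are easy to enforce mechanically. As an illustration (my own sketch, not something from the BBC document), here is a small Python check for the table-naming and reserved-word rules; the reserved-word set is a tiny sample, a real check would load the full list for the target RDBMS:

```python
import re

# Hypothetical helper: validates a table name against the "alphanumeric plus
# underscore" rule and rejects SQL reserved words.
RESERVED_WORDS = {"SELECT", "TABLE", "ORDER", "GROUP", "INDEX"}  # sample only
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def valid_table_name(name):
    """True if the name uses only [A-Za-z0-9_], starts with a letter,
    and is not a reserved word."""
    return bool(NAME_RE.match(name)) and name.upper() not in RESERVED_WORDS

print(valid_table_name("USER_ACCOUNTS"))  # True
print(valid_table_name("order"))          # False: reserved word
print(valid_table_name("2007_logs"))      # False: starts with a digit
```

Running something like this against your DDL scripts in a pre-commit step is a cheap way to keep the naming standard honest.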

I think that such a document is of general interest and very useful, especially to newbies.

Sunday, January 21, 2007

Playing with transactions on MySQL

Here is my attempt (a failed one, judging by the video quality) at showing a small example of transaction usage on MySQL 5.
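For anyone who can't make out the video: the commit/rollback pattern it demonstrates can be sketched in a few lines. Here it is in Python, with the bundled sqlite3 module standing in for MySQL (an assumption for portability; on MySQL 5 you'd use START TRANSACTION / COMMIT / ROLLBACK on an InnoDB table, but the flow is the same):

```python
import sqlite3

# In-memory database standing in for a MySQL 5 / InnoDB schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    # Move 30 units atomically: both updates succeed or neither does.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
    conn.commit()
except sqlite3.Error:
    conn.rollback()  # any failure undoes the partial transfer

print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# prints: [(70, 30 split as (70,), (30,)] -> [(70,), (30,)]
```

The point of the demo is the try/commit/rollback shape: if either UPDATE fails, the rollback leaves the balances exactly as they were.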

Thursday, January 04, 2007

Falcon, the new MySQL Storage Engine

While taking a quick look at the newly released docs for Falcon, the new storage engine for the MySQL database from MySQL AB itself (and this is important, as the InnoDB and now-ditched BDB storage engines for MySQL actually belong to Oracle), I noticed two "strange" things, at least to someone thinking of large databases running on dedicated hardware.
First of all, data storage: chapter 14.6.5.1, "Falcon data file and data structures", reads:

A single Falcon database file stores all record data, indexes, database structure and other information. ...

This doesn't sound like a good idea to me, as it's better to keep indexes and data separated on different disks; something like Oracle's implementation of tablespaces, with the ability to assign indexes and data to different tablespaces hosted on different disks, is important (I've noticed that in some cases indexes make up as much as 40% of the size of my data warehouse).

Then data compression: chapter 14.6.5.6, "Data Compression", reads:

Data stored in the Falcon tablespace is compressed on disk, but is stored in an uncompressed format in memory. Compression occurs automatically when data is committed to disk.

Won't this compress/uncompress loop badly affect bulk-load speed and concurrency (forced writes ...)?
I hope the engine will be made available in binaries soon, so that proper benchmarks can clear up any doubts.
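Lacking Falcon binaries to benchmark, here's a rough feel for what a compress-on-commit step costs per page, using Python's zlib as a stand-in for whatever codec Falcon actually uses (an assumption on my part; the docs don't say):

```python
import time
import zlib

# A ~16 KiB "page" of fairly repetitive row data; zlib stands in for
# Falcon's (undocumented) on-disk compression codec.
page = b"some fairly repetitive row data " * 512

t0 = time.perf_counter()
for _ in range(1000):
    compressed = zlib.compress(page)
elapsed = time.perf_counter() - t0

ratio = len(compressed) / len(page)
print(f"compressed to {ratio:.0%} of original size")
print(f"{elapsed * 1000:.0f} ms to compress 1000 pages")
```

On repetitive data the ratio is excellent, but during a bulk load every committed page pays that CPU cost on the write path, which is exactly the concern above.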