
Subject: [Share] MySQL Cluster: Personal Implementation Notes

Author: adj    Posted: 2007-11-30 13:23     Subject: MySQL Cluster: Personal Implementation Notes

I am going to talk about three servers:

manage.domain.com   192.168.20.10
ndb1.domain.com     192.168.20.20
ndb2.domain.com     192.168.20.30

STAGE 1: Install MySQL on the first two servers

Complete the following steps on both ndb1 and ndb2:

groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget http://dev.mysql.com/get/Downloa ... //mysql.isu.edu.tw/
tar -zxvf mysql-max-5.0.19-linux-i686-glibc23.tar.gz
rm mysql-max-5.0.19-linux-i686-glibc23.tar.gz
ln -s mysql-max-5.0.19-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data
chgrp -R mysql .
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server

Do not start mysql yet.

PS: At this point you can check whether /usr/local/mysql/data/mysql contains the system tables... if not, rerun the install step or use /usr/local/mysql/configure to help set it up.

STAGE 2: Install and configure the management server

You need the following files from the bin/ directory of the mysql tarball: ndb_mgm and ndb_mgmd. Download the whole mysql-max tarball and extract them from the bin/ directory.

cd /usr/local/
wget http://dev.mysql.com/get/Downloa ... //mysql.isu.edu.tw/
tar -zxvf mysql-max-5.0.19-linux-i686-glibc23.tar.gz
rm mysql-max-5.0.19-linux-i686-glibc23.tar.gz
ln -s mysql-max-5.0.19-linux-i686-glibc23 mysql
cd mysql
mv bin/ndb_mgm .
mv bin/ndb_mgmd .
chmod +x ndb_mg*
mv ndb_mg* /usr/bin/

You now need to set up the config file for this management node:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini
Code:
# Options affecting ndbd processes on all data nodes:
[NDBD DEFAULT]
NoOfReplicas=2    # Number of replicas
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
                  # For DataMemory and IndexMemory, we have used the
                  # default values. Since the "world" database takes up
                  # only about 500KB, this should be more than enough for
                  # this example Cluster setup.

# TCP/IP options:
[TCP DEFAULT]
portnumber=2202   # This is the default; however, you can use any
                  # port that is free for all the hosts in the cluster
                  # Note: It is recommended beginning with MySQL 5.0 that
                  # you do not specify the portnumber at all and simply allow
                  # the default value to be used instead

# Management process options:
[NDB_MGMD]
hostname=192.168.20.10          # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster  # Directory for MGM node log files

# Options for data node "A":
[NDBD]                          # (one [NDBD] section per data node)
hostname=192.168.20.20          # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

# Options for data node "B":
[NDBD]
hostname=192.168.20.30          # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

# SQL node options:
[MYSQLD]
hostname=192.168.20.10          # Hostname or IP address
                                # (additional mysqld connections can be
                                # specified for this node for various
                                # purposes such as running ndb_restore)
Now, start the managment server:
Code:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
This is the MySQL management server, not the management console. You should therefore not expect any output (we will start the console later).

STAGE 3: Configure the storage/SQL servers and start MySQL

On each of the two storage/SQL servers (192.168.20.20 and 192.168.20.30) enter the following (changing the bits as appropriate):

vi /etc/my.cnf

Press i to enter insert mode and add this on both servers (changing the IP address to the IP of the management server you set up in Stage 2):
Code:
[mysqld]
ndbcluster
# the IP of the MANAGEMENT (THIRD) SERVER
ndb-connectstring=192.168.20.10

[mysql_cluster]
# the IP of the MANAGEMENT (THIRD) SERVER
ndb-connectstring=192.168.20.10
Now, we make the data directory and start the storage engine:
Code:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
/usr/local/mysql/bin/ndbd --initial
/etc/rc.d/init.d/mysql.server start
If you have done one server, now go back to the start of Stage 3 and repeat exactly the same procedure on the second server. NOTE: you should ONLY use --initial if you are either starting from scratch or have changed the config.ini file on the management node.

STAGE 4: Check that it works

You can now return to the management server (manage) and enter the management console. Startup order:
Code:
management node (ndb_mgmd) -> data nodes (ndbd) -> sql node (mysqld)
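The order above can be sketched as a set of helper functions, one per host. This is only a sketch, using the paths and addresses from the earlier stages; each function is meant to be run on the host named in its comment, in the numbered order:

```shell
#!/bin/sh
# Sketch of the cluster startup order. Paths follow the earlier stages;
# run each function on the host named in its comment, in this order.

start_mgm_node() {   # 1) on manage.domain.com (192.168.20.10)
    ndb_mgmd -f /var/lib/mysql-cluster/config.ini
}

start_data_node() {  # 2) on ndb1 and ndb2 (192.168.20.20 / .30)
    /usr/local/mysql/bin/ndbd    # add --initial only on the very first start
}

start_sql_node() {   # 3) on the SQL node (here the same machine as the MGM node)
    /etc/rc.d/init.d/mysql.server start
}
```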
Because this setup uses three machines in the layout Management node (*1) plus SQL nodes + Data nodes (*2), the SQL node is started as follows... that is, MySQL is started on the management-node machine. MySQL on the SQL node is installed the same way as in Stage 1. The SQL node's /etc/my.cnf looks like this:
Code:
[mysqld]
ndbcluster
socket=/var/lib/mysql/mysql.sock
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1

[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[ndb_mgm]
connect-string=192.168.20.10

[ndb-mgmd]
config-file=/var/lib/mysql-cluster/config.ini
# service mysql.server start -- then go into mysql and check whether the ndbcluster engine is supported:
Code:
[root@backup mysql-cluster]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2 to server version: 5.0.19-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show engines\g
+------------+---------+----------------------------------------------------------------+
| Engine     | Support | Comment                                                        |
+------------+---------+----------------------------------------------------------------+
| MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         |
| MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      |
| InnoDB     | YES     | Supports transactions, row-level locking, and foreign keys     |
| BerkeleyDB | NO      | Supports transactions and page-level locking                   |
| BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) |
| EXAMPLE    | YES     | Example storage engine                                         |
| ARCHIVE    | YES     | Archive storage engine                                         |
| CSV        | YES     | CSV storage engine                                             |
| ndbcluster | YES     | Clustered, fault-tolerant, memory-based tables                 |
| FEDERATED  | YES     | Federated MySQL storage engine                                 |
| MRG_MYISAM | YES     | Collection of identical MyISAM tables                          |
| ISAM       | NO      | Obsolete storage engine                                        |
+------------+---------+----------------------------------------------------------------+
12 rows in set (0.00 sec)
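The same check can be scripted instead of done interactively. A minimal sketch, assuming the mysql client is on PATH and (as elsewhere in this HOWTO) root still has no password:

```shell
#!/bin/sh
# Return 0 if the ndbcluster engine is listed as supported, non-zero otherwise.
ndb_supported() {
    mysql -u root -e "SHOW ENGINES" | grep -i ndbcluster | grep -qi yes
}

# Example use: ndb_supported && echo "NDB OK" || echo "NDB missing"
```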
Run /usr/local/mysql/bin/ndb_mgm, then enter the command SHOW to see what is going on. A sample output looks like this:
Code:
[root@backup mysql-cluster]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: 192.168.20.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.20.20  (Version: 5.0.19, Nodegroup: 0, Master)
id=3    @192.168.20.30  (Version: 5.0.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.20.10  (Version: 5.0.19)

[mysqld(API)]   1 node(s)
id=4    @192.168.20.10  (Version: 5.0.19)

ndb_mgm>
If you see "not connected, accepting connect from 192.168.0.[1/2/3]" in the first or last two lines, then you have a problem. Please email me with as much detail as you can give and I can try to find out where you have gone wrong and change this HOWTO to fix it. If everything is OK up to here, it is time to test MySQL. On either server ndb1 or ndb2 enter the following commands (note that we have no root password yet):
Code:
mysql
use test;
CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
INSERT INTO ctest () VALUES (1);
SELECT * FROM ctest;

You should see 1 row returned (with the value 1).
If this works, now go to the other server and run the same SELECT and see what you get. Insert from that host and go back to host 1 and see whether it works there too. If it does, congratulations.

The final test is to kill one server to see what happens. If you have physical access to the machine, simply unplug its network cable and see whether the other server keeps going fine (try the SELECT query). If you don't have physical access, do the following:

ps aux | grep ndbd

You get output like this:

root   5578  0.0  0.3    6220   1964 ?      S 03:14 0:00 ndbd
root   5579  0.0 20.4  492072 102828 ?      R 03:14 0:04 ndbd
root  23532  0.0  0.1    3680    684 pts/1  S 07:59 0:00 grep ndbd

In this case ignore the command "grep ndbd" (the last line) but kill the first two processes by issuing kill -9 pid pid:

kill -9 5578 5579

Then try the SELECT on the other server. While you are at it, run a SHOW command on the management node to see that the server has died. To restart it, just issue ndbd -- NOTE: no --initial!

Starting and stopping ndbd automatically on boot

To achieve this, do the following on both ndb1 and ndb2:
Code:
echo "ndbd" > /etc/rc.d/init.d/ndbd
chmod +x /etc/rc.d/init.d/ndbd
chkconfig --add ndbd
Note that this is a really quick script. You really ought to write one that at least checks whether ndbd is already started on the machine.

Haha... getting this working feels really, really good... that is the only word that captures the feeling: awesome!!!
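Picking up the note about the quick script: a slightly safer init script might look like the sketch below. It is hypothetical, not tested in production; pidof and killall are assumed to be available (they are standard on the Linux systems used in this HOWTO), and ndbd is assumed to be on PATH as installed in Stage 1.

```shell
#!/bin/sh
# Sketch of a safer /etc/rc.d/init.d/ndbd: refuses to start a second
# copy of ndbd if one is already running.
ndbd_ctl() {
    case "$1" in
        start)
            if pidof ndbd >/dev/null 2>&1; then
                echo "ndbd already running"
            else
                ndbd    # NOTE: no --initial on a routine restart
            fi
            ;;
        stop)
            killall ndbd 2>/dev/null
            ;;
        *)
            echo "Usage: $0 {start|stop}"
            ;;
    esac
}

ndbd_ctl "$1"
```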




Welcome to ADJ Network Lab (http://dz.adj.idv.tw/)    Powered by Discuz! 6.0.0