MYSQL NDB CLUSTER 8 ON CENTOS
Good afternoon. We have been quiet for many months due to lack of time, but we are back, this time with a topic that has little information available in Spanish.
We are going to see how to set up a MySQL NDB Cluster 8 with 2 management nodes, 2 SQL nodes and 4 data nodes, on a clean installation of CentOS 7, where we will add a second disk for the cluster data to keep it separate from the OS.
To do this, we first take some common steps on all 8 virtual machines we will use.
COMMON STEPS
1. We create the LVM volume on the secondary disk. For this, run
fdisk /dev/sdb
Note that your disk may have a different name; run fdisk -l to find out the device name.
I am not going to go into much detail here since there are plenty of tutorials, but it basically comes down to typing the following keys in the interactive menu that appears: m n p 1 t 8e w (it looks like a hieroglyph, but you will be able to decipher it).
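If you prefer to script it, here is a non-interactive sketch of the same key sequence. It is only an illustration and it writes the partition table, so double-check the device name before running it against an empty /dev/sdb:

# n (new) p (primary) 1 (first partition), two Enters to accept the
# default first/last sectors, t 8e (Linux LVM type), w (write and exit)
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/sdb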
Continue running
pvcreate /dev/sdb1
vgcreate data /dev/sdb1
lvcreate -l 100%FREE --name data data
mkfs.ext4 /dev/data/data
mkdir -pv /data
mount /dev/data/data /data
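To check that everything went well, you can verify that the logical volume exists and is mounted:

lvs
df -h /data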
With this we have our disk mounted on /data; now we add it to fstab so that it is mounted at boot.
nano /etc/fstab
and we add to the end of the file
/dev/data/data /data ext4 defaults 0 0
Save and exit, and we are done.
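You can test the fstab entry without rebooting: unmount the volume, let mount -a remount everything listed in fstab, and confirm with findmnt:

umount /data
mount -a
findmnt /data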
2. Disable IPv6
nano /etc/default/grub
We modify the GRUB_CMDLINE_LINUX.... line, replacing it with this one:
GRUB_CMDLINE_LINUX="ipv6.disable=1 crashkernel=auto rhgb quiet"
We save, exit and execute
grub2-mkconfig -o /boot/grub2/grub.cfg
We reboot the machine.
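After the reboot you can confirm that IPv6 is really off; this should print no addresses at all:

ip -6 addr show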
3. Disable firewalld
This part is up to each reader, but we are going to keep it simple and disable the machine's firewall. In production the front end should also sit behind a perimeter firewall since, as we will see later, we are going to use two network cards: one for cluster synchronization and another to reach the database.
systemctl stop firewalld
systemctl disable firewalld
systemctl mask --now firewalld
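To double-check that the service will not come back, the unit should now report itself as masked:

systemctl is-enabled firewalld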
4. Disable SELinux
nano /etc/selinux/config
and we set
SELINUX=disabled
We save and exit.
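Keep in mind that disabling SELinux in this file only takes effect after a reboot; in the meantime you can switch to permissive mode on the fly and check the current state:

setenforce 0
getenforce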
With this, the part common to all machines is complete, and we move on to the different components.
MANAGEMENT
We will set up two management machines in case one fails.
We download and install all the necessary software. Note that we are going to use version 8.0.20 because I do not find the later versions as stable as this one, and they caused many problems during installation.
yum -y update
yum -y install nano wget net-tools
yum -y install epel-release
yum -y install perl-Class-MethodMaker
yum -y install perl-Data-Dumper
yum -y install perl-DBI
yum -y remove mariadb-libs
cd ~
wget https://cdn.mysql.com/archives/mysql-cluster-gpl-8.0/mysql-cluster-community-8.0.20-1.el7.x86_64.rpm-bundle.tar
tar -xvf mysql-cluster-community-8.0.20-1.el7.x86_64.rpm-bundle.tar
rpm -Uvh mysql-cluster-community-management-server-8.0.20-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-common-8.0.20-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-libs-8.0.20-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-client-8.0.20-1.el7.x86_64.rpm
We create the necessary directories
mkdir -p /var/lib/mysql-cluster
mkdir -p /data/logs
And we edit the configuration file. Keep the following scenario in mind: all the machines communicate internally over a dedicated network card in the 172.17.110.0/24 range, while the management and SQL nodes have a second card in the 172.17.120.0/24 range for client access, so that client traffic does not interfere with the internal communication of the cluster.
nano /var/lib/mysql-cluster/config.ini
And we leave it like this:
[ndb_mgmd default]
# Directory for MGM node log files
DataDir=/data/logs
[ndb_mgmd]
#Management Node 1
HostName=172.17.110.10
NodeId=1
[ndb_mgmd]
#Management Node 2
HostName=172.17.110.11
NodeId=2
[ndbd default]
NoOfReplicas=2 # Number of replicas
DataMemory=1G # Memory allocated for data storage
#Directory for Data Node
DataDir=/data
[mysqld]
#SQL Node 1
HostName=172.17.110.20
NodeId=3
[mysqld]
#SQL Node 2
HostName=172.17.110.21
NodeId=4
[ndbd]
#Data Node 1
HostName=172.17.110.31
NodeId=11
[ndbd]
#Data Node 2
HostName=172.17.110.32
NodeId=12
[ndbd]
#Data Node 3
HostName=172.17.110.33
NodeId=13
[ndbd]
#Data Node 4
HostName=172.17.110.34
NodeId=14
I am going to explain this file so that you understand what we are doing, but there is a lot more covered in the official documentation.
In the [ndb_mgmd default] section we put everything that is global to the management nodes, so we do not have to write it twice, since it is the same on both.
We create an [ndb_mgmd] section for each management node with the data that changes, such as the HostName and the NodeId, which cannot be repeated because it is what identifies each node.
The same goes for the data nodes: the global part lives in [ndbd default] and the individual part in each [ndbd].
And finally the SQL nodes, with [mysqld].
There are many more options depending on the scenario, such as the number of replicas, DataMemory, RedoBuffer, MaxNoOfTables, .... As I said, it is better to check the official documentation for your particular case, because otherwise we would never finish the post hahaha.
With this we can start the first management node by executing
ndb_mgmd --ndb-nodeid=1 --config-file=/var/lib/mysql-cluster/config.ini
We are telling it its node id, matching the one we put in the config, and where that config file lives.
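One detail to keep in mind: the management node caches the configuration, so if you modify config.ini later you will have to restart ndb_mgmd with the --reload option so that it rereads the file:

ndb_mgmd --ndb-nodeid=1 --config-file=/var/lib/mysql-cluster/config.ini --reload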
Now we do the same on node 2, except for the last step, where we indicate that it has id 2; that is, after repeating everything and creating the config file, we execute
ndb_mgmd --ndb-nodeid=2 --config-file=/var/lib/mysql-cluster/config.ini
Where, as you can see, we have changed the id.
We run ndb_mgm and then type show, and we should see that both management nodes are up.
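As a shortcut, ndb_mgm can also run the command non-interactively with the -e option:

ndb_mgm -e show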
DATA NODES
We download and install what is necessary
yum -y remove mariadb-libs
cd ~
wget https://cdn.mysql.com/archives/mysql-cluster-gpl-8.0/mysql-cluster-community-8.0.20-1.el7.x86_64.rpm-bundle.tar
tar -xvf mysql-cluster-community-8.0.20-1.el7.x86_64.rpm-bundle.tar
rpm -Uvh mysql-cluster-community-data-node-8.0.20-1.el7.x86_64.rpm
We edit the my.cnf
nano /etc/my.cnf
leaving it as follows:
[mysqld]
ndbcluster
ndb-connectstring=172.17.110.10,172.17.110.11 # IP addresses of Management Nodes 1 and 2
[mysql_cluster]
ndb-connectstring=172.17.110.10,172.17.110.11 # IP addresses of Management Nodes 1 and 2
We are indicating who the management nodes we created earlier (the ones holding all the configuration) are. Note that both go in a single comma-separated connectstring; if the option were repeated on two lines, the second value would simply override the first and we would lose the redundancy.
We create the directory
mkdir -p /data/logs
This path was already defined globally in the management configuration file, and we start the node with
ndbd
Do this for each data node.
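One note: the very first time a data node starts (or after certain changes to config.ini) it should be started with the --initial option so that it creates its data files from scratch. Be careful: this option wipes the node's local cluster data, so it is not for everyday restarts:

ndbd --initial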
MYSQL NODE
Finally, the SQL nodes; we install everything necessary
yum -y install epel-release
yum -y install perl-Class-MethodMaker
yum -y install perl-Data-Dumper
yum -y install perl-DBI
yum -y remove mariadb-libs
cd ~
wget https://cdn.mysql.com/archives/mysql-cluster-gpl-8.0/mysql-cluster-community-8.0.20-1.el7.x86_64.rpm-bundle.tar
tar -xvf mysql-cluster-community-8.0.20-1.el7.x86_64.rpm-bundle.tar
rpm -Uvh mysql-cluster-community-common-8.0.20-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-libs-8.0.20-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-client-8.0.20-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-server-8.0.20-1.el7.x86_64.rpm
and we edit the my.cnf
nano /etc/my.cnf
leaving it as follows:
[mysqld]
ndbcluster
ndb-connectstring=172.17.110.10,172.17.110.11 # IP addresses of Management Nodes 1 and 2
default_storage_engine=ndbcluster # Default storage engine used by MySQL
#[mysql_cluster]
#ndb-connectstring=172.17.110.10,172.17.110.11 # IP addresses of Management Nodes 1 and 2
Same as before: we are telling it who the management nodes holding all the configuration are, plus the default storage engine we are going to use, in this case ndbcluster.
We create the directory
mkdir -p /var/lib/mysql-cluster
And we start the service
service mysqld start
Do this on both nodes.
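Keep in mind that on a clean MySQL 8 installation the server generates a temporary root password on first start; you can fish it out of the log and then secure the installation:

grep 'temporary password' /var/log/mysqld.log
mysql_secure_installation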
After this we can check the status of the cluster again with
ndb_mgm
show
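To confirm that the cluster really replicates end to end, a quick test (the database and table names are just examples): create an NDB table from the mysql client on one SQL node and query it from the other.

-- On SQL node 1:
CREATE DATABASE clustertest;
CREATE TABLE clustertest.t1 (id INT NOT NULL PRIMARY KEY) ENGINE=NDBCLUSTER;
INSERT INTO clustertest.t1 VALUES (1);
-- On SQL node 2:
SELECT * FROM clustertest.t1;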
I hope this helps, if you have any questions, please leave a comment.
Have a nice weekend.
TL
Thank you for reading our posts.