My Certifications

Oracle Database 10g OCP
Oracle Database 11g OCP
Oracle Database 12c OCP
Oracle GoldenGate 10 OCS
Oracle 11gR2 RAC OCE
MySQL 5.6 Database OCP
Novell SUSE Linux 10 CLA
IBM Power System AIX 7.1



NetApp Storage Configuration

Configuring the host that will connect to the storage

Open the storage system's host configuration option

netapp1

 

Enter the host name and click Next

netapp2

 

Check the host's HBA WWNs; this host has two HBA cards
cat /sys/class/fc_host/host1/port_name
0x2100001b3294bc8e

cat /sys/class/fc_host/host2/port_name
0x2101001b32b4bc8e

 

Enter the host WWN and click Add

netapp3

netapp4

 

Configure the second HBA card and click Next

netapp5

 

Set the Host type; since multipathing is enabled, choose Linux (DM-MP)

netapp6

 

Choose whether the host will share LUNs; here it will not

netapp7

 

The configuration summary is shown below; click Finish to complete creation

netapp8

 

This tells us the Host Mapping has been created; we do no further configuration here and select No

netapp9

 

As you can see, the Host has been created

netapp10

 

 

Storage Configuration

Click Storage & Copy Services, locate the previously provisioned Volume Groups, and check the Free Capacity

netapp11

netapp12

 

After clicking the create option, the LUN creation dialog appears. Enter the values, select the host created in the previous section from Map to host, and click Finish.

netapp13

netapp14

netapp15

 

Click the newly created LUN and you can see that it is being formatted. Once formatting completes, the LUN is ready for use.

netapp16

netapp17

 

 

The next step is to use Linux Device Mapper Multipathing to bind the paths. Multipath configuration is outside the scope of this article; refer to the following two articles I wrote:

Oracle ASM on RHEL 5 with multipath

Oracle ASM on RHEL 6 with udev and multipath

 

 

The final multipath binding is shown below

netapp18

 

 

MySQL Server Variable: slave_net_timeout; master_connect_retry; master_retry_count

Temporary network glitches are quite common particularly for wide area network (WAN) connections. In a replication setup MySQL handles network issues by allowing the replication to resume using the existing connection to the master. If the network issue has not been resolved after some time, a retry mechanism kicks in.

There are three options that define this timeout and how the retries are made:

  • slave_net_timeout – how long to wait before trying to reconnect.
  • master_retry_count – how many reconnect attempts to make.
  • master_connect_retry – how long to wait before reconnect attempts.

All three options are set on the slave and have no direct effect on the master. The options are described in more detail below.
 

 

The retry flow is:

myrepvar

1. When a network issue is detected, the slave I/O thread waits for slave_net_timeout seconds before making the first attempt to reconnect. If the issue resolves itself before that, the existing connection is used and nothing more happens. During those slave_net_timeout seconds, the slave I/O thread status is still reported as running.
2. The slave I/O thread enters a loop that is executed up to master_retry_count times. At this point the slave I/O status is set to not running.
3. An attempt is made to reconnect to the master; the first retry is made immediately.
4. If the connection is reestablished, replication resumes. The slave I/O thread status is set back to running and the loop is exited.
5. If the connection wasn't reestablished, the I/O thread sleeps for master_connect_retry seconds before going back to step 3.
6. When master_retry_count retries have been made, the loop is exited.
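The steps above can be sketched as a small loop. This is only an illustration of the control flow, not MySQL's actual implementation; try_connect is a hypothetical callback that returns True once the master is reachable again, and the parameter names mirror the MySQL options.

```python
import time

def replicate_io_thread(try_connect, slave_net_timeout=60,
                        master_retry_count=86400, master_connect_retry=60,
                        sleep=time.sleep):
    """Sketch of the slave I/O thread retry flow described above."""
    # Step 1: hold on to the existing connection for slave_net_timeout
    # seconds (the I/O thread is still reported as running).
    sleep(slave_net_timeout)
    # Step 2: reconnect loop, at most master_retry_count attempts
    # (the I/O thread is now reported as not running).
    for attempt in range(1, master_retry_count + 1):
        # Steps 3-4: the first retry is immediate; on success, resume.
        if try_connect():
            return attempt
        # Step 5: wait master_connect_retry seconds between attempts.
        if attempt < master_retry_count:
            sleep(master_connect_retry)
    # Step 6: all retries exhausted.
    return None
```

The sleep parameter is injected only so the flow can be exercised without actually waiting.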

 

 

Slave Configuration Options
slave_net_timeout
The slave_net_timeout option specifies how long the slave's I/O thread should wait for the existing connection to start working again before beginning connection retries. The value is specified in seconds, with a default of 3600 seconds (1 hour) in MySQL 5.6 and earlier and 60 seconds in MySQL 5.7 and later. As a rule of thumb, the more stable the network connection between the master and slave, the lower you can set this value:

  • If the master and slave are both on the same local area network (LAN), you should be fine to set slave_net_timeout as low as 60 to 120 seconds as the network is expected to be very stable.
  • If you have a lot of network flipping, you may want to increase the value.

 

master_retry_count
The master_retry_count option specifies how many attempts should be made to reconnect to the master.

This option has been deprecated as a start-up option as of MySQL 5.6.1. In later releases you should use CHANGE MASTER TO to set master_retry_count, for example:

mysql> CHANGE MASTER TO MASTER_RETRY_COUNT = 20;
The default is to make 86400 attempts to reconnect.

A special value for master_retry_count is 0 which means to attempt to reconnect indefinitely.

 

master_connect_retry
The option master_connect_retry is the interval between reconnection attempts. The value is specified in seconds with the default being 60 seconds.

The start-up option has been deprecated since MySQL 5.1.17 and was removed in MySQL 5.5; MySQL will refuse to start if it is still present in your configuration file. Use CHANGE MASTER TO instead, for example:

mysql> CHANGE MASTER TO MASTER_CONNECT_RETRY = 30;
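Taken together, the three options bound how long a slave keeps trying before the I/O thread gives up. A rough sketch of the arithmetic (the first retry is immediate, so only master_retry_count − 1 pauses occur; this ignores the time the connection attempts themselves take):

```python
def worst_case_reconnect_window(slave_net_timeout, master_retry_count,
                                master_connect_retry):
    """Upper bound in seconds before the I/O thread gives up: the initial
    wait plus the pauses between retries (the first retry is immediate)."""
    return slave_net_timeout + (master_retry_count - 1) * master_connect_retry

# With the MySQL 5.6 defaults (3600 s, 86400 retries, 60 s) this is
# 5,187,540 seconds, i.e. roughly 60 days of retrying.
print(worst_case_reconnect_window(3600, 86400, 60))
```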

Viewing Disk Information on AIX 7.2

# uname
AIX

# oslevel
7.2.0.0


Check disk status
# lsdev -Cc disk
hdisk0 Available          Virtual SCSI Disk Drive
hdisk1 Available          Virtual SCSI Disk Drive
hdisk2 Available 43-T1-01 3PAR InServ Virtual Volume
hdisk3 Available 46-T1-01 3PAR InServ Virtual Volume
hdisk4 Available 43-T1-01 3PAR InServ Virtual Volume


Check disk sizes
# bootinfo -s hdisk0
102400
# bootinfo -s hdisk1
102400
# bootinfo -s hdisk2
1048576
# bootinfo -s hdisk3
512000
# bootinfo -s hdisk4
512000


Check the file system information for each disk
# lspv
hdisk0          00fb052d5090f957                    rootvg          active      
hdisk1          00fb052d552053c5                    rootvg          active      
hdisk2          00fb052d2cc586ec                    sapvg           active      
hdisk3          00fb052d1ff5e7e1                    backupvg        active      
hdisk4          00fb052de9a29bad                    vgbakjcp        active      

# lspv -l hdisk0
hdisk0:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
hd3                   80      80      80..00..00..00..00    /tmp
hd1                   80      80      00..00..00..80..00    /home
hd10opt               80      80      00..00..00..78..02    /opt
hd8                   1       1       00..00..01..00..00    N/A
hd4                   104     104     00..00..80..00..24    /
hd2                   80      80      00..80..00..00..00    /usr
hd9var                80      80      00..00..78..02..00    /var
hd5                   1       1       01..00..00..00..00    N/A
hd6                   256     256     79..45..00..00..132   N/A
hd11admin             1       1       00..01..00..00..00    /admin
lg_dumplv             32      32      00..32..00..00..00    N/A
livedump              2       2       00..02..00..00..00    /var/adm/ras/livedump


Check disk attributes
# lsattr -El hdisk0
PCM             PCM/friend/vscsi                 Path Control Module        False
algorithm       fail_over                        Algorithm                  True
hcheck_cmd      test_unit_rdy                    Health Check Command       True+
hcheck_interval 0                                Health Check Interval      True+
hcheck_mode     nonactive                        Health Check Mode          True+
max_transfer    0x40000                          Maximum TRANSFER Size      True
pvid            00fb052d5090f9570000000000000000 Physical volume identifier False
queue_depth     3                                Queue DEPTH                True+
reserve_policy  no_reserve                       Reserve Policy             True+
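For scripting, the first two columns of lsattr -El output (attribute name and current value) are easy to pull apart. A minimal sketch, assuming the value column never contains spaces, which holds for the sample above:

```python
def parse_lsattr(output):
    """Parse `lsattr -El <device>` output into {attribute: value}.
    Assumes attribute values contain no embedded spaces."""
    attrs = {}
    for line in output.strip().splitlines():
        fields = line.split()
        if len(fields) >= 2:
            attrs[fields[0]] = fields[1]
    return attrs

# Two lines copied from the lsattr output above.
sample = """\
queue_depth     3                                Queue DEPTH                True+
reserve_policy  no_reserve                       Reserve Policy             True+
"""
print(parse_lsattr(sample))
```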

Oracle Database 18c Active Data Guard Far Sync

Description from Oracle's official Maximum Availability Architecture (MAA) documentation:
New in Oracle Database 12c, Active Data Guard Far Sync provides the ability to perform a zero data loss failover to a remote standby database without requiring a second standby database or complex operation. Far Sync enables this by deploying a Far Sync instance (a lightweight Oracle instance that has only a control file, spfile, password file and standby log files, there are no database files or online redo logs) at a distance that is within an acceptable range of the primary for SYNC transport. A Far Sync instance receives redo from the primary via SYNC transport and immediately forwards the redo to up to 29 remote standby databases via ASYNC transport as described in Figure 1. A Far Sync instance can also forward redo to the new Oracle Database Backup, Logging, and Recovery Appliance.
The presence of a Far Sync instance in a Data Guard configuration is transparent to its operation during switchover or failover, the administrator uses the same commands used for any Data Guard configuration. Far Sync requires nothing new to learn or any additional procedures in order to perform a zero data loss failover across a wide area network (WAN).
 
Active Data Guard Far Sync is a new feature in Oracle Database 12c. It works by deploying a Far Sync instance at a location relatively close to the primary database: the primary ships redo synchronously to the Far Sync instance, and the Far Sync instance then forwards the redo asynchronously to the distant terminal standby database. This guarantees zero data loss while reducing the pressure on the primary over low-bandwidth networks. A Far Sync instance has only a control file, a password file, and a parameter file.
 
farsync


Installation Oracle Database 18c RAC On Oracle Linux 7.5 — part 2 DB

Install the Database Software
The DB software is now distributed as an image-based installation, so perform the following on the first node as the "oracle" user.
[oracle@zhongwc1 ~]$ unzip /tmp/LINUX.X64_180000_db_home.zip -d /u01/app/oracle/product/18.0.0.0/db_1/
[oracle@zhongwc1 ~]$ cd $ORACLE_HOME
[oracle@zhongwc1 db_1]$ ./runInstaller 

18cdb1


Installation Oracle Database 18c RAC On Oracle Linux 7.5 — part 1 GI

Oracle Installation Prerequisites
[root@zhongwc1 ~]# cat /etc/oracle-release 
Oracle Linux Server release 7.5
 
 
In addition to the localhost addresses, the "/etc/hosts" file contains the following entries.
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 
#public ip
192.168.0.21  zhongwc1
192.168.0.22  zhongwc2
 
#Vip
192.168.0.121 zhongwc1-vip
192.168.0.122 zhongwc2-vip
 
#private ip
10.10.10.21   zhongwc1-priv
10.10.10.22   zhongwc2-priv
 
#SCAN ip
192.168.0.120 zhongwc-cluster zhongwc-cluster-scan
 
 
Change the setting of SELinux to disabled by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
SELINUX=disabled
 
 
If you have the Linux firewall enabled, you will need to disable or configure it.
systemctl stop firewalld
systemctl disable firewalld
 
 
Make sure NTP is disabled.
systemctl stop chronyd.service
systemctl disable chronyd.service
mv /etc/chrony.conf /etc/chrony.conf.bak


Differences Between MySQL TIMESTAMP and DATETIME

TIMESTAMP is affected by the time zone; DATETIME is not.

mysql> create table tab1(dt1 timestamp,dt2 datetime);
Query OK, 0 rows affected (0.02 sec)

mysql> insert into tab1 (dt1,dt2) values(now(),now());
Query OK, 1 row affected (0.00 sec)

mysql> select * from tab1;
+---------------------+---------------------+
| dt1                 | dt2                 |
+---------------------+---------------------+
| 2018-08-02 11:23:04 | 2018-08-02 11:23:04 |
+---------------------+---------------------+
1 row in set (0.00 sec)

mysql> show variables like '%zone%';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| system_time_zone | CST    |
| time_zone        | SYSTEM |
+------------------+--------+
2 rows in set (0.00 sec)

mysql>  set session time_zone='+8:00';
Query OK, 0 rows affected (0.00 sec)

mysql> select * from tab1;
+---------------------+---------------------+
| dt1                 | dt2                 |
+---------------------+---------------------+
| 2018-08-02 11:23:04 | 2018-08-02 11:23:04 |
+---------------------+---------------------+
1 row in set (0.00 sec)

mysql>  set session time_zone='+9:00';
Query OK, 0 rows affected (0.01 sec)

mysql> select * from tab1;
+---------------------+---------------------+
| dt1                 | dt2                 |
+---------------------+---------------------+
| 2018-08-02 12:23:04 | 2018-08-02 11:23:04 |
+---------------------+---------------------+
1 row in set (0.00 sec)

mysql>  set session time_zone='+10:00';
Query OK, 0 rows affected (0.00 sec)

mysql> select * from tab1;
+---------------------+---------------------+
| dt1                 | dt2                 |
+---------------------+---------------------+
| 2018-08-02 13:23:04 | 2018-08-02 11:23:04 |
+---------------------+---------------------+
1 row in set (0.00 sec)

mysql> show variables like '%zone%';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| system_time_zone | CST    |
| time_zone        | +10:00 |
+------------------+--------+
2 rows in set (0.00 sec)

 

TIMESTAMP and DATETIME have different ranges
DATETIME supports '1000-01-01 00:00:00.000000' to '9999-12-31 23:59:59.999999' (unaffected by the time zone), while TIMESTAMP supports '1970-01-01 00:00:01.000000' to '2038-01-19 03:14:07.999999' in UTC.
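The session experiments above can be mimicked outside MySQL: a TIMESTAMP behaves like an instant stored in UTC and rendered in the session time zone, while a DATETIME behaves like a plain wall-clock value. A small sketch of that model (the stored values are taken from the example; render_timestamp is an illustrative helper, not a MySQL API):

```python
from datetime import datetime, timezone, timedelta

# TIMESTAMP: an instant, stored internally in UTC.
stored_instant = datetime(2018, 8, 2, 3, 23, 4, tzinfo=timezone.utc)
# DATETIME: a wall-clock value with no time zone attached.
stored_wallclock = datetime(2018, 8, 2, 11, 23, 4)

def render_timestamp(instant, offset_hours):
    # Mimic SET SESSION time_zone = '+N:00' followed by SELECT.
    session_tz = timezone(timedelta(hours=offset_hours))
    return instant.astimezone(session_tz).strftime("%Y-%m-%d %H:%M:%S")

print(render_timestamp(stored_instant, 8))             # 2018-08-02 11:23:04
print(render_timestamp(stored_instant, 10))            # 2018-08-02 13:23:04
print(stored_wallclock.strftime("%Y-%m-%d %H:%M:%S"))  # always 2018-08-02 11:23:04
```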

MySQL 5.7 Multi-threaded Replication – Slave Workers Apply Changes Very Slowly

MySQL 5.7 multi-threaded slaves are very slow and cannot catch up with the master.
1. No workload on the slave server.
2. No warnings or errors in the MySQL error log.
3. In the InnoDB status report, transactions section, the slave workers show an "ACTIVE (PREPARED)" status:

---TRANSACTION xxxxxx, ACTIVE (PREPARED) 0 sec
4 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 2
MySQL thread id xxxx, OS thread handle xxxxxxx, query id xxxxxx System lock
---TRANSACTION xxxxxx, ACTIVE (PREPARED) 0 sec
4 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 2
MySQL thread id xxxxx, OS thread handle xxxxxxx, query id xxxxxxx System lock
---TRANSACTION xxxx, ACTIVE (PREPARED) 0 sec
3 lock struct(s), heap size 1136, 1 row lock(s), undo log entries 2
MySQL thread id xxxx, OS thread handle xxxxxx, query id xxxxxx System lock

 

In information_schema.innodb_trx, these transactions are in the RUNNING state:

select * from information_schema.innodb_trx

mysql> /*!50503 select * from information_schema.innodb_trx */;
+----------+-----------+---------------------+-----------------------+------------------+------------+---------------------+
| trx_id   | trx_state | trx_started         | trx_requested_lock_id | trx_wait_started | trx_weight | trx_mysql_thread_id |
+----------+-----------+---------------------+-----------------------+------------------+------------+---------------------+
| xxxxxxxx | RUNNING   | 2018-07-24 19:29:33 | NULL                  | NULL             |          6 | xxxx                |
| xxxxxxxx | RUNNING   | 2018-07-24 19:29:33 | NULL                  | NULL             |          5 | xxxx                |
| xxxxxxxx | RUNNING   | 2018-07-24 19:29:33 | NULL                  | NULL             |          6 | xxxx                |
| xxxxxxxx | RUNNING   | 2018-07-24 19:29:33 | NULL                  | NULL             |          6 | xxxx                |
+----------+-----------+---------------------+-----------------------+------------------+------------+---------------------+

 

A high value of binlog_group_commit_sync_delay may hurt slave worker performance on a multi-threaded slave.
Each slave worker waits the number of microseconds defined by binlog_group_commit_sync_delay before committing; due to Bug 21420180, transactions are still affected by binlog_group_commit_sync_delay even when sync_binlog is set to 0.
And when slave_preserve_commit_order is ON, any delay in one slave worker can make the slave even slower at catching up with the master.
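The scale of the impact is easy to estimate with back-of-the-envelope arithmetic. This sketch assumes the worst case for a lagging slave, where every transaction forms its own group commit (the function and its numbers are illustrative, not measured):

```python
def group_commit_delay_overhead(delay_us, group_commits):
    """Extra wait in seconds added by binlog_group_commit_sync_delay
    (microseconds per group commit) across a number of group commits."""
    return delay_us * group_commits / 1_000_000

# e.g. a 50,000 us delay adds up to 50 seconds over 1,000 group commits
print(group_commit_delay_overhead(50_000, 1_000))
```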

The workaround is to set the binlog_group_commit_sync_delay to 0.

Bug 21420180 is fixed in MySQL 5.7.18 and later.

 
