My Certifications

Oracle Database 10g OCP
Oracle Database 11g OCP
Oracle Database 12c OCP
Oracle GoldenGate 10 OCS
Oracle 11gR2 RAC OCE
MySQL 5.6 Database OCP
Novell SUSE Linux 10 CLA
IBM Power System AIX 7.1



How to Preserve Open Mode of PDBs When the CDB Restarts

Starting with version 12.1.0.2 (and continuing in 12.2.0.1 and 18c), it is possible to save or discard the open mode of one or more PDBs when the CDB restarts.
 
The default open mode for a user-created PDB (here ZHONGWC1 and ZHONGWC2) is MOUNTED (except for PDB$SEED, which is READ ONLY and cannot be opened READ WRITE by users).
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ZHONGWC1                       MOUNTED
         4 ZHONGWC2                       MOUNTED
SQL> select CON_ID, NAME, OPEN_MODE, RESTRICTED, OPEN_TIME  from gv$containers;
 
    CON_ID NAME       OPEN_MODE  RES OPEN_TIME
---------- ---------- ---------- --- --------------------------------
         1 CDB$ROOT   READ WRITE NO  13-DEC-18 09.46.03.458 PM +08:00
         2 PDB$SEED   READ ONLY  NO  13-DEC-18 09.46.03.509 PM +08:00
         3 ZHONGWC1   MOUNTED
         4 ZHONGWC2   MOUNTED
 
 
 
How to save or discard the open mode of a PDB when the CDB restarts.
Use the ALTER PLUGGABLE DATABASE statement with the SAVE STATE or DISCARD STATE clause.
SQL> alter pluggable database zhongwc1 save state;
 
Pluggable database altered.
 
SQL> alter pluggable database zhongwc1 discard state;
 
Pluggable database altered.
 
 
 
For an Oracle RAC CDB, you can use the INSTANCES clause together with the SAVE STATE or DISCARD STATE clause to specify the instances on which a PDB's open mode is preserved.
List one or more instances in the instances clause in the following form:
INSTANCES = ('instance_name' [,'instance_name'] … )
 
Specify ALL in the instances clause to modify the PDB in all running instances, as in the following example:
INSTANCES = ALL
 
Specify ALL EXCEPT in the instances clause to modify the PDB in all of the instances, except for the instances listed, in the following form:
INSTANCES = ALL EXCEPT('instance_name' [,'instance_name'] … )
 
 
 
How to monitor the saved state of PDBs.
You can use the DBA_PDB_SAVED_STATES view to see the saved state of PDBs. In the following example we save the OPEN state of ZHONGWC2, so that the next time the CDB restarts, ZHONGWC2 will be opened instead of mounted.
SQL> select CON_ID, NAME, OPEN_MODE, RESTRICTED, OPEN_TIME  from gv$containers;
 
    CON_ID NAME       OPEN_MODE  RES OPEN_TIME
---------- ---------- ---------- --- --------------------------------
         1 CDB$ROOT   READ WRITE NO  13-DEC-18 09.46.03.458 PM +08:00
         2 PDB$SEED   READ ONLY  NO  13-DEC-18 09.46.03.509 PM +08:00
         3 ZHONGWC1   MOUNTED
         4 ZHONGWC2   MOUNTED
 
SQL> startup pluggable database zhongwc2 open;
Pluggable Database opened.
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ZHONGWC1                       MOUNTED
         4 ZHONGWC2                       READ WRITE NO
SQL> select CON_ID, NAME, OPEN_MODE, RESTRICTED, OPEN_TIME  from gv$containers;
 
    CON_ID NAME       OPEN_MODE  RES OPEN_TIME
---------- ---------- ---------- --- --------------------------------
         1 CDB$ROOT   READ WRITE NO  13-DEC-18 09.46.03.458 PM +08:00
         2 PDB$SEED   READ ONLY  NO  13-DEC-18 09.46.03.509 PM +08:00
         3 ZHONGWC1   MOUNTED
         4 ZHONGWC2   READ WRITE NO  13-DEC-18 09.53.07.169 PM +08:00
 
SQL> select con_name, state from dba_pdb_saved_states;
 
no rows selected
 
SQL> alter pluggable database zhongwc2 save state;
 
Pluggable database altered.
 
SQL> select con_name, state from dba_pdb_saved_states;
 
CON_NAME   STATE
---------- --------------
ZHONGWC2   OPEN
 
 
Only the saved state is recorded. Once the DISCARD STATE command is executed for a PDB, its entry is removed from DBA_PDB_SAVED_STATES.
SQL> alter pluggable database zhongwc2 discard state;
 
Pluggable database altered.
 
SQL> select con_name, state from dba_pdb_saved_states;
 
no rows selected
 
 
 
As mentioned above, saving the open state of a PDB is available since 12.1.0.2.
For 12.1.0.1 you can create a database startup trigger to place the PDB(s) into a particular open mode at database startup.
To open all PDBs at CDB startup, create the following trigger in the CDB root:
 
CREATE TRIGGER open_all_pdbs 
  AFTER STARTUP ON DATABASE 
BEGIN 
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN'; 
END ;
/

How to estimate how much memory MySQL uses

Global memory use
SELECT (@@innodb_buffer_pool_size +@@innodb_log_buffer_size +@@key_buffer_size) / 1024 / 1024 /1024 AS MEMORY_GB;


 

Performance Schema (performance_schema) memory use
SELECT SUBSTRING_INDEX(event_name,'/',2) AS code_area,
       sys.format_bytes(SUM(current_alloc)) AS current_alloc
FROM sys.x$memory_global_by_current_bytes
GROUP BY SUBSTRING_INDEX(event_name,'/',2)
ORDER BY SUM(current_alloc) DESC;
 
SELECT SUM(CURRENT_NUMBER_OF_BYTES_USED) / 1024 / 1024 AS MEMORY_MB
FROM performance_schema.memory_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'memory/performance_schema%';
       
SHOW ENGINE PERFORMANCE_SCHEMA STATUS;
 
 
All events memory allocation
SELECT EVENT_NAME, COUNT_ALLOC, COUNT_FREE, sys.format_bytes(SUM_NUMBER_OF_BYTES_ALLOC) AS TotalAlloc,
sys.format_bytes(SUM_NUMBER_OF_BYTES_FREE) AS TotalFree, sys.format_bytes(CURRENT_NUMBER_OF_BYTES_USED) AS CurrentUsage,
sys.format_bytes(HIGH_NUMBER_OF_BYTES_USED) AS MaxUsed
FROM performance_schema.memory_summary_global_by_event_name
WHERE CURRENT_NUMBER_OF_BYTES_USED > 0
ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC;
 
 

The static (global) memory usage is the sum of the following variables:
innodb_additional_mem_pool_size
innodb_buffer_pool_size
innodb_log_buffer_size
key_buffer_size
query_cache_size

 

For each active connection, each of the following per-session buffers may be allocated (approximately):
binlog_cache_size
binlog_stmt_cache_size
bulk_insert_buffer_size
join_buffer_size
myisam_sort_buffer_size
net_buffer_length * 2
preload_buffer_size
read_buffer_size
read_rnd_buffer_size
sort_buffer_size
thread_stack


The formulas below give a rough estimate; refine the per-query averages with statistics showing how many queries use the sort buffer, how many use full joins, and so on.
Global Usage = key_buffer_size + query_cache_size + 1.1 * innodb_buffer_pool_size + innodb_additional_mem_pool_size + innodb_log_buffer_size

Per Thread   = thread_stack + 2 * net_buffer_length


Note: the per query contribution is more or less based on an average query
Per Query    = "buffer for reading rows" + "sorting" + "full joins" + "binlog cache" + "index preload" + "internal tmp tables"
                  =  max(read_buffer_size, read_rnd_buffer_size)
                  + max(sort_buffer_size/2, "avg queries with scan" * "avg scans with merge" * sort_buffer_size)
                  + "avg full joins" * join_buffer_size
                  + "avg binlog cache use" * binlog_cache_size
                  + preload_buffer_size
                  + "avg tmp tables" * min(tmp_table_size, max_heap_table_size)

Total        = "global" + max_used_connections * ("thread" + "query")
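As a rough illustration of the formulas above, here is a Python sketch that combines the global, per-thread, and worst-case per-query contributions. The helper name and the sample values are illustrative only; substitute the variables from your own server (SHOW GLOBAL VARIABLES) in bytes.

```python
# Rough MySQL memory estimate following the simplified formulas above.
# All values are in bytes and purely illustrative.

def mysql_memory_estimate(v, max_used_connections):
    """v is a dict of the relevant server variables, in bytes."""
    global_usage = (v["key_buffer_size"]
                    + v["query_cache_size"]
                    + 1.1 * v["innodb_buffer_pool_size"]   # ~10% InnoDB overhead
                    + v["innodb_additional_mem_pool_size"]
                    + v["innodb_log_buffer_size"])
    per_thread = v["thread_stack"] + 2 * v["net_buffer_length"]
    # Worst-case per-query contribution (every buffer fully allocated once).
    per_query = (max(v["read_buffer_size"], v["read_rnd_buffer_size"])
                 + v["sort_buffer_size"]
                 + v["join_buffer_size"]
                 + v["binlog_cache_size"]
                 + v["preload_buffer_size"]
                 + min(v["tmp_table_size"], v["max_heap_table_size"]))
    return global_usage + max_used_connections * (per_thread + per_query)

# Example with values roughly matching MySQL 5.6 defaults:
defaults = {
    "key_buffer_size": 8 * 2**20,
    "query_cache_size": 1 * 2**20,
    "innodb_buffer_pool_size": 128 * 2**20,
    "innodb_additional_mem_pool_size": 8 * 2**20,
    "innodb_log_buffer_size": 8 * 2**20,
    "thread_stack": 256 * 2**10,
    "net_buffer_length": 16 * 2**10,
    "read_buffer_size": 128 * 2**10,
    "read_rnd_buffer_size": 256 * 2**10,
    "sort_buffer_size": 256 * 2**10,
    "join_buffer_size": 256 * 2**10,
    "binlog_cache_size": 32 * 2**10,
    "preload_buffer_size": 32 * 2**10,
    "tmp_table_size": 16 * 2**20,
    "max_heap_table_size": 16 * 2**20,
}
print(round(mysql_memory_estimate(defaults, 100) / 2**20))  # total estimate in MB
```

Remember this is a worst case: most connections never allocate every buffer at full size at the same time.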

Replicating from MySQL 8.0 to MySQL 5.7

We’ll discuss how to set up replication from MySQL 8.0 to MySQL 5.7. There are situations where this configuration might help.
 
 
The default character set and collation have changed in MySQL 8
The default value of the character_set_server and character_set_database system variables has changed from latin1 to utf8mb4.
The default value of the collation_server and collation_database system variables has changed from latin1_swedish_ci to utf8mb4_0900_ai_ci.
 
# master my.cnf
[client]
default-character-set=utf8
[mysqld]
character-set-server=utf8
collation-server=utf8_unicode_ci
 
 
You have to create the replication user using mysql_native_password. This is because MySQL 8 changed the default authentication plugin to caching_sha2_password, which is not supported by MySQL 5.7.
 
master> CREATE USER 'replica_user'@'%' IDENTIFIED WITH mysql_native_password BY 'repli$cat';
master> GRANT REPLICATION SLAVE ON *.* TO 'replica_user'@'%';
 
slave > CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_USER='replica_user', MASTER_PASSWORD='repli$cat',MASTER_PORT=19025, MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=155;
slave > start slave;
 
# This procedure works with GTIDs too
slave > CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_USER='replica_user', MASTER_PASSWORD='repli$cat',MASTER_PORT=19025,MASTER_AUTO_POSITION = 1 ;
slave > start slave; 
 
 
Any attempt to use a new MySQL 8 feature such as roles, invisible indexes, or caching_sha2_password will make replication stop with an error:
 
master > alter user replica_user identified with caching_sha2_password by 'sekret';
 
slave > show slave status\G
               Last_SQL_Errno: 1396
               Last_SQL_Error: Error 'Operation ALTER USER failed for 'replica_user'@'%'' on query. Default database: ''. Query: 'ALTER USER 'replica_user'@'%' IDENTIFIED WITH 'caching_sha2_password' AS '$A$005$H MEDi\"gQ
                        wR{/I/VjlgBIUB08h1jIk4fBzV8kU1J2RTqeqMq8Q2aox0''
 
 
 

NetApp Storage Configuration

Configure the host that will connect to the storage

Open the storage array's host connection configuration option

[screenshot: netapp1]

 

Enter the host name and click Next

[screenshot: netapp2]

 

Check the host's HBA WWNs; this host has two HBA cards:
cat /sys/class/fc_host/host1/port_name
0x2100001b3294bc8e

cat /sys/class/fc_host/host2/port_name
0x2101001b32b4bc8e
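As a side note, collecting the WWNs can be scripted. The sketch below (helper names are hypothetical) reads every port_name under /sys/class/fc_host and reformats the raw hex into the colon-separated form storage GUIs usually display:

```python
import glob
import os

def format_wwn(raw):
    """Normalize a sysfs port_name such as '0x2100001b3294bc8e'
    into the colon-separated form most storage GUIs display."""
    hexstr = raw.strip().lower()
    if hexstr.startswith("0x"):
        hexstr = hexstr[2:]
    hexstr = hexstr.zfill(16)
    return ":".join(hexstr[i:i + 2] for i in range(0, 16, 2))

def list_hba_wwns(base="/sys/class/fc_host"):
    """Return {hostN: formatted WWN} for every FC HBA port found."""
    wwns = {}
    for path in sorted(glob.glob(os.path.join(base, "host*", "port_name"))):
        host = os.path.basename(os.path.dirname(path))
        with open(path) as f:
            wwns[host] = format_wwn(f.read())
    return wwns

print(format_wwn("0x2100001b3294bc8e"))  # 21:00:00:1b:32:94:bc:8e
```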

 

Enter the host WWN and click Add

[screenshot: netapp3]

[screenshot: netapp4]

 

Configure the second HBA card, then click Next

[screenshot: netapp5]

 

Set the Host type; since multipathing will be used, select Linux (DM-MP)

[screenshot: netapp6]

 

Choose whether the host shares LUNs with other hosts; here it does not

[screenshot: netapp7]

 

The configuration summary is shown below; click Finish to complete the creation

[screenshot: netapp8]

 

This confirms that the Host Mapping has been created; we do no further configuration here, so select No

[screenshot: netapp9]

 

As you can see, the Host has been created

[screenshot: netapp10]

 

 

Storage configuration

Click Storage & Copy Services, locate the previously created Volume Groups, and check the Free Capacity

[screenshot: netapp11]

[screenshot: netapp12]

 

After clicking the create option, the LUN creation screen appears. Enter the values, select the host created in the previous section under Map to host, and click Finish.

[screenshot: netapp13]

[screenshot: netapp14]

[screenshot: netapp15]

 

Click the newly created LUN; you can see that the LUN is being initialized. Once initialization is complete, the LUN is ready for use.

[screenshot: netapp16]

[screenshot: netapp17]

 

 

Next, use Linux Device Mapper Multipathing to bind the paths. Multipath configuration is outside the scope of this article; see the following two articles I wrote:

Oracle ASM on RHEL 5 with multipath

Oracle ASM on RHEL 6 with udev and multipath

 

 

The final multipath binding is shown below

[screenshot: netapp18]

 

 

MySQL Server Variables: slave_net_timeout, master_connect_retry, master_retry_count

Temporary network glitches are quite common, particularly over wide area network (WAN) connections. In a replication setup, MySQL handles network issues by allowing replication to resume using the existing connection to the master. If the issue has not been resolved after some time, a retry mechanism kicks in.

There are three options that define this timeout and how the retries are made:

  • slave_net_timeout – how long to wait before trying to reconnect.
  • master_retry_count – how many reconnect attempts to make.
  • master_connect_retry – how long to wait between reconnect attempts.

All options are set on the slave and have no direct effect on the master. The options are described in more detail below.
 

 

The retry flow is:


1. When a network issue is detected, the slave I/O thread waits slave_net_timeout seconds before making the first attempt to reconnect. If the issue resolves itself before then, the existing connection is reused and nothing more happens. During these slave_net_timeout seconds, the slave I/O thread status is still shown as running.
2. The slave I/O thread enters a loop that executes up to master_retry_count times. At this point the slave I/O status is set to not running.
3. The thread attempts to reconnect to the master; the first retry is made immediately.
4. If the connection is reestablished, replication resumes; the slave I/O thread status is set back to running and the loop exits.
5. If the connection is not reestablished, the I/O thread sleeps for master_connect_retry seconds before going back to step 3.
6. When master_retry_count retries have been made, the loop exits.
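The steps above imply a worst-case window before the slave gives up. It can be sketched as follows (hypothetical helper, assuming every reconnect attempt fails):

```python
def worst_case_reconnect_window(slave_net_timeout, master_retry_count,
                                master_connect_retry):
    """Seconds from the network failure until the slave I/O thread gives up,
    assuming every reconnect attempt fails.  Per the flow above: wait
    slave_net_timeout once, then retry master_retry_count times; the first
    retry is immediate and each later one sleeps master_connect_retry."""
    if master_retry_count == 0:  # 0 means retry indefinitely
        return float("inf")
    return slave_net_timeout + (master_retry_count - 1) * master_connect_retry

# MySQL 5.7 defaults: 60s timeout, 86400 retries, 60s between retries
print(worst_case_reconnect_window(60, 86400, 60) / 86400)  # 60.0 (days)
```

With the defaults, a slave keeps retrying for roughly 60 days before the I/O thread stops for good.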

 

 

Slave Configuration Options
slave_net_timeout
The slave_net_timeout option specifies how long the slave's I/O thread should wait for the existing connection to start working again before beginning connection retries. The value is specified in seconds, with a default of 3600 seconds (1 hour) in MySQL 5.6 and earlier and 60 seconds in MySQL 5.7 and later. As a rule of thumb, the more stable the network connection between master and slave, the lower you can set this value:

  • If the master and slave are both on the same local area network (LAN), you should be fine to set slave_net_timeout as low as 60 to 120 seconds as the network is expected to be very stable.
  • If you have a lot of network flipping, you may want to increase the value.

 

master_retry_count
The master_retry_count option specifies how many attempts should be made to reconnect to the master.

This option was deprecated as a startup option in MySQL 5.6.1. In later releases you should use CHANGE MASTER TO to change the value of master_retry_count, for example:

mysql> CHANGE MASTER TO MASTER_RETRY_COUNT = 20;
The default is to make 86400 attempts to reconnect.

A special value for master_retry_count is 0 which means to attempt to reconnect indefinitely.

 

master_connect_retry
The option master_connect_retry is the interval between reconnection attempts. The value is specified in seconds with the default being 60 seconds.

The startup option has been deprecated since 5.1.17 and was removed in MySQL 5.5; MySQL will refuse to start if it remains in your configuration file. Use CHANGE MASTER TO instead, for example:

mysql> CHANGE MASTER TO MASTER_CONNECT_RETRY = 30;

Viewing Disk Information on AIX 7.2

# uname
AIX

# oslevel
7.2.0.0


Check the disk status
# lsdev -Cc disk
hdisk0 Available          Virtual SCSI Disk Drive
hdisk1 Available          Virtual SCSI Disk Drive
hdisk2 Available 43-T1-01 3PAR InServ Virtual Volume
hdisk3 Available 46-T1-01 3PAR InServ Virtual Volume
hdisk4 Available 43-T1-01 3PAR InServ Virtual Volume


Check the disk sizes (bootinfo -s reports the size in MB)
# bootinfo -s hdisk0
102400
# bootinfo -s hdisk1
102400
# bootinfo -s hdisk2
1048576
# bootinfo -s hdisk3
512000
# bootinfo -s hdisk4
512000
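Since bootinfo -s reports sizes in MB, the outputs above convert to GB as follows (a trivial Python sketch using the values shown):

```python
# bootinfo -s prints disk sizes in MB; convert the output above to GB.
disk_mb = {
    "hdisk0": 102400,
    "hdisk1": 102400,
    "hdisk2": 1048576,
    "hdisk3": 512000,
    "hdisk4": 512000,
}

for disk, mb in disk_mb.items():
    print(f"{disk}: {mb / 1024:g} GB")   # e.g. hdisk0: 100 GB
```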


Check which volume group and file systems each disk belongs to
# lspv
hdisk0          00fb052d5090f957                    rootvg          active      
hdisk1          00fb052d552053c5                    rootvg          active      
hdisk2          00fb052d2cc586ec                    sapvg           active      
hdisk3          00fb052d1ff5e7e1                    backupvg        active      
hdisk4          00fb052de9a29bad                    vgbakjcp        active      

# lspv -l hdisk0
hdisk0:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
hd3                   80      80      80..00..00..00..00    /tmp
hd1                   80      80      00..00..00..80..00    /home
hd10opt               80      80      00..00..00..78..02    /opt
hd8                   1       1       00..00..01..00..00    N/A
hd4                   104     104     00..00..80..00..24    /
hd2                   80      80      00..80..00..00..00    /usr
hd9var                80      80      00..00..78..02..00    /var
hd5                   1       1       01..00..00..00..00    N/A
hd6                   256     256     79..45..00..00..132   N/A
hd11admin             1       1       00..01..00..00..00    /admin
lg_dumplv             32      32      00..32..00..00..00    N/A
livedump              2       2       00..02..00..00..00    /var/adm/ras/livedump


Check the disk attributes
# lsattr -El hdisk0
PCM             PCM/friend/vscsi                 Path Control Module        False
algorithm       fail_over                        Algorithm                  True
hcheck_cmd      test_unit_rdy                    Health Check Command       True+
hcheck_interval 0                                Health Check Interval      True+
hcheck_mode     nonactive                        Health Check Mode          True+
max_transfer    0x40000                          Maximum TRANSFER Size      True
pvid            00fb052d5090f9570000000000000000 Physical volume identifier False
queue_depth     3                                Queue DEPTH                True+
reserve_policy  no_reserve                       Reserve Policy             True+

Oracle Database 18c Active Data Guard Far Sync

Oracle's official Maximum Availability Architecture (MAA) description:
New in Oracle Database 12c, Active Data Guard Far Sync provides the ability to perform a zero data loss failover to a remote standby database without requiring a second standby database or complex operation. Far Sync enables this by deploying a Far Sync instance (a lightweight Oracle instance that has only a control file, spfile, password file and standby log files; there are no database files or online redo logs) at a distance that is within an acceptable range of the primary for SYNC transport. A Far Sync instance receives redo from the primary via SYNC transport and immediately forwards the redo to up to 29 remote standby databases via ASYNC transport as described in Figure 1. A Far Sync instance can also forward redo to the new Oracle Database Backup, Logging, and Recovery Appliance.
The presence of a Far Sync instance in a Data Guard configuration is transparent to its operation during switchover or failover, the administrator uses the same commands used for any Data Guard configuration. Far Sync requires nothing new to learn or any additional procedures in order to perform a zero data loss failover across a wide area network (WAN).
 
Active Data Guard Far Sync Standby is a new feature in Oracle Database 12c. Far Sync is implemented by deploying a Far Sync instance at a location relatively close to the primary database: the primary ships redo synchronously to the Far Sync instance, and the Far Sync instance then forwards the redo asynchronously to the distant terminal standby database. This guarantees zero data loss while reducing the load on the primary over low-bandwidth networks. A Far Sync instance has only a control file, a password file, and a parameter file.
 
[Figure 1: farsync — Far Sync architecture]


Installation Oracle Database 18c RAC On Oracle Linux 7.5 — part 2 DB

Install the Database Software
The database software is now delivered as a base-image installation, so perform the following on the first node as the "oracle" user.
[oracle@zhongwc1 ~]$ unzip /tmp/LINUX.X64_180000_db_home.zip -d /u01/app/oracle/product/18.0.0.0/db_1/
[oracle@zhongwc1 ~]$ cd $ORACLE_HOME
[oracle@zhongwc1 db_1]$ ./runInstaller 

