Introduction.
This post discusses the following scenario: a container database has been created with a UTF8 character set (the character set recommended in OCI). This container DB (and its only PDB, of course also in UTF8) already has an active Data Guard in place in OCI.
Scenario requested: can we add (create) a PDB with a different character set in this CDB?
Short story: no, you cannot create a pluggable database with a different character set in the CDB. Plan your CDB with care.
Long story: you cannot do it directly, but a workaround is possible. Your target CDB (the database where you would like to add the PDB with that different character set) needs to be UTF8 to make this work!
AND
You need a second container database, with that different character set, in OCI.
Recommendations / Setup / Tested practice:
In OCI: before implementing a Data Guard on your CDB, it is recommended to implement all the needed PDBs in the CDB first (best practice: via the OCI console). Which means: IF you really really really want to run the scenario below, do it before setting up the Data Guard.
Second recommendation: if you need to add PDBs to the container database (with UTF8), it is still recommended to add or clone all the PDBs needed in that CDB before starting to work on the (active) Data Guard.
Should you need a container database with a different character set, and need to load data into it that also holds that different character set, then set up this environment and perform the remote clone steps as mentioned below, and make sure the PDB is in a good state on the primary side before continuing with the cloning scenario.
Please be aware that a lot of things happen for a reason. If you really really really (did I mention really) have to have environments with another character set (different from UTF8), do consider implementing a separate CDB infrastructure with that different character set!
And most importantly, the order of tools should be: the UI (OCI console) first, then the db cli, and only as a last resort manual intervention using ssh, sqlplus, etc.
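When planning, a quick way to see which character set each container currently uses is a query along these lines (a sketch, not part of the original steps; run from CDB$ROOT, where cdb_properties is available):

```sql
-- Run from CDB$ROOT: show the character set of every container in the CDB.
SELECT con_id, property_value AS characterset
  FROM cdb_properties
 WHERE property_name = 'NLS_CHARACTERSET'
 ORDER BY con_id;
```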
Requirements / best practices that need checking before cloning:
- My source CDB (in UTF8) holds a PDB, and I want to add a PDB in a different character set to my primary side in the existing CDB.
- The source (the remote pluggable database, in a CDB with the different character set) is set to read only.
- Is there a Data Guard set up in place in OCI? If your answer to that question is yes: check with dgmgrl that the Data Guard is happy.
- Do you have enough storage on disk groups to hold a transient PDB, and a clone PDB from transient in same Container Database?
- Is there a database link in place for the remote clone?
- Check the source (in the container database with the different character set) as a preparation for the first clone:
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 ISOP15_PDB1 READ WRITE NO
4 TEST READ ONLY NO
5 WILLIE READ WRITE NO
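The requirement that the source PDB is read only can be enforced with something along these lines (a sketch; the PDB name TEST is taken from the listing above):

```sql
-- On the source CDB: bring the PDB to read only before the remote clone.
ALTER PLUGGABLE DATABASE TEST CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE TEST OPEN READ ONLY;
```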
- Check character set in that Source PDB:
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
VALUE
----------------------------------------------------------------
WE8ISO8859P15
- Check for the Data Guard (if in place) and that it is happy.
Note: by default in OCI the broker is running on the primary side.
dgmgrl
DGMGRL> connect sys / as sysdba
Password:
Connected to "DBUTF8_fra2ps"
Connected as SYSDBA.
DGMGRL> show configuration
Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7
Protection Mode: MaxPerformance
Members:
DBUTF8_fra2ps - Primary database
DBUTF8_fra1h7 - Physical standby database
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 59 seconds ago)
- Check for storage:
Specific script for an ASM environment:
SET LINESIZE 150
SET PAGESIZE 9999
SET VERIFY off
COLUMN group_name FORMAT a25 HEAD 'DISKGROUP_NAME'
COLUMN state FORMAT a11 HEAD 'STATE'
COLUMN type FORMAT a6 HEAD 'TYPE'
COLUMN total_gb FORMAT 999,999,999 HEAD 'TOTAL SIZE (GB)'
COLUMN free_gb FORMAT 999,999,999 HEAD 'FREE SIZE (GB)'
COLUMN used_gb FORMAT 999,999,999 HEAD 'USED SIZE (GB)'
COLUMN pct_used FORMAT 999.99 HEAD 'PERCENTAGE USED'
SELECT distinct name group_name , state state , type type ,
round(total_mb/1024) total_gb , round(free_mb/1024) free_gb ,
round((total_mb - free_mb) / 1024) used_gb ,
round((1- (free_mb / total_mb))*100, 2) pct_used
from v$asm_diskgroup
--where round((1- (free_mb / total_mb))*100, 2) > 90
ORDER BY name;
- On the primary side, check for the specific database link:
set lines 300
select owner,DB_LINK from dba_db_links order by 1 ;
OWNER DB_LINK
-------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------
PUBLIC dblink_to_other_CDB.SUB10200805271.MBK1.ORACLEVCN.COM
SYS SYS_HUB
- If the database link is not in place, it needs to be created (as you can see, I decided to create a public db link and to put all the connection details in the db link rather than using a tnsnames entry):
CREATE public DATABASE LINK dblink_to_other_CDB
CONNECT TO <C##POWERUSER> IDENTIFIED BY <USER_PASSWORD>
USING '(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.179)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=test.sub10200805271.mbk1.oraclevcn.com)))';
- On the primary side, check the GLOBAL parameters first; if you do not really need global_names, set it to FALSE!
SQL> show parameter global
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
allow_global_dblinks boolean FALSE
global_names boolean FALSE <<<----
global_txn_processes integer 1
- If global_names is TRUE, change it to FALSE:
alter system set global_names = FALSE ;
- After that, check that the db link is working on the primary side and the standby side:
SQL> select sysdate from dual@dblink_to_other_CDB;
SYSDATE
---------
19-NOV-21
- On the primary side, create an intermediate PDB (a clone from the remote container holding the PDB in the different character set). This step will create a pluggable database in our CDB with the character set of the other CDB (if that CDB was WE8ISO8859P15, its PDB there will be WE8ISO8859P15 too). PDBVOID will be a pluggable database with character set WE8ISO8859P15 that is part of the CDB whose character set is UTF8.
create pluggable database PDBVOID from test@dblink_to_other_CDB keystore identified by "<wallet_password>" standbys=none;
Explanation:
test@dblink_to_other_CDB means: test is the name of the PDB to clone from, and dblink_to_other_CDB is the db link we created earlier to the CDB holding the PDB (in a different character set).
standbys=none means: do not protect this PDB in the Data Guard.
- The next step is to create a self-referencing DB link on the primary (here named PRIMSIDE), so basically a db link that points to the database where it is created.
- Note: it will be replicated on the standby, pointing to the primary too, and I DO recommend you test that DB link on both sides (primary and standby) to see that it works well.
CREATE public DATABASE LINK PRIMSIDE
CONNECT TO C##SYSTEM IDENTIFIED BY HAS_BEEN_CHANGED_##12
USING '(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.39)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBUTF8_fra2ps.sub10200805271.mbk1.oraclevcn.com)))';
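Testing the new link can be done the same way as before (a sketch; run it on both the primary and the standby):

```sql
-- Should return the current date via the self-referencing PRIMSIDE link.
SELECT sysdate FROM dual@PRIMSIDE;
```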
- The next step happens on the standby side: you will have to set/update the parameter STANDBY_PDB_SOURCE_FILE_DBLINK:
ALTER SYSTEM SET STANDBY_PDB_SOURCE_FILE_DBLINK='PRIMSIDE';
- Next step, on the primary side: prepare for the next clone (inside the container); that clone of the clone will be there to stay.
SQL> alter pluggable database PDBVOID open instances = all;
SQL> alter pluggable database PDBVOID close instances = all;
SQL> alter pluggable database PDBVOID open read only instances = all;
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDBDEC READ WRITE NO
5 PDB2 READ WRITE NO
6 PDB010 READ WRITE NO
7 PDBP15 READ WRITE NO
8 PDBVOID READ ONLY NO
- Now create a local clone from the transient no-standby PDB with STANDBYS=ALL (which means the Data Guard should protect this New PDB).
create pluggable database PDBP15C from PDBVOID keystore identified by "WElcome##12" STANDBYS=ALL;
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDBDEC READ WRITE NO
5 PDB2 READ WRITE NO
6 PDB010 READ WRITE NO
7 PDBP15 READ WRITE NO
8 PDBVOID READ ONLY NO
9 PDBP15C MOUNTED
- Open the PDB on primary side:
SQL> alter pluggable database PDBP15C open instances = all;
- Time to check the wallet on the primary side:
col wrl_parameter format a60
set lines 300
select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;
- Create a new TDE master encryption key on the primary side.
alter session set container=PDBP15C;
administer key management set key force keystore identified by "WElcome##12" with backup;
- Note: the Data Guard broker will not be happy; it will show an error due to the new master key implemented on the PRIMARY SIDE!
DGMGRL> show configuration
Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7
Protection Mode: MaxPerformance
Members:
DBUTF8_fra2ps - Primary database
DBUTF8_fra1h7 - Physical standby database
Error: ORA-16766: Redo Apply is stopped
Fast-Start Failover: Disabled
Configuration Status:
ERROR (status updated 60 seconds ago)
- On the primary side, start preparations for the scp of the wallet to the standby machine(s).
cd /opt/oracle/dcs/commonstore/wallets/tde/DBUTF8_fra2ps/
[oracle@clutf81 DBUTF8_fra2ps]$ cp ewallet.p12 cwallet.sso /tmp
[oracle@clutf81 DBUTF8_fra2ps]$ cd /tmp
[oracle@clutf81 tmp]$ ls -ltr ewallet.p12 cwallet.sso
-rw------- 1 oracle oinstall 14091 Nov 19 12:33 ewallet.p12
-rw------- 1 oracle oinstall 14136 Nov 19 12:33 cwallet.sso
## Permissions needed for OPC:
chmod o+rx ewallet.p12 cwallet.sso
[opc@clutf81 tmp]$ scp ewallet.p12 cwallet.sso 10.0.1.62:/tmp
ewallet.p12 100% 14KB 10.4MB/s 00:00
cwallet.sso
- On the standby side, start your preparations by saving the wallet, and copy the wallet from the primary side to the wallet location on the standby side.
cd /opt/oracle/dcs/commonstore/wallets/tde/*/
[oracle@mysbfra1 DBUTF8_fra1h7]$ cp ewallet.p12 ewallet.p12.save
[oracle@mysbfra1 DBUTF8_fra1h7]$ cp cwallet.sso cwallet.sso.save
[oracle@mysbfra1 DBUTF8_fra1h7]$ cp /tmp/ewallet.p12 .
[oracle@mysbfra1 DBUTF8_fra1h7]$ cp /tmp/cwallet.sso .
- On the standby side, close the wallet (it will reopen with the query in the next bullet):
SQL> alter session set container = CDB$ROOT;
SQL> administer key management set keystore close container=ALL;
- On the standby side, check (this will also open the keystore again):
col wrl_parameter format a60
set lines 300
select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;
- Now it is time to check status in broker again.
DGMGRL> show configuration
Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7
Protection Mode: MaxPerformance
Members:
DBUTF8_fra2ps - Primary database
DBUTF8_fra1h7 - Physical standby database
Error: ORA-16766: Redo Apply is stopped
Fast-Start Failover: Disabled
Configuration Status:
ERROR (status updated 60 seconds ago)
DGMGRL> show configuration
Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7
Protection Mode: MaxPerformance
Members:
DBUTF8_fra2ps - Primary database
DBUTF8_fra1h7 - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: Disabled
Configuration Status:
ERROR (status updated 51 seconds ago)
- In the broker, enable apply again and give it some time! (In my case several minutes.)
edit database DBUTF8_fra1h7 set state='apply-on';
## check broker:
DGMGRL> show configuration
DGMGRL> show database DBUTF8_fra1h7
Database - DBUTF8_fra1h7
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
Average Apply Rate: 3.88 MByte/s
Real Time Query: ON
Instance(s):
DBUTF81
DBUTF82 (apply instance)
Database Status:
SUCCESS
DGMGRL> show configuration
Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7
Protection Mode: MaxPerformance
Members:
DBUTF8_fra2ps - Primary database
DBUTF8_fra1h7 - Physical standby database
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 66 seconds ago)
Well, that is a relief. The Data Guard configuration and broker are happy again (which also means: so is this DBA).
- This can also be checked in sqlplus on the standby side:
SQL> set lines 300
col value format a25
select * from v$dataguard_stats;
SOURCE_DBID SOURCE_DB_UNIQUE_NAME NAME VALUE UNIT TIME_COMPUTED DATUM_TIME CON_ID
----------- -------------------------------- -------------------------------------------------- ------------------------- ------------------------------ ------------------------------ ------------------------------ ----------
0 transport lag +00 00:00:00 day(2) to second(0) interval 11/19/2021 12:50:42 11/19/2021 12:50:40 0
0 apply lag +00 00:00:00 day(2) to second(0) interval 11/19/2021 12:50:42 11/19/2021 12:50:40 0
0 apply finish time +00 00:00:00.000 day(2) to second(3) interval 11/19/2021 12:50:42 0
0 estimated startup time 24 second 11/19/2021 12:50:42 0
- And good things always come in pairs (another check on the standby side):
SQL> select status, blocks, delay_mins, known_agents from gv$managed_standby where process like 'MRP%';
STATUS BLOCKS DELAY_MINS KNOWN_AGENTS
------------ ---------- ---------- ------------
APPLYING_LOG 2097152 0 3
- Since we are in an Active Data Guard, the almost-last step in this scenario is to bring the newborn PDB to read only mode.
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
## Last Checks
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ ONLY NO
4 PDBDEC READ ONLY NO
5 PDB2 READ ONLY NO
6 PDB010 READ ONLY NO
7 PDBP15 READ ONLY NO
8 PDBVOID MOUNTED
9 PDBP15C MOUNTED
SQL> alter pluggable database PDBP15C open read only instances = all;
Pluggable database altered.
## Check
SQL> alter session set container = PDBP15C;
Session altered.
SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
VALUE
-------------------------
WE8ISO8859P15
################################ The End ##############################
PS: Of course it is completely up to you whether you drop PDBVOID now, or keep it as a kind of template for future scenarios.
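Should you decide to drop it, the cleanup could look like this (a sketch; the PDB must be closed on all instances before it can be dropped):

```sql
-- Close the transient clone on all instances, then drop it with its datafiles.
ALTER PLUGGABLE DATABASE PDBVOID CLOSE IMMEDIATE INSTANCES=ALL;
DROP PLUGGABLE DATABASE PDBVOID INCLUDING DATAFILES;
```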
Happy reading, and till we meet again. And of course: always test, test, test this scenario in a TEST environment before you run it on your favorite production environment.
Mathijs