Importing Data via Network


For two projects there has been an assignment to upgrade to Oracle 11.2.0.4. One environment was already on 11.2.0.3 with the same cluster stack below it, and one environment will come from Solaris. For both projects a cluster stack plus database version has been set up on one of the newer shared clusters on Linux. Both environments will be migrated using the export - import method, since they are relatively small (approximately 400-500 GB), and of course since one of them is being migrated cross-platform (from Solaris to Linux) you do not have that much choice.

In another project I had good experience with NFS filesystems between source and target servers, and at first I was aiming to use them again during these migrations. However, since not every project is able to make its timelines (I would have to wait at least 2 more weeks to get the NFS mounts), some other creativity was required. In this specific case I will work with Data Pump via the network.

When looking into this, I came across two scenarios. The first scenario was covered by a fellow blogger and is interesting since it offers the option to export directly into an ASM disk group. In that scenario an extra step is needed: an impdp with a directory pointing to the same ASM diskgroup/subdirectory. The second scenario, which is explained in more detail here, goes even one step beyond. The scenario is simple: use impdp via a dblink directly into the database (no need to park a dumpfile somewhere on a filesystem or in a diskgroup first and then run the import). Nope, just another impdp and you are there!



1. Setting up the tnsnames entry on the target (receiving) side.


In order to make this scenario work, you will have to make sure that no firewall is blocking access to the source database you will pull the data from when you create the tnsnames.ora entry on the target side.

In my case:


I always try a: telnet <ip> <port>

telnet  666.233.103.203  33012


If you see something like trying …. and nothing else happens, well, this was not your lucky day and a firewall is blocking you from making this a happy scenario. If you see something like this, lucky you:

Escape character is '^]'.

The recommendation when you get stuck at trying … is to make sure that the firewall is opened. In my case my host was a VIP address for a RAC database, and port 33012 had been assigned to the local listener of that database.


## Let's set up the tnsnames entry. NOTE: the firewall needs to be opened before proceeding with tnsping etc.:


One interesting part is that the service_name I wanted to use in the tnsnames was not present as a service in the database, so I had to extend the present service (which was not the default service, since it was without domain).


## On the source side, in the database I want to take the data from, I added a service:


alter system set service_names = 'MYDB','' scope = both ;


SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      MYDB,


So now we have two services in place which we can use in the tnsnames.ora.
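Since the original entry did not survive in this post, here is a minimal sketch of what it could look like, reusing the host and port from the telnet test above (all values are placeholders for your own environment):

OLD_MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 666.233.103.203)(PORT = 33012))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYDB)
    )
  )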


2.     Time to set up a public dblink


## Reading articles by fellow bloggers, they recommended creating a PUBLIC db link (this seems mandatory). Since in my case I would do the import as SYSTEM, a normal db link would be okay too. But for the scenario's sake a public database link is fine.


drop public DATABASE LINK old_MYDB;

## worked with this one
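The create statement itself did not survive in this post, but a minimal sketch, assuming the SYSTEM account on the source and the tnsnames alias from step 1 (replace the password placeholder), would be:

create public database link old_MYDB
connect to system identified by <password>
using 'OLD_MYDB';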


3. Seeing is believing: test the db link.


## performed select

select 'x' from dual@old_MYDB;

4.     Next stop, creating a directory for the logfile of the impdp.


Yes, that is correct: only a directory for the log file, not for the dump itself :) . That is why I liked this scenario so much.


## created directory for the logfile

create directory acinu_imp as '/opt/oracle/MYDB/admin/create' ;

grant read,write on directory acinu_imp to system;




5.     Time to perform the import.


Over the years I have used expdp and impdp a lot, but most of the time as an almost 1:1 clone of exp/imp. Since Google is your friend when looking for scenarios, it was great to explore the powerful exclude= parameter. As you will see, I am creating an import of the full database but excluding the schemas I don't care about.


Since I was, hmm, energy efficient, I wanted to type the full statement at the Linux prompt, but was punished for having " " in my command. Had I used a parfile, things would have been easier :) . But since I wanted to stick to the scenario, I found that at OS level a \ is mandatory to escape each ", like below:


## performed the import with success with the command below


impdp system full=yes "EXCLUDE=SCHEMA:\"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')\"" network_link=old_MYDB directory=acinu_imp logfile=AcinupImport.log parallel=2 job_name=MYDB_DMP_FULL
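As mentioned above, a parfile would have avoided all the escaping. For reference, a sketch of the equivalent parfile (not taken from the original post; the file name imp_full.par is hypothetical):

full=yes
EXCLUDE=SCHEMA:"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')"
network_link=old_MYDB
directory=acinu_imp
logfile=AcinupImport.log
parallel=2
job_name=MYDB_DMP_FULL

## and then simply:
impdp system parfile=imp_full.par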



## Note

At first, all my attempts got the error below:


Connected to: Oracle Database 11g Enterprise Edition Release – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39200: Link name "OLD_ACINUP" is invalid.
ORA-02019: connection description for remote database not found


This made me check the services in the database and the entry in the tnsnames, and test it all again. After that, as the A-Team's Hannibal would say, "I love it when a plan comes together": it worked!


Happy reading ,


And as always: don't believe it just because it is printed.


Mathijs Bruggink



(De)install Apex in 11gR2 challenge


Recently two newly set up databases were checked by the QA process, and they did not pass that check due to the fact that Apex was installed in the SYSAUX tablespace. Of course it is important to follow the standards, so some extra work would be needed to pass the checks. House rules state that for an Apex installation a dedicated tablespace is needed. Time to play and make things right, with two scenarios. Scenario 1 will be a faulty installation with a deinstall and an install after. Scenario 2 will be a fresh install (during which I noticed that Oracle Text is a mandatory component that needs to be in place before installing Apex). Below are the notes I have used to investigate and fix the challenge. As always, Google is your friend when exploring such solutions.

Being curious by nature, of course I had to explore two approaches. Primarily, installing a new component into an already existing database can be done with DBCA. As another option it is still a good challenge to see if you can do it yourself using the command line. Hmm, does it still show that I grew up with Oracle 7.3.4 as a starter kit?


First let's explore whether there is a current installation of Apex in place.

col  comp_name format a60
set lines 144
select COMP_NAME,status from dba_registry order by 1;





As part of the activities I looked on the Web and found the blog below. A big thank you to the author Matthias Hoys for documenting:

De-installing Apex in 11gr2

So in my case I would set the environment to Oracle 11gR2 and move to the apex directory before starting sqlplus.

cd /opt/oracle/product/11203_ee_64/db/apex/
sqlplus /nolog
connect / as sysdba
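The call to the removal script itself did not survive in this post; the standard Apex removal script that ships in that apex directory is apxremov.sql, so the step would be:

@apxremov.sql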
Once the script has completed, check again (querying dba_registry for the Apex component):

no rows selected

Now let's do it properly and create a dedicated tablespace for the Apex installation first:

create tablespace APEX datafile '+CRMMST_DATA01' size 4096M autoextend off;

Note: if you settle for a uniform size tablespace, make sure your extent size is big enough. I had the issue that the install fails if the extent size is too small (the Apex install fails with ORA-60019). Error detail:

create table wwv_mig_forms (
ERROR at line 1:
ORA-60019: Creating initial extent of size 14 in tablespace of extent size 8

ORA-60019: Creating initial extent of size string in tablespace of extent size string
Cause: Creation of SECUREFILE segment failed due to small tablespace extent size.
Action: Create tablespace with larger extent size and reissue command.
## my tablespace was uniform size 64K when I created the dedicated ts. Recreated the ts with uniform size 1M.
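In other words, something along these lines (a sketch, reusing the diskgroup from above):

drop tablespace APEX including contents and datafiles;
create tablespace APEX datafile '+CRMMST_DATA01' size 4096M autoextend off
extent management local uniform size 1M;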

Install Apex manually  (with Oracle Text in place):

## make sure you are in this directory:
cd /opt/oracle/product/11203_ee_64/db/apex 
@/opt/oracle/product/11203_ee_64/db/apex/apexins apex apex temp /i/

Since more than one script will run, I made sure to head to the apex directory in my Oracle installation before running the starting script in sqlplus. My newly created dedicated tablespace apex is part of the parameters needed (@apexins.sql tablespace_apex tablespace_files tablespace_temp images).


Install Apex manually (with Oracle Text  not in place):

In case Oracle Text is not installed in the database, the Apex install will complain about it. In that case you will have to install Oracle Text first.

Oracle Text:

cd /opt/oracle/product/11203_ee_64/db/ctx/admin
Start your sqlplus session and run:
spool /tmp/catctx.log

Among the parameters: the CTXSYS password, the tablespace to create the objects in, the temporary tablespace, and the account status after install.
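So the call would look roughly like this (a sketch based on the documented catctx.sql parameters; the password, tablespaces and lock status shown here are assumptions):

@catctx.sql CTXSYS SYSAUX TEMP NOLOCK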

After that I installed Apex:

## make sure you are in this directory:
cd /opt/oracle/product/11203_ee_64/db/apex 
@/opt/oracle/product/11203_ee_64/db/apex/apexins apex apex temp /i/

## check status :



## Another issue, and blushing when in the end it turned out I did NOT have the proper tablespace in place. I ran the script with the parameters, but the tablespace had not been created:


APPLICATION 4411 - APEX - System Messages Set Credentials...
begin
*
ERROR at line 1:
ORA-04063: package body "APEX_030200.WWV_FLOW_API" has errors
ORA-06508: PL/SQL: could not find program unit being called: "APEX_030200.WWV_FLOW_API"
ORA-06512: at line 4
## I also noticed this package remained invalid even after the removal of part of Apex.
Solution: drop package htmldb_system; drop public synonym htmldb_system;

Mandatory aftercare (last man standing): check for invalid objects and recompile them.
select owner, object_name, object_type from dba_objects where status <> 'VALID' order by 1,2;
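To recompile whatever shows up, the standard utlrp.sql script that ships with the database can be used:

@?/rdbms/admin/utlrp.sql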


and check again.


Happy reading,




Btw. Loved this post by one of the great bloggers, in general and on this Apex topic in particular:

Oracle-Base oracle-application-express-apex-4-2-installation

Transport Tablespace as Migration Method with RMAN


For one of the projects the question came in to investigate and set up a Real Application Cluster database, with the extra challenge that the migration had to be done cross-platform, from Oracle on Solaris to Linux. From the application provider came the suggestion to investigate a backup - restore scenario with an upgrade on the new (Linux) server. Due to the fact that the source environment was 10.2.0.3 on Solaris, and the fact that we were heading towards a RAC cluster environment on Linux, that suggestion was the first to be sent to the dustbin.

Normal export / import was the second scenario that was explored. Of course this is a valid scenario, but given the fact that the database was more than 1.x TB, it is not exactly the most favorite way to bring the data across. Still, with scripting, multiple parfiles, and/or moving the partitioned data across in waves, it would be a fair plan B.

From my reading, though, I had put my mind on the use of transportable tablespaces as the way forward for this challenging question.


As preparation for the job I requested an NFS filesystem mounted between the source server (MySunServer), holding the 10g database, and the target server (MyLinuxcluster). This NFS filesystem would hold the datapumps to be created, the scripts, and the parfiles / config files, as suggested in MOS Note 1389592.1. The NFS filesystem was read-writable from both servers. The Perl scripts that come with the note support the transport of the tablespaces, but also help with the conversion from big endian to little endian, and as a bonus in my case will do the copy into ASM.

Due to the layout of the database in the source environment, RMAN was chosen as the best way forward with the scenario.

As preparation an 11.2.0.4 RAC database was set up on the target cluster. This database only holds the normal tablespaces and a small temporary tablespace for the users. (In a TTS solution the names of the data tablespaces that come across may not already exist in the new environment.) All data / application users have been pre-created in the new environment with a new default user tablespace.

Details & Comments

Configuration file for the Perl scripts:

This is a file that is part of the zip file from the MOS note. It needs to be set up to match your specific needs. I will only show the settings I have used, with their comments:
## Reduce Transportable Tablespace Downtime using Incremental Backups
## (Doc ID 1389592.1)

## Properties file for

## See documentation below and My Oracle Support Note 1389592.1 for details.
## Tablespaces to transport

## Specify tablespace names in CAPITAL letters.

## Source database platform ID

## platformid

## Source database platform id, obtained from V$DATABASE.PLATFORM_ID


## srclink

## Database link in the destination database that refers to the source

## database. Datafiles will be transferred over this database link using
## dbms_file_transfer.

## Location where datafile copies are created during the "-p prepare" step.

## This location must have sufficient free space to hold copies of all
## datafiles being transported.


## backupformat

## Location where incremental backups are created.


## Destination system file locations

## stageondest

## Location where datafile copies are placed by the user when they are

## transferred manually from the source system. This location must have
## sufficient free space to hold copies of all datafiles being transported.


## storageondest

## This parameter is used only when Prepare phase method is RMAN backup.

## Location where the converted datafile copies will be written during the

## "-c conversion of datafiles" step. This is the final location of the
## datafiles where they will be used by the destination database.
## backupondest

## Location where converted incremental backups on the destination system

## will be written during the "-r roll forward datafiles" step.

## NOTE: If this is set to an ASM location then define properties

##      asm_home and asm_sid below. If this is set to a file system
##       location, then comment out asm_home and asm_sid below

## asm_home, asm_sid

## Grid home and SID for the ASM instance that runs on the destination


## Parallel parameters


## rollparallel

## Defines the level of parallelism for the -r roll forward operation.

## If undefined, default value is 0 (serial roll forward).

## getfileparallel

## Defines the level of parallelism for the -G operation


## desttmpdir

## This should be defined to same directory as TMPDIR for getting the

## temporary files. The incremental backups will be copied to directory pointed
## by stageondest parameter.
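Pulling the settings above together, a sketch of a populated properties file could look as follows (property names as documented in MOS note 1389592.1; all values here are hypothetical placeholders for this environment, and platform id 2 would be Solaris[tm] OE (64-bit) according to V$TRANSPORTABLE_PLATFORM):

tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,MYDB_INDEX,MYTS,USERS
platformid=2
srclink=TTSLINK.PROD.NL
dfcopydir=/mycomp_mig_db_2_linux/MYDBP/copies
backupformat=/mycomp_mig_db_2_linux/MYDBP/bkups
stageondest=/mycomp_mig_db_2_linux/MYDBP/copies
storageondest=+MYDB_DATA01
backupondest=+MYDB_FRA01
asm_home=/opt/crs/product/11203/crs
asm_sid=+ASM1
rollparallel=2
getfileparallel=4
desttmpdir=/mycomp_mig_db_2_linux/MYDBP/scripts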


Below, in table format, you will see the steps performed, with comments.

The steps qualify as:

  • I for Initial steps – activities
  • P for Preparation
  • R for Roll Forward activities
  • T for Transport activities

Server column shows where the action needs to be done.

Step / Server / What needs to be done
I1.3 Source Identify the tablespace(s) in the source database that will be transported (the application owner needs to support this with schema owner information):


I1.5 Source + Target In my case the project offered an NFS filesystem which I could use: /mycomp_mig_db_2_linux
I1.6 Source Together with the MOS note came a zip file: unzip it.
I1.7 Source Tailor the extracted properties file on the source system to match your environment.
I1.8 Target As the oracle software owner, copy all xttconvert scripts and the modified properties file to the destination system. This was not needed since we used the NFS filesystem.
P1.9 Source + Target On both environments set up this:

export TMPDIR=/mycomp_mig_db_2_linux/MYDBP/scripts

P2B.1 Source perl -p
Note: do not use $ORACLE_HOME/perl/bin/perl; this did not work.
P2B.2 Source Copy files to destination. N/A since we use NFS
P2B3 Target On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step as follows:
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt N/A since we use NFS.
perl -c
R3.1 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:
perl -i
R3.3 Target [oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
Since we are using the NFS shared filesystem, there is no need to copy with scp between source and target.
perl -r
R3.4 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the determine new FROM_SCN step as follows:
perl -s
R3.5 Source 1. If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
2. If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.
T4.0 Source As found in note: Alter Tablespace Read Only Hanging When There Are Active TX In Any Tablespace (Doc ID 554832.1). A restart of the database is required to have no active transactions; alternatively, do this during off hours. Actually, during a first test with one dedicated tablespace holding only one object, it took more than 7 hrs. Oracle seems to look and wait for ALL active transactions, not only the ones that would impact the objects in the test tablespace I worked with.
T4.1 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.
alter tablespace MYDB_DATA read only;
alter tablespace MYDB_EUC_DATA read only;
alter tablespace MYDB_EUC_INDEX read only;
alter tablespace MYDB_INDEX read only;
alter tablespace MYTS read only;
alter tablespace USERS read only;
T4.2 Source Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles.
perl -i
T4.2 Target [oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
perl -r
T4.3 Target On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:
perl -e
The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt. It holds the list of all the tablespaces you have configured and all their transport_datafiles in detail.
Example of that generated file: cat xttplugin.txt
impdp directory=MYDB_XTT_DIR logfile=tts_imp.log \
network_link=TTSLINK.PROD.NL transport_full_check=no \
transport_tablespaces=MYCOMPTTS,A,B,C \
Note: in our example, once edited, we chmodded xttplugin.txt to 744 and ran it as a script.
T4.3 Source After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.
T4.4 Target At this step, the transported data is READ ONLY in the destination database.  Perform application specific validation to verify the transported data.
Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:
In rman:
validate tablespace MYDB_DATA, MYDB_EUC_DATA, MYDB_EUC_INDEX, MYDB_INDEX, MYTS, USERS check logical;
T4.5 Target alter tablespace MYDB_DATA read write;
alter tablespace MYDB_EUC_DATA read write;
alter tablespace MYDB_EUC_INDEX read write;
alter tablespace MYDB_INDEX read write;
alter tablespace MYTS read write;
alter tablespace USERS read write;
T5 Source + Target Cleanup of NFS filesystem.
Put the source DB in restricted mode as a fallback for a couple of days after the go-live, then put it to tape and decommission it.

Adding a VIP Address and Goldengate to the Grid Infrastructure


Earlier this week preparations were started to add the Goldengate software to the Grid Infrastructure of the Billing environment on production. As part of that scenario I also had to add a VIP address that is to be used by the Goldengate software as part of high(er) availability. In my concept the Goldengate daemons are running on one node only by default. During a node crash (of course not wanted nor desired), or as a way to load balance work on the cluster, the VIP address and the Goldengate software need to stop and restart on the other node. Below you will find a working example as part of the preparations I have performed. Some comments have been added to the specific steps.

Commands will be typed in italic in this blog.


## The first step will be adding the VIP address to the Grid Infra (GI). Note: the IP address and the description have been defined in the DNS. Once I got feedback that the address was added, I was able to perform an nslookup. Of course it was not possible yet to ping the IP, because we first have to add it to the cluster, as is done here.

## As root:
/opt/crs/product/11203/crs/bin/appvipcfg create -network=1 -ip= -user=root
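For reference, a complete invocation looks roughly like this (the IP address and VIP name here are hypothetical; in the original command the real address was passed with -ip):

/opt/crs/product/11203/crs/bin/appvipcfg create -network=1 -ip=10.1.1.100 -vipname=mygg-vip -user=root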

## Once that is in place, grant permissions to the Oracle user to work with the VIP address:
## (As root, allow the Oracle Grid Infrastructure software owner (e.g. oracle) to run the script to start the VIP.)

/opt/crs/product/11203/crs/bin/crsctl setperm resource -u user:oracle:r-x

## Now it is time to start  the Vip:
## As Oracle, start the VIP:
/opt/crs/product/11203/crs/bin/crsctl start resource

##Check our activities:
## As Oracle:

/opt/crs/product/11203/crs/bin/crsctl status resource -p

## In my setup Goldengate is defined to be able to run on either node one (usapb1hr) or node 2 (usapb2hr) in my four node cluster. And since I want to make sure it only runs on those two servers, I set the placement to restricted.
## As root:

/opt/crs/product/11203/crs/bin/crsctl modify resource  -attr "HOSTING_MEMBERS=usapb1hr usapb2hr"
/opt/crs/product/11203/crs/bin/crsctl modify resource  -attr "PLACEMENT=restricted"

## As always the taste of the creme brulee is in the details, so let's check:
## As Oracle:
/opt/crs/product/11203/crs/bin/crsctl status resource -p

## Great, that worked. Now let's relocate the VIP to the other node as a test:
## As Oracle:

/opt/crs/product/11203/crs/bin/crsctl relocate resource

## Completed the action with a smile, because it worked as planned.

## As always the taste of the creme brulee is in the details, so let's check:
## As Oracle:
/opt/crs/product/11203/crs/bin/crsctl status resource -p

## As part of making sure that the setup from scratch was the same on all machines (we had the same solution in the pre-prod environment), let us first remove the existing resource for Goldengate and then add it to the GI again.

/opt/crs/product/11203/crs/bin/crsctl delete resource myGoldengate

## As Oracle (the white paper was very specific about that; I performed it as root the first time, ending up with the wrong primary group in the ACL, which I checked afterwards). So stick to the plan and do this as ORACLE. Add the resource to the GI, put it in a relationship to the VIP address that was created in the GI earlier, AND inform the cluster about the action script that is to be used during a relocate, server boot, or node crash. (This script is in my case a shell script holding conditions like stop, start, status etc. and the corresponding Goldengate commands that are to be used by the GI.)

/opt/crs/product/11203/crs/bin/crsctl add resource myGoldengate -type cluster_resource -attr "ACTION_SCRIPT=/opt/crs/product/11203/crs/crs/public/, CHECK_INTERVAL=30, START_DEPENDENCIES='hard( pullup(', STOP_DEPENDENCIES='hard('"
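To give an idea of such an action script, below is a minimal sketch. The Goldengate home, the manager process check and the exact ggsci commands are assumptions, not the script from this environment:

#!/bin/sh
# Minimal GI action script sketch for Goldengate.
GGS_HOME=/opt/goldengate
case "$1" in
start)
    # Start the manager; the manager can autostart extracts/replicats.
    echo "start mgr" | $GGS_HOME/ggsci
    exit 0
    ;;
stop|clean)
    # Stop all extract/replicat processes, then the manager.
    echo "stop er *" | $GGS_HOME/ggsci
    echo "stop mgr!" | $GGS_HOME/ggsci
    exit 0
    ;;
check)
    # GI calls this every CHECK_INTERVAL seconds; exit 0 means healthy.
    pgrep -f "$GGS_HOME/dirprm/mgr.prm" > /dev/null && exit 0
    exit 1
    ;;
esac
exit 0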

## Altering hosting members and placement again (by default only one node is part of hosting_members, and placement=balanced by default).

## As root:
/opt/crs/product/11203/crs/bin/crsctl modify resource  myGoldengate -attr "HOSTING_MEMBERS=usapb1hr usapb2hr"

/opt/crs/product/11203/crs/bin/crsctl modify resource  myGoldengate -attr "PLACEMENT=restricted"

## so in the end you should check it with this:

/opt/crs/product/11203/crs/crs/public [CRS]# crsctl status resource myGoldengate -p

## Time to set permissions on myGoldengate (altering ownership to the myGoldengate user, which is my OS user for this).
### As root:
/opt/crs/product/11203/crs/bin/crsctl setperm resource myGoldengate -o myGoldengate

## Needed and sometimes forgotten: make sure that the oracle user (who is also the owner of the Grid Infra software on these boxes) is allowed to manage the resource.
### As root, allow oracle to run the script to start the goldengate_app application.
/opt/crs/product/11203/crs/bin/crsctl setperm resource myGoldengate -u user:oracle:r-x



All preparations are now in place. During an already scheduled maintenance window  following steps will be performed to bring this scenario to a HA solution for Goldengate.

  • Stop the Goldengate software daemons (at the moment stopped and started by hand).
  • Start the Goldengate resource via the Grid Infra (giving it control of status and activities).
  • Perform checks that Goldengate is starting its activities.
  • Perform a relocate of the Goldengate resource via the Grid Infra to the other node.
  • Perform checks that Goldengate is starting its activities.

As an old quote states: success just loves preparation. With these preparations in place I feel confident about the maintenance window to put this solution live.


As always, happy reading and till next time,



When PRCD-1027 and PRCD-1229 are spoiling your rainy day


More than one year ago I had set up an Oracle Restart environment with Grid Infra, ASM and databases all on one version, since that was the requirement from the vendor at first. Once the server had been handed over to production, I got the request that it should also host EMC based clones, and those clones were on a newer version. That meant I had to upgrade both the Grid Infrastructure and the database software, and of course the databases as well.

So I geared up, did an upgrade of the GI and the RDBMS software, and of course of the local databases in place. After that the EMC clones were added and everything looked fine.

Until ……….

Error Messages after Server reboot:

Well, until the server got rebooted. After that server reboot, a first sign that things were wrong was that the databases did not start via the Grid Infrastructure, which was not expected!

So there I was again, ready to solve another puzzle, and of course people were waiting for the DBs to come online so they could work.

## First clue:

I checked the resource (the database) in the cluster with: crsctl status resource …. -p

Much to my surprise that showed the wrong Oracle Home (it was the initial Oracle Home from before the upgrade). But I was so sure that I had upgraded the database. What did I miss? Even more strange was that the cluster agent kept altering my oratab entry for the specific database back to the old Oracle Home (and it would almost stick its tongue out at me, noting #line has been added by agent).

## Second clue

When I altered the oratab to show the correct Oracle Home, I could start the database via sqlplus, which was indeed my second clue.

After a big face-palm it became clear to me that the clusterware did not hold the correct information about that Oracle Home.

## Will srvctl modify do the job?

srvctl modify database -d mydb -o /opt/oracle/product/11203_ee_64/db


PRCD-1027 : Failed to retrieve database mydb

PRCD-1229 : An attempt to access configuration of database migoms was rejected because its version differs from the program version. Instead run the program from /opt/oracle/product/11202_ee_64/db.

Well, that was not expected. Especially since the other clue was that the db could be started as an 11.2.0.3 db when the Oracle env was put properly in the ORATAB.


First I tried :

srvctl modify database -d mydb -o /opt/oracle/product/11203_ee_64/db

but that is wrong as we already saw in this post.

Hmm, then I thought of an expression in German: once you start doing it the right way, things will start to work for you.

Plain and simple, this is what I had to do to make things right again:

srvctl upgrade database -d  mydb -o /opt/oracle/product/11203_ee_64/db
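After the upgrade command it is worth verifying what the clusterware now has registered for the database, for instance with:

srvctl config database -d mydb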

After that I started mydb via the cluster, and she is happy now.

## Bottom line (aka lesson learned).

If you upgrade your databases in an Oracle Restart / RAC cluster environment, make it part of your upgrade plan to also upgrade the information about that specific database in the cluster layer (srvctl upgrade database).

As always,

Happy Reading and till we meet again.


Playing with Cluster commands for Single Instances in the Grid Infrastructure


This weekend (20 - 22 February 2015) I am involved in a big data migration of approximately 900K customers and a load of data into environments that I have set up as single instances under control of the Grid Infrastructure on Red Hat Linux. As always during such big operations, there is a need for a fall-back plan for when all breaks. Since I have the luxury that we can use EMC clone technology, a fall-back scenario has been set up where during the week EMC storage clones have been set up for the databases in scope. These clones are permanently syncing with the source databases on the machines at the moment.

This Friday the application will be stopped. After feedback from the application team, I will have to stop the databases via the cluster (GI). As always as prep, I started to make notes, which I will share / elaborate here, to do stop - start - checks of those databases.


All my databases have been registered in the GI (Grid Infra) as application resources, since I was not allowed to use RAC or RAC One Node during the setup of these environments. Yet I had to offer higher availability; that is why I implemented a poor man's RAC, where a database becomes a resource in the cluster that is capable of failing over to another (specific and specified) node in the cluster.

In the end, when I had my setup in place, the information in the cluster looked pretty much like this:

### status in detail

/opt/crs/product/11203/crs/bin/crsctl status resource app.mydb1.db -p
DESCRIPTION=Resource mydb1 DB
HOSTING_MEMBERS=mysrvr05hr mysrvr04hr
START_DEPENDENCIES=hard(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg)

As you can see, I have set up the (start_ and stop_) dependencies on the disk groups, and I have set placement to restricted, so the db can only start on a restricted number of nodes (which I defined in hosting_members).

This evening's action plan will involve:

### Checking my resources for status and where they are running at the moment, so I know where they are when I start my actions. PS: -C 3 is a nice grep option at Linux level to show some extra context lines around the resource.

/opt/crs/product/11203/crs/bin/crsctl status resource -t|grep app -C 3


      1        ONLINE  ONLINE       mysrvr05hr                                    


      1        ONLINE  ONLINE       mysrvr04hr                                    


      1        ONLINE  ONLINE       mysrvr02hr                                     

### Checking status on a high level.

/opt/crs/product/11203/crs/bin/crsctl status resource app.mydb1.db

/opt/crs/product/11203/crs/bin/crsctl status resource app.mydb2.db

/opt/crs/product/11203/crs/bin/crsctl status resource app.mydb3.db

In order to enable my colleagues to do the EMC split properly, the application will be stopped. Once I have my go, I will stop the databases using GI commands:

### stopping resources:

/opt/crs/product/11203/crs/bin/crsctl stop resource app.mydb1.db

/opt/crs/product/11203/crs/bin/crsctl stop resource app.mydb2.db

/opt/crs/product/11203/crs/bin/crsctl stop resource app.mydb3.db

Once my storage colleague has finished the EMC split (this should take only minutes because the databases have been in sync mode with production all week), I will manually put some databases in noarchivelog mode to speed up the Data Pump loads. After shutting down the databases again, I will start them using the GI commands:
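The manual noarchivelog switch mentioned here is the classic sequence, per database (standard commands, not quoted from the original post):

shutdown immediate;
startup mount;
alter database noarchivelog;
alter database open;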

### starting resources:

/opt/crs/product/11203/crs/bin/crsctl start resource app.mydb1.db

/opt/crs/product/11203/crs/bin/crsctl start resource app.mydb2.db

/opt/crs/product/11203/crs/bin/crsctl start resource app.mydb3.db

## Relocate if needed
## =========================================
## server mysrvr05hr:
crsctl relocate resource app.mydb1.db

## server mysrvr04hr:
crsctl relocate resource app.mydb2.db

## Alternatively:
## server mysrvr05hr:
crsctl relocate resource app.mydb1.db -n mysrvr04hr

## server mysrvr04hr:
crsctl relocate resource app.mydb2.db -n mysrvr05hr

On Saturday I will stop the databases that are in noarchivelog mode again via the cluster and put them back in archivelog mode. After that I have scheduled a level 0 backup with RMAN.

Happy reading,


11gR2 Database Services and Instance Shutdown

This needs a check with the app supplier. Great note.


I'm a big fan of accessing the database via services, and there are some nice new features for database services in 11gR2. However, I got a nasty shock when performing some patch maintenance on a RAC system that had applications using services. Essentially, I did not realise what happens to a service when you shut down an instance for maintenance. Let me demonstrate:

This has the following configuration:

So the service is now online on the node where DBA1 (preferred node in definition) runs:

Few examples I've seen show what happens to a service when you perform shutdown abort. First let's see what our tns connection looks like:

Which gives the following in V$SESSION when you connect using this definition:

Let's abort the node:

Oh, that's not good. Look what's happened to my application:

Let's bring everything back and try a different kind of shutdown, this time using the following:
