For two projects there has been an assignment to upgrade to Oracle 11.2.0.4. One environment was already on 11.2.0.3 with the same cluster stack below it, and one environment will come from 10.2.0.4 on Solaris. For both projects an 11.2.0.4 cluster stack plus database version has been set up on one of the newer shared Linux clusters. Both environments will be migrated using the export/import method, since they are relatively small (approx. 400-500 GB), and of course since one of them is being migrated cross-platform (from Solaris to Linux) you do not have that much choice anyway.
In another project I had good experience with NFS filesystems between the source and target servers, and at first I was aiming to use them again during these migrations. However, since not every project is able to meet its timelines (it would take at least two more weeks to get the NFS mounts), other creativity was required. In this specific case I will work with Data Pump over the network.
When looking into this I came across two scenarios. The first, covered by a fellow blogger, is interesting since it offers the option to export directly into an ASM disk group. In that scenario an extra step is still needed: running impdp with a directory pointing to that same ASM disk group/subdirectory. The second scenario, which is explained in more detail here, goes one step beyond. It is simple: run impdp via a database link directly into the target database (there is not even a need to park a dump file somewhere on a filesystem or in a disk group first and then run the import). Nope, just another impdp and you are there!
1. Setting up tnsnames entry on the target ( receiving ) side.
In order to make this scenario work, you will have to make sure that no firewall is blocking access to the source database you will pull the data from before you create the tnsnames.ora entry on the target side.
In my case:
I always try a: telnet <ip> <port>
telnet <source-vip> 33012
If you see something like "Trying ..." and nothing else happens, well, this was not your lucky day and a firewall is blocking you from making this a happy scenario. If you see something like this, lucky you:
Escape character is '^]'.
The recommendation when you get stuck at "Trying ..." is to make sure that the firewall is opened. In my case the host was a VIP address for a RAC database, and port 33012 had been assigned to the local listener of that database.
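The telnet check above can also be scripted, which is handy when you have to recheck after each firewall change. Below is a minimal sketch using bash's built-in /dev/tcp pseudo-device instead of telnet; the host and port in the example call are placeholders, so substitute your own source VIP and listener port:

```shell
#!/usr/bin/env bash
# Probe whether a TCP port is reachable, printing "open" or "blocked".
# Uses bash's /dev/tcp redirection; a 5-second timeout guards against
# firewalls that silently drop packets (the endless "Trying ..." case).
check_port() {
  local host=$1 port=$2
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "blocked"
  fi
}

# Example call with placeholder values; replace with your source VIP and port.
check_port 192.0.2.10 33012
```

If this prints "blocked", raise the firewall request before spending time on tnsnames entries.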
## Let's set up the tnsnames entry. NOTE: the firewall needs to be opened before proceeding with tnsping etc.
One interesting part is that the service name I wanted to use in the tnsnames entry was not present as a service in the database, so I had to extend the existing service (which was not the default service, since it was without a domain).
## ## On the source side in the database where i want to take the data from: added service:
alter system set service_names = 'MYDB','MYDB.test.nl' scope = both;
SQL> show parameter service
NAME TYPE VALUE
———————————— ———– ——————————
service_names string MYDB, MYDB.test.nl
So now we have two services in place which we can use in the tnsnames.ora.
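With the service in place, the target side needs a matching tnsnames.ora entry. The post does not show the actual entry, so below is a hedged sketch; the alias mbMYDB matches the one used in the database link later on, while the host and port are placeholders to replace with your source VIP and listener port:

```
# Sketch of the target-side tnsnames.ora entry (host/port are placeholders).
mbMYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <source-vip>)(PORT = 33012))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYDB.test.nl)
    )
  )
```

A quick `tnsping mbMYDB` from the target server confirms the entry resolves and the listener answers.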
2. Time to set up a public dblink
## Articles by fellow bloggers recommended creating a PUBLIC db link (this seems mandatory). Since in my case I would do the import as system, a normal db link would be okay too. But for the scenario's sake a public database link is fine.
drop public DATABASE LINK old_MYDB;
## worked with this one
CREATE public DATABASE LINK old_MYDB CONNECT TO system IDENTIFIED BY xxxxxxx USING 'mbMYDB';
3. Seeing is believing, so test the db link.
## performed select
select 'x' from dual@old_MYDB;
4. Next stop, creating a directory for the logfile of the impdp.
Yes, that is correct: only a directory for the log file, not for the dump itself :-) . That is why I liked this scenario so much.
## created directory for the logfile
create directory acinu_imp as '/opt/oracle/MYDB/admin/create';
grant read,write on directory acinu_imp to system;
5. Time to perform the import.
Over the years I have used expdp and impdp a lot, but most of the time as an almost 1:1 clone of exp/imp. Since Google is your friend when looking for scenarios, it was great to explore the powerful EXCLUDE= parameter. As you will see, I am creating an import of the full database while excluding the schemas I don't care about.
Since I was, hmm, energy efficient, I wanted to type the full statement on the Linux command line, but I was punished for having double quotes in my command. Had I used a parfile, things would have been easier :-) . But since I wanted to stick to the scenario, I found that at OS level every double quote needs to be escaped with a backslash, as below:
## performed import with success with command below
impdp system full=yes "EXCLUDE=SCHEMA:\"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')\"" network_link=old_MYDB directory=acinu_imp logfile=AcinupImport.log parallel=2 job_name=MYDB_DMP_FULL
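As mentioned above, a parfile avoids the shell-escaping pain entirely. Here is a hedged sketch of an equivalent parameter file (the file name mydb_imp.par is my own choice, not from the original post), run with `impdp system parfile=mydb_imp.par`:

```
# mydb_imp.par -- equivalent of the one-liner above, no shell escaping needed
FULL=YES
NETWORK_LINK=old_MYDB
DIRECTORY=acinu_imp
LOGFILE=AcinupImport.log
PARALLEL=2
JOB_NAME=MYDB_DMP_FULL
EXCLUDE=SCHEMA:"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')"
```

Inside a parfile the double quotes are taken literally, so no backslashes are required.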
At first all my attempts failed with the errors below:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39200: Link name "OLD_ACINUP" is invalid.
ORA-02019: connection description for remote database not found
This made me check the services in the database and the entry in the tnsnames, and test it all again. After that, as the A-Team's Hannibal would say, "I love it when a plan comes together": it worked!
Happy reading ,
And as always, don't believe it just because it is printed.