Data Pump Import Missing NOT NULL Constraints

Hi all, I'd like to share my experience with a recent database migration to an Exadata Machine using the Oracle Export/Import utility. Many of us have used Export/Import for database upgrades and migrations. If you migrate data using Data Pump with all the constraints and indexes, you will not face this issue; it only shows up when you use the Data Pump EXCLUDE/INCLUDE=CONSTRAINT option.

Let me start with the reason why we would copy data using Data Pump without constraints or indexes. When you are migrating a few GB of data, it does not matter that indexes are created and built during the data import. But if you are planning to migrate a 10+ TB database using Data Pump, you want to separate the data copy from the index and constraint creation. Creating indexes and constraints on terabyte tables can take days, since they run without any parallelism during the import. Hence many DBAs and migration experts use the Data Pump SQLFILE option to script out index and constraint creation. This way we can create the indexes in parallel and create the constraints with NOVALIDATE.

Now with 12c, Oracle has changed the EXCLUDE option: it will not import NOT NULL constraints if you exclude constraints during the data import. Please review the following Oracle Support note and make sure to bring the NOT NULL constraints back manually after the data import. Fortunately, a data import with the exclude constraints option still brings over CHECK constraints, for now.

Data Pump Import With EXCLUDE=CONSTRAINT Or INCLUDE=CONSTRAINT Is Excluding And Respectively Not Including NOT NULL Constraints (Doc ID 1930631.1)

After doing a Data Pump import (impdp) in a 12.1.0.2 database with the parameter EXCLUDE=CONSTRAINT, the NOT NULL constraints from the source tables are not present in the target tables. Sequence of steps leading to the problem:
– Data Pump export of tables with NOT NULL and referential integrity constraints
– Data Pump import with EXCLUDE=CONSTRAINT

For example:

==> In the source database, a table has the following structure:

Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
ID                                        NOT NULL NUMBER(38)
NAME                                      NOT NULL VARCHAR2(20)
AGE                                       NOT NULL NUMBER(38)
ADDRESS                                            CHAR(25)
SALARY                                             NUMBER(18,2)

==> After import, the table structure:

Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
ID                                                 NUMBER(38)    <-- NOT NULL constraint lost
NAME                                               VARCHAR2(20)  <-- NOT NULL constraint lost
AGE                                                NUMBER(38)    <-- NOT NULL constraint lost
ADDRESS                                            CHAR(25)
SALARY                                             NUMBER(18,2)

Likewise, when using Data Pump import (impdp) with the INCLUDE=CONSTRAINT parameter, NOT NULL constraints are not imported.
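One way to bring the NOT NULL constraints back is to generate them from the source data dictionary. A minimal sketch, assuming a database link to the source (here a hypothetical src_db) and a placeholder schema name; spool the output and run it on the target:

select 'alter table '||owner||'.'||table_name||
       ' modify ('||column_name||' not null);'
  from dba_tab_columns@src_db        -- hypothetical DB link to the source
 where nullable = 'N'                -- columns declared NOT NULL at the source
   and owner = 'YOUR_SCHEMA';        -- placeholder: your application schema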

Perform Table Recovery With Oracle 12c Using RMAN

Important Considerations:

  • An RMAN backup containing the dropped table must exist; you cannot recover the table from a backup taken before the table was created, even by applying archived redo.
  • The table cannot belong to SYS or SYSTEM and cannot reside in the SYSTEM or SYSAUX tablespaces.

Create Test Tables

SQL> create table test1 as select * from dba_tables;

Table created.

SQL> select count(*) from test1;

  COUNT(*)
----------
      6352

Make sure to perform Incremental or Full Backup
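For example, a level 0 incremental backup (a sketch; adjust channels, format, and retention to your environment):

RMAN> backup incremental level 0 database plus archivelog;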

Drop Test Tables

Connected to:

Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> drop table umair.test1;

Table dropped.

Set Recovery Destination Area

> mkdir -p /zfssa/dbm01/backup1/fra

> cd /zfssa/dbm01/backup1/fra

Recover table using time or SCN

RMAN> recover table umair.test1 until time "to_date('09/16/2017 21:01:15','mm/dd/yyyy hh24:mi:ss')" auxiliary destination '/zfssa/dbm01/backup1/fra';

Starting recover at 17-SEP-17
using target database control file instead of recovery catalog
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=509 instance=dbm01 device type=DISK
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Tablespace UNDOTBS2
Tablespace UNDOTBS3
Tablespace UNDOTBS4
Tablespace UNDOTBS5
Tablespace UNDOTBS6

auxiliary instance file tspitr_plti_80445.dmp deleted
Finished recover at 18-SEP-17
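The same recovery can be driven by SCN instead of a timestamp, and the REMAP TABLE clause restores the table under a new name so an existing copy is not touched. A sketch with a hypothetical SCN:

RMAN> recover table umair.test1 until scn 2736543
      auxiliary destination '/zfssa/dbm01/backup1/fra'
      remap table umair.test1:test1_recovered;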

Validate Table Recovery

SQL> select count(*) from test1;

  COUNT(*)
----------
      6352

Shutting down Exadata Storage cell for maintenance

There are times when you have to shut down just an Exadata storage node for maintenance reasons like a disk or memory replacement. You can use the following steps to safely shut down and start up a storage node.

Log in to the storage cell as the root user

[root@ex01celadm09 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Mon Sep 17 09:28:58 CDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

Make sure all GRIDDISKs can be deactivated safely (the following command should return no rows)

CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

Deactivate all GRIDDISKs

CellCLI> ALTER GRIDDISK ALL INACTIVE
GridDisk DATAC1_CD_00_ex01celadm09 successfully altered
GridDisk DATAC1_CD_01_ex01celadm09 successfully altered
GridDisk DATAC1_CD_02_ex01celadm09 successfully altered
GridDisk DATAC1_CD_03_ex01celadm09 successfully altered
.
.
.
GridDisk RECOC1_CD_08_ex01celadm09 successfully altered
GridDisk RECOC1_CD_09_ex01celadm09 successfully altered
GridDisk RECOC1_CD_10_ex01celadm09 successfully altered
GridDisk RECOC1_CD_11_ex01celadm09 successfully altered

Make sure GRIDDISKs are all inactive

CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

Shut down the storage cell; you can also use the ILOM interface to power off the cell node

[root@ex01celadm09 ~]# shutdown now

Broadcast message from root@ex01celadm09.corp.medtronic.com
(/dev/pts/0) at 9:35 ...

The system is going down for maintenance NOW!

Log in to the storage node once it's online

[root@ex01celadm09 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Mon Sep 17 10:50:25 CDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

Check GRIDDISKs Status

CellCLI> LIST GRIDDISK WHERE STATUS != 'inactive'

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 OFFLINE
DATAC1_CD_01_ex01celadm09 OFFLINE
DATAC1_CD_02_ex01celadm09 OFFLINE
.
.
.
RECOC1_CD_08_ex01celadm09 OFFLINE
RECOC1_CD_09_ex01celadm09 OFFLINE
RECOC1_CD_10_ex01celadm09 OFFLINE
RECOC1_CD_11_ex01celadm09 OFFLINE

Activate all GRIDDISKs

CellCLI> ALTER GRIDDISK ALL ACTIVE
GridDisk DATAC1_CD_00_ex01celadm09 successfully altered
GridDisk DATAC1_CD_01_ex01celadm09 successfully altered
GridDisk DATAC1_CD_02_ex01celadm09 successfully altered
GridDisk DATAC1_CD_03_ex01celadm09 successfully altered
.
.
.
GridDisk RECOC1_CD_08_ex01celadm09 successfully altered
GridDisk RECOC1_CD_09_ex01celadm09 successfully altered
GridDisk RECOC1_CD_10_ex01celadm09 successfully altered
GridDisk RECOC1_CD_11_ex01celadm09 successfully altered

Continue to Check GRIDDISK Status

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 SYNCING
DATAC1_CD_01_ex01celadm09 SYNCING
DATAC1_CD_02_ex01celadm09 SYNCING
.
.
.
RECOC1_CD_09_ex01celadm09 OFFLINE
RECOC1_CD_10_ex01celadm09 OFFLINE
RECOC1_CD_11_ex01celadm09 OFFLINE
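Rather than re-running the command by hand, you can poll from the cell's shell until every grid disk reports ONLINE. A minimal sketch:

# Loop until no grid disk reports anything other than ONLINE
while cellcli -e "list griddisk attributes name, asmmodestatus" | grep -qv ONLINE
do
    sleep 60
done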

Make sure all GRIDDISKs are online

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 ONLINE
DATAC1_CD_01_ex01celadm09 ONLINE
DATAC1_CD_02_ex01celadm09 ONLINE
DATAC1_CD_03_ex01celadm09 ONLINE
.
.
.
RECOC1_CD_08_ex01celadm09 ONLINE
RECOC1_CD_09_ex01celadm09 ONLINE
RECOC1_CD_10_ex01celadm09 ONLINE
RECOC1_CD_11_ex01celadm09 ONLINE


Enable TDE for 12.2 Databases on Exadata Machine

I have seen a lot of customers run into a "Data at Rest Encryption" dilemma when they look into migrating databases to an Exadata Machine from traditional storage like EMC. Arrays like EMC's provide encryption at the storage level, and in most cases that satisfies the compliance requirement. Unfortunately, Exadata storage disks are not encrypted by default, and if you need to comply with a "Data at Rest Encryption" requirement for your databases, you need to enable the Oracle TDE feature. It's important to understand that this is a licensed feature, so make sure you are covered in terms of licensing. Here are the steps you can use to enable encryption for 12.2 databases on an Exadata Machine.

Step 1 : Location for TDE wallet ( All Nodes )

This is very important: you will probably have multiple Exadata nodes with multiple databases running on them. In order to have multiple wallets, you need to choose the wallet location based on either $ORACLE_SID or $DB_UNIQUE_NAME. I will be using ORACLE_SID for this blog, since it's set in most environments. Once you have identified the wallet location, add the following entry to the sqlnet.ora file.

ENCRYPTION_WALLET_LOCATION =
(SOURCE =(METHOD = FILE)(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)))

Step 2 : Create KEYSTORE ( Node 1 Only )

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/MART1/encryption_keystore/' IDENTIFIED BY "Password!";

Step 3 : Open KEYSTORE ( Node 1 Only )

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "Password!";

Step 4 : Set KEYSTORE Encryption Key ( Node 1 Only )

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "Password!" WITH BACKUP;

Step 5 : Copy wallet to other nodes

Make sure you have the wallet directories created on all Exadata compute nodes:

mkdir -p /u01/app/oracle/admin/MART2/encryption_keystore/
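Then copy the password keystore from node 1 to each remaining node. A sketch; the hostname is hypothetical and the paths assume the per-SID directories used above (ewallet.p12 is the password-protected keystore file):

scp /u01/app/oracle/admin/MART1/encryption_keystore/ewallet.p12 \
    oracle@ex01dbadm02:/u01/app/oracle/admin/MART2/encryption_keystore/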

Step 6 : Close & Open Wallet from Node 1 Only

-- Close Wallet
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY "Password!";

-- Open Wallet
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "Password!";

Step 7 : Check Wallet Status for all nodes from Node 1 Only

SELECT status FROM gv$encryption_wallet;
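Since GV$ENCRYPTION_WALLET returns one row per instance, selecting INST_ID as well makes it easy to spot a node where the wallet did not open, for example:

SELECT inst_id, status, wallet_type FROM gv$encryption_wallet;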

Step 8 : Create AUTO LOGIN for Wallet (Node 1 Only)

Optionally, you can also create an auto-login keystore for your wallet so you don't have to open the wallet every time the database is restarted.

ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/admin/MART1/encryption_keystore/' IDENTIFIED BY "Password!";

Step 9 : Copy AUTO LOGIN files to other nodes

Since you just created new files on node 1 only, you need to copy them to the rest of the Exadata nodes.
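A sketch of the copy, again with a hypothetical hostname; cwallet.sso is the auto-login file created in step 8:

scp /u01/app/oracle/admin/MART1/encryption_keystore/cwallet.sso \
    oracle@ex01dbadm02:/u01/app/oracle/admin/MART2/encryption_keystore/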

Step 10 : Shut Down and Start Database using SRVCTL

srvctl stop database -d MART

srvctl start database -d MART

Step 11 : Check Wallet Status 

Once the database is back online, the encryption wallet should be open on all nodes.

SELECT status FROM gv$encryption_wallet;


Create Databases on Exadata Machine using DBCA silent mode

I strongly recommend creating databases on an Exadata Machine using the DBCA GUI, but you can use the following DBCA commands if for some reason the GUI is not working.

To Create Database 

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo msplex01dbadm01,msplex01dbadm02,msplex01dbadm03 -ignorePreReqs
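Once DBCA finishes, you can confirm the database is registered and running across the cluster, for example:

srvctl status database -d mart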

To Delete Database

dbca -silent -deleteDatabase -sourceDB mart -sysDBAUserName sys -sysDBAPassword OraPasswd1

Migrating Database from AIX to Exadata using Data Pump

There are many methods to migrate Oracle databases, but this blog is focused on migrating a large database from AIX to an Exadata Machine. This method can also be used to migrate and upgrade an Oracle database from Linux to Exadata if needed.

There is another popular method called "transportable databases" that can be used to migrate databases off the AIX operating system. But if for some reason you cannot use the transportable method, Data Pump is your only option. With Data Pump, you can upgrade, migrate, and optimize the database in one shot.

The following steps are focused on migrating large databases using Data Pump. Ideally, you want to divide the migration into four phases: 1) metadata only, 2) data only, 3) indexes and constraints, 4) validation. Creating indexes and enabling constraints during a terabyte data import can take days, which is why it's very important to separate them from the actual data import. You can use the network impdp feature only if you have a 10G network between the source and target systems.

Step 1 : Create shell Database on target Exadata Machine using DBCA

You can use the DBCA silent option to create the database too, though I strongly recommend using the DBCA GUI to create databases on an Exadata Machine.

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo ex01dbadm01,ex01dbadm02,ex01dbadm03 -ignorePreReqs

Step 2 : Create tablespaces based on source database

I had to enable encryption and compression for my migration, but I recommend that you at least enable OLTP compression on an Exadata Machine.

select 'create bigfile tablespace '||tablespace_name||
       ' DATAFILE ''+DATAC1'' SIZE '||sum(bytes)/1024/1024||'M'||
       ' ENCRYPTION USING ''AES256'' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);'
  from dba_data_files
 group by tablespace_name;

Step 3 : Create public database link to source database  

Note: The DB link will be used for the network Data Pump import and to compare objects between the source and target databases.

CREATE PUBLIC DATABASE LINK src_db CONNECT TO umair IDENTIFIED BY "password!" USING 'MART';

Step 4 : Create migration directory for logs and dump files

Create a directory for Data Pump use on both the source and the target:

create or replace directory migration as '/etc/migration/mart';

Step 5 : Import Full database META DATA only using DB Link

nohup impdp / full=y content=metadata_only network_link=src_db directory=migration PARALLEL=16 logfile=imp_full_mart.log &

Step 6 : Import Data schema by schema (FIN_MART)

Note: I am using the network option for my data migration, but if you don't have the network bandwidth, or you have a lot of unsupported objects, you should first export and then import the data using dump files.

nohup impdp / schemas=FIN_MART network_link=src_db exclude=index,constraint TABLE_EXISTS_ACTION=REPLACE directory=migration parallel=16 logfile=impdpFIN_MART.log &

 Step 7 : Generate Indexes & Constraints scripts

First create an export dump file (metadata only):

expdp / schemas=FIN_MART content=metadata_only directory=migration dumpfile=FIN_MART.dmp logfile=expdpFIN_MART.log

Then generate separate SQL scripts for the indexes and constraints.

impdp / schemas=FIN_MART include=index sqlfile=FIN_MART_IDX.sql dumpfile=FIN_MART.dmp  directory=migration logfile=imp_FIN_MART_IDX.log
impdp / schemas=FIN_MART include=constraint sqlfile=FIN_MART_CONS.sql dumpfile=FIN_MART.dmp  directory=migration logfile=imp_FIN_MART_CONS.log

Step 8 : Create Indexes using SQL script

Update the FIN_MART_IDX.sql script, replace PARALLEL 1 with PARALLEL 16, and then execute it.
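For example, a one-line substitution (a sketch; verify the exact PARALLEL clause text Data Pump generated before running it):

sed -i 's/PARALLEL 1 /PARALLEL 16 /g' FIN_MART_IDX.sql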

nohup sqlplus umair/password! @FIN_MART_IDX.sql &

Step 9 : Enable Constraints using SQL script

Update the FIN_MART_CONS.sql script, replace ENABLE with ENABLE NOVALIDATE, and then execute it.
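Again a one-line sketch; review the generated script first, since ENABLE can appear in other clauses:

sed -i 's/ ENABLE;/ ENABLE NOVALIDATE;/g' FIN_MART_CONS.sql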

nohup sqlplus umair/password! @FIN_MART_CONS.sql &

 Step 10 : Validate objects

Lastly, validate the objects using the following SQL and bring over any missing objects.

select owner, object_type, count(*) MISSING_OBJECTS
  from (
        select owner, object_type, object_name
          from dba_objects@src_db
         where owner = 'FIN_MART'
        minus
        select owner, object_type, object_name
          from dba_objects
         where owner = 'FIN_MART'
       )
 group by owner, object_type
 order by owner, object_type;


Repeat steps 6-10 for each schema in your database.


Collecting Exadata ILOM Snapshot using CLI

Many of you might be asked by Oracle Support to provide an ILOM snapshot to troubleshoot Exadata hardware issues. I had to diagnose a hardware issue recently and was not able to use the web interface because of a firewall issue. Fortunately, you can generate an ILOM snapshot using the following CLI method.

Step 1 : Log in to the ILOM as the root user.

[root@msplex01dbadm02 ~]# ssh root@10.23.44.101

Password:
Oracle(R) Integrated Lights Out Manager
Version 4.0.0.28 r121827
Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.

Hostname: exa01dbadm01-ilom

Step 2 : Set snapshot dataset to normal.

-> set /SP/diag/snapshot dataset=normal

Set 'dataset' to 'normal'

Step 3 : Set snapshot output location.

Please note that you can use the root user too, as long as root has permission to write to the target directory.

-> set /SP/diag/snapshot dump_uri=sftp://oracle:"password!"@10.21.100.22/etc/snapshot

Set 'dump_uri' to 'sftp://oracle:password!@10.21.100.22/etc/snapshot'

Step 4 : Change directory to snapshot

-> cd /SP/diag/snapshot

/SP/diag/snapshot

Step 5 : Check the status of the snapshot and make sure it's running

-> show

/SP/diag/snapshot
    Targets:
    Properties:
        dataset = normal
        dump_uri = (Cannot show property)
        encrypt_output = false
        result = Running

Step 6: Keep checking the status until it's completed; this may take up to 10 minutes

-> show

/SP/diag/snapshot
    Targets:
    Properties:
        dataset = normal
        dump_uri = (Cannot show property)
        encrypt_output = false
        result = Collecting data into
            sftp://oracle@10.21.100.22/etc/snapshot/exa01dbadm01-ilom_XXXX30AG_2018-09-14T23-04-46.zip

TIMEOUT: /usr/local/bin/spshexec show /SP/bootlist
TIMEOUT: /usr/local/bin/create_ueficfg_xml
Snapshot Complete.
Done.

Step 7: Upload the generated snapshot file to Oracle Support.

oracle@10.21.100.22/etc/snapshot/exa01dbadm01-ilom_XXXX30AG_2018-09-14T23-04-46.zip


Reverse Exadata Elastic Configuration using elastic config marker

As you may already know, the elastic configuration process allows initial IP addresses to be assigned to database servers and cells, regardless of the exact customer configuration ordered. The customer-specific configuration can then be applied to the nodes.

Sometimes you make mistakes and end up assigning wrong IPs or hostnames to the Exadata nodes. You can use the Exadata elastic config marker to revert the applied elastic configuration.

Problem : Applied wrong IPs to Exadata nodes

[root@exdbadm01 linux-x64]# ibhosts
Ca : 0x0010e00001d4f7a8 ports 2 "exadbadm02 S 192.168.10.3,192.168.10.4 HCA-1"
Ca : 0x0010e00001d691f0 ports 2 "exaceladm03 C 192.168.10.9,192.168.10.10 HCA-1"
Ca : 0x0010e00001d68e30 ports 2 "exaceladm01 C 192.168.10.5,192.168.10.6 HCA-1"
Ca : 0x0010e00001d68cd0 ports 2 "exaceladm02 C 192.168.10.7,192.168.10.8 HCA-1"
Ca : 0x0010e00001d60e00 ports 2 "exadbadm01 S 192.168.10.1,192.168.10.2 HCA-1"

Solution : Create a .elasticConfig marker file in the root directory (/) on all Exadata nodes. Please note that all the IPs will be changed back to factory defaults.

Create the elastic marker on all nodes

[root@node1 /]# cd /
[root@node1 /]# touch .elasticConfig
[root@node1 /]# reboot

Broadcast message from root@exdbadm01.itrans.int
(/dev/pts/0) at 18:38 ...

The system is going down for reboot NOW!

Log in again using the factory default IPs

[root@node8 linux-x64]# ibhosts
Ca : 0x0010e00001d4f7a8 ports 2 "node10 elasticNode 172.16.2.46,172.16.2.46 ETH0"
Ca : 0x0010e00001d691f0 ports 2 "node4 elasticNode 172.16.2.40,172.16.2.40 ETH0"
Ca : 0x0010e00001d68e30 ports 2 "node2 elasticNode 172.16.2.38,172.16.2.38 ETH0"
Ca : 0x0010e00001d68cd0 ports 2 "node1 elasticNode 172.16.2.37,172.16.2.37 ETH0"
Ca : 0x0010e00001d60e00 ports 2 "node8 elasticNode 172.16.2.44,172.16.2.44 ETH0"


Drop cell disks before converting to 1/8th rack

Hi! Today I'd like to share my experience with an issue I faced during the deployment of an Exadata Eighth Rack. I hit the following error while executing Exadata deployment step 2 (Executing Update Nodes for Eighth Rack). The issue occurred because the storage cells were shipped with default cell disks, which need to be dropped before the Eighth Rack deployment can continue.

Issue : 

[root@node1 linux-x64]# ./install.sh -cf Intellitrans-ex.xml -s 2
Initializing
Executing Update Nodes for Eighth Rack
Error: Storage cell [cellnode1, cellnode2, cellnode3] contains cell disks. Cannot setup 1/8th rack. Drop cell disks before converting to 1/8th rack rack.
Collecting diagnostics...
Errors occurred. Send /u01/onecommand/linux-x64/WorkDir/Diag-180626_150747.zip to Oracle to receive assistance.

ERROR:
Error running oracle.onecommand.deploy.operatingSystem.ResourceControl method setupResourceControl
Error: Errors occured...
Errors occured, exiting...

Reason : Cell disks already exist; you can validate by logging in to each storage cell.

[root@cellnode1 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Tue Jun 26 16:13:33 EDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

CellCLI> list celldisk
FD_00_ru02 normal
FD_01_ru02 normal
FD_02_ru02 normal
FD_03_ru02 normal

Solution : drop celldisk all force;

[root@cellnode1 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Tue Jun 26 16:18:11 EDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

CellCLI> list celldisk
FD_00_ru02 normal
FD_01_ru02 normal
FD_02_ru02 normal
FD_03_ru02 normal

CellCLI> drop celldisk all force;

CellDisk FD_00_ru02 successfully dropped
CellDisk FD_01_ru02 successfully dropped
CellDisk FD_02_ru02 successfully dropped
CellDisk FD_03_ru02 successfully dropped

CellCLI> list celldisk