Shutting down Exadata Storage cell for maintenance

There are times when you have to shut down an Exadata storage cell for maintenance reasons such as disk or memory replacement. You can use the following steps to safely shut down and start up the storage cell.

Log in to the storage cell as the root user

[root@ex01celadm09 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Mon Sep 17 09:28:58 CDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

Make sure it is safe to take the grid disks offline (the following command should return no rows)

CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

Deactivate all GRIDDISKs

CellCLI> ALTER GRIDDISK ALL INACTIVE
GridDisk DATAC1_CD_00_ex01celadm09 successfully altered
GridDisk DATAC1_CD_01_ex01celadm09 successfully altered
GridDisk DATAC1_CD_02_ex01celadm09 successfully altered
GridDisk DATAC1_CD_03_ex01celadm09 successfully altered
.
.
.
GridDisk RECOC1_CD_08_ex01celadm09 successfully altered
GridDisk RECOC1_CD_09_ex01celadm09 successfully altered
GridDisk RECOC1_CD_10_ex01celadm09 successfully altered
GridDisk RECOC1_CD_11_ex01celadm09 successfully altered

Make sure all grid disks are now inactive (the following command should return no rows)

CellCLI> LIST GRIDDISK WHERE STATUS != 'inactive'

Shut down the storage cell. You can also use the ILOM interface to power off the cell node.

[root@ex01celadm09 ~]# shutdown now

Broadcast message from root@ex01celadm09.corp.medtronic.com
(/dev/pts/0) at 9:35 ...

The system is going down for maintenance NOW!

Log in to the storage cell once it's back online

[root@ex01celadm09 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Mon Sep 17 10:50:25 CDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

Check grid disk status; the disks should still be inactive, with asmmodestatus showing OFFLINE

CellCLI> LIST GRIDDISK WHERE STATUS != 'inactive'

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 OFFLINE
DATAC1_CD_01_ex01celadm09 OFFLINE
DATAC1_CD_02_ex01celadm09 OFFLINE
.
.
.
RECOC1_CD_08_ex01celadm09 OFFLINE
RECOC1_CD_09_ex01celadm09 OFFLINE
RECOC1_CD_10_ex01celadm09 OFFLINE
RECOC1_CD_11_ex01celadm09 OFFLINE

Activate all GRIDDISKs

CellCLI> ALTER GRIDDISK ALL ACTIVE
GridDisk DATAC1_CD_00_ex01celadm09 successfully altered
GridDisk DATAC1_CD_01_ex01celadm09 successfully altered
GridDisk DATAC1_CD_02_ex01celadm09 successfully altered
GridDisk DATAC1_CD_03_ex01celadm09 successfully altered
.
.
.
GridDisk RECOC1_CD_08_ex01celadm09 successfully altered
GridDisk RECOC1_CD_09_ex01celadm09 successfully altered
GridDisk RECOC1_CD_10_ex01celadm09 successfully altered
GridDisk RECOC1_CD_11_ex01celadm09 successfully altered

Keep checking grid disk status; disks will show SYNCING while ASM resynchronizes (a polling sketch follows the output below)

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 SYNCING
DATAC1_CD_01_ex01celadm09 SYNCING
DATAC1_CD_02_ex01celadm09 SYNCING
.
.
.
RECOC1_CD_09_ex01celadm09 OFFLINE
RECOC1_CD_10_ex01celadm09 OFFLINE
RECOC1_CD_11_ex01celadm09 OFFLINE
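Rather than re-running the command by hand, a minimal bash sketch (my own addition; assumes it is run as root on the cell with cellcli in the PATH) can poll until nothing is left OFFLINE or SYNCING:

while cellcli -e "list griddisk attributes name, asmmodestatus" | grep -Eq 'OFFLINE|SYNCING'; do
    # Some grid disks are still offline or resynchronizing with ASM
    echo "Grid disks still resynchronizing, checking again in 60 seconds..."
    sleep 60
done
echo "All grid disks report ONLINE."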

Make sure all GRIDDISKs are online

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
DATAC1_CD_00_ex01celadm09 ONLINE
DATAC1_CD_01_ex01celadm09 ONLINE
DATAC1_CD_02_ex01celadm09 ONLINE
DATAC1_CD_03_ex01celadm09 ONLINE
.
.
.
RECOC1_CD_08_ex01celadm09 ONLINE
RECOC1_CD_09_ex01celadm09 ONLINE
RECOC1_CD_10_ex01celadm09 ONLINE
RECOC1_CD_11_ex01celadm09 ONLINE


Enable TDE for 12.2 Databases on Exadata Machine

I have seen a lot of customers run into the “Data at Rest Encryption” dilemma when they look into migrating databases to an Exadata Machine from traditional storage like EMC. Arrays like EMC’s provide encryption at the storage level, and in most cases that satisfies the compliance requirement. Unfortunately, Exadata storage disks are not encrypted by default, so if you need to comply with a “Data at Rest Encryption” requirement for your databases, you need to enable Oracle TDE. It’s important to understand that TDE is a separately licensed feature (part of Oracle Advanced Security), so make sure you are covered in terms of licensing. Here are the steps you can use to enable encryption for 12.2 databases on an Exadata Machine.

Step 1 : Location for TDE wallet ( All Nodes )

This is very important: you will probably have multiple Exadata nodes with multiple databases running on them. In order to have a separate wallet per database, you need to choose the wallet location based on either $ORACLE_SID or $DB_UNIQUE_NAME. I will be using ORACLE_SID for this blog, since it's set in most environments. Once you have identified the wallet location, add the following entry to the sqlnet.ora file.

ENCRYPTION_WALLET_LOCATION =
(SOURCE =(METHOD = FILE)(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)))

Step 2 : Create KEYSTORE (Node 1 Only)

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/MART1/encryption_keystore/' IDENTIFIED BY Password!;

Step 3 : Open KEYSTORE (Node 1 Only)

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Password!;

Step 4 : Set KEYSTORE Encryption Key (Node 1 Only)

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY Password! WITH BACKUP;

Step 5 : Copy wallet to other nodes

Make sure the wallet directories are created on all Exadata compute nodes, then copy the wallet files from node 1.

mkdir -p /u01/app/oracle/admin/MART2/encryption_keystore/
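A sketch of the copy itself, assuming the standard ewallet.p12 keystore file name and a hypothetical second compute node named ex01dbadm02 (repeat for each remaining node, adjusting the SID-based directory):

# copy the password keystore created on node 1 to node 2's wallet directory
scp /u01/app/oracle/admin/MART1/encryption_keystore/ewallet.p12 \
    oracle@ex01dbadm02:/u01/app/oracle/admin/MART2/encryption_keystore/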

Step 6 : Close & Open Wallet from Node 1 Only

-- Close Wallet
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY Password!;

-- Open Wallet
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Password!;

Step 7 : Check Wallet Status for all nodes from Node 1 Only

SELECT status FROM Gv$encryption_wallet;

Step 8 : Create AUTO LOGIN for Wallet (Node 1 Only)

Optionally, you can also create an auto-login keystore for your wallet so you don't have to open the wallet every time the database is restarted.

ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/admin/MART1/encryption_keystore/' IDENTIFIED BY Password!;

Step 9 : Copy AUTO LOGIN files to other nodes

Since the auto-login files were created on node 1 only, you need to copy them to the rest of the Exadata nodes.
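A sketch of that copy, assuming the standard cwallet.sso auto-login file name and the same hypothetical node names as before:

# copy the auto-login wallet created on node 1 to node 2's wallet directory
scp /u01/app/oracle/admin/MART1/encryption_keystore/cwallet.sso \
    oracle@ex01dbadm02:/u01/app/oracle/admin/MART2/encryption_keystore/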

Step 10 : Shutdown and Start Database using SRVCTL 

srvctl stop database -d MART

srvctl start database -d MART

Step 11 : Check Wallet Status 

Once the database is back online, the encryption wallet should show OPEN on all nodes.

 SELECT status FROM Gv$encryption_wallet;


Create Databases on Exadata Machine using DBCA silent mode

I strongly recommend creating databases on an Exadata Machine using the DBCA GUI, but you can use the following DBCA command to create a database if for some reason the GUI is not available.

To Create Database 

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo msplex01dbadm01,msplex01dbadm02,msplex01dbadm03 -ignorePreReqs
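Once DBCA finishes, a quick sanity check with srvctl (a sketch; the -d value assumes the mart database created by the command above):

# confirm the database is registered with Grid Infrastructure and running on all nodes
srvctl config database -d mart
srvctl status database -d mart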

To delete database 

dbca -silent -deleteDatabase -sourceDB mart -sysDBAUserName sys -sysDBAPassword OraPasswd1

Migrating Database from AIX to Exadata using Data Pump

There are many methods to migrate Oracle databases, but this blog focuses on migrating a large database from AIX to an Exadata Machine. This method can also be used to migrate and upgrade an Oracle database from Linux to Exadata if needed.

There is another popular method, cross-platform transportable tablespaces, that can be used to migrate databases off AIX. But if for some reason you cannot use the transportable method, Data Pump is your only option. With Data Pump, you can upgrade, migrate, and optimize the database in one shot.

The following steps are focused on migrating large databases using Data Pump. Ideally, you want to divide the migration into four steps: 1) metadata only, 2) data only, 3) indexes and constraints, 4) validation. Creating indexes and enabling constraints during a terabyte-scale data import can take days, which is why it's very important to separate them from the actual data import. You can use the network impdp feature only if you have a 10Gb (or faster) network between the source and target systems.

Step 1 : Create shell Database on target Exadata Machine using DBCA

You can use the DBCA silent option to create the database too, although I strongly recommend using the DBCA GUI to create databases on an Exadata Machine.

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo ex01dbadm01,ex01dbadm02,ex01dbadm03 -ignorePreReqs

Step 2 : Create tablespaces based on source database

I had to enable encryption and compression for my migration, but at a minimum I recommend enabling OLTP compression on an Exadata Machine.

select 'create bigfile tablespace '||tablespace_name||' DATAFILE ''+DATAC1'' SIZE '||sum(bytes)/1024/1024||'M ENCRYPTION USING ''AES256'' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);'
  from dba_data_files
 group by tablespace_name;
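The generated statements will look roughly like this (illustrative only; the tablespace name and size below are made up):

create bigfile tablespace FIN_MART_DATA DATAFILE '+DATAC1' SIZE 204800M ENCRYPTION
USING 'AES256' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);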

Step 3 : Create a public database link to the source database

Note: the DB link will be used for the network Data Pump import and to compare objects between the source and target databases.

CREATE public DATABASE LINK src_db CONNECT TO umair IDENTIFIED BY password! USING 'MART';

Step 4 : Create migration directory for logs and dump files

Create a directory object for Data Pump use on both the source and target databases

create or replace directory migration as '/etc/migration/mart';

Step 5 : Import Full database META DATA only using DB Link

nohup impdp / full=y content=metadata_only network_link=src_db directory=migration PARALLEL=16 logfile=imp_full_mart.log &

Step 6 : Import data schema by schema (FIN_MART)

Note: I am using the network option for my data migration, but if you don't have the network bandwidth, or you have a lot of unsupported objects, you should first export and then import the data using dump files.

nohup impdp / schemas=FIN_MART network_link=src_db exclude=index,constraint TABLE_EXISTS_ACTION=REPLACE directory=migration parallel=16 logfile=impdpFIN_MART.log &

 Step 7 : Generate Indexes & Constraints scripts

First create an export dump file (metadata only)

expdp / schemas=FIN_MART content=metadata_only directory=migration dumpfile=FIN_MART.dmp logfile=expdpFIN_MART.log

Then generate separate SQL scripts for indexes and constraints.

impdp / schemas=FIN_MART include=index sqlfile=FIN_MART_IDX.sql dumpfile=FIN_MART.dmp  directory=migration logfile=imp_FIN_MART_IDX.log
impdp / schemas=FIN_MART include=constraint sqlfile=FIN_MART_CONS.sql dumpfile=FIN_MART.dmp  directory=migration logfile=imp_FIN_MART_CONS.log

Step 8 : Create indexes using the SQL script

Update the FIN_MART_IDX.sql script, replacing PARALLEL 1 with PARALLEL 16, then execute it (a sed sketch for the edit follows).
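A small sed sketch for that edit (it assumes the generated DDL contains the literal token PARALLEL 1; review the file before and after running it):

# bump the index-build parallelism in place, keeping a .bak copy of the original script
sed -i.bak 's/PARALLEL 1 /PARALLEL 16 /g' FIN_MART_IDX.sql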

nohup sqlplus umair/password! @FIN_MART_IDX.sql &

Step 9 : Enable constraints using the SQL script

Update the FIN_MART_CONS.sql script, replacing ENABLE with ENABLE NOVALIDATE, then execute it.

nohup sqlplus umair/password! @FIN_MART_CONS.sql &

 Step 10 : Validate objects

Lastly, validate the objects using the following SQL and bring over any missing objects.

select owner, object_type, count(*) missing_objects
  from (select owner, object_type, object_name
          from dba_objects@src_db
         where owner = 'FIN_MART'
        minus
        select owner, object_type, object_name
          from dba_objects
         where owner = 'FIN_MART')
 group by owner, object_type
 order by owner, object_type;


Repeat steps 6 through 10 for each schema in your database.


Collecting Exadata ILOM Snapshot using CLI

Many of you might be asked by Oracle Support to provide an ILOM snapshot to troubleshoot Exadata hardware issues. I had to diagnose a hardware issue recently and was not able to use the web interface because of a firewall issue. Fortunately, you can generate an ILOM snapshot using the following CLI method.

Step 1 : Log in to the ILOM as the root user.

[root@msplex01dbadm02 ~]# ssh root@10.23.44.101

Password:
Oracle(R) Integrated Lights Out Manager
Version 4.0.0.28 r121827
Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.

Hostname: exa01dbadm01-ilom

Step 2 : Set snapshot dataset to normal.

-> set /SP/diag/snapshot dataset=normal

Set 'dataset' to 'normal'

Step 3 : Set snapshot output location.

Please note that you can use the root user here too, as long as root has permission to write to the target directory.

-> set /SP/diag/snapshot dump_uri=sftp://oracle:"password!"@10.21.100.22/etc/snapshot

Set 'dump_uri' to 'sftp://oracle:password!@10.21.100.22/etc/snapshot'

Step 4 : Change directory to snapshot

-> cd /SP/diag/snapshot

/SP/diag/snapshot

Step 5 : Check the status of the snapshot and make sure it's running

-> show
/SP/diag/snapshot
    Targets:
    Properties:
        dataset = normal
        dump_uri = (Cannot show property)
        encrypt_output = false
        result = Running

Step 6: Keep checking the status until it completes; it may take up to 10 minutes

-> show
/SP/diag/snapshot
    Targets:
    Properties:
        dataset = normal
        dump_uri = (Cannot show property)
        encrypt_output = false
        result = Collecting data into
          sftp://oracle@10.21.100.22/etc/snapshot/exa01dbadm01-ilom_XXXX30AG_2018-09-14T23-04-46.zip
          TIMEOUT: /usr/local/bin/spshexec show /SP/bootlist
          TIMEOUT: /usr/local/bin/create_ueficfg_xml
          Snapshot Complete.
          Done.

Step 7: Upload files to Oracle support.

oracle@10.21.100.22/etc/snapshot/exa01dbadm01-ilom_XXXX30AG_2018-09-14T23-04-46.zip
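If you need the zip locally before attaching it to the SR, a simple sketch (assuming you have SSH access to the SFTP host used above):

# pull the snapshot zip from the SFTP target to the current directory
scp oracle@10.21.100.22:/etc/snapshot/exa01dbadm01-ilom_*.zip .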


Reverse Exadata Elastic Configuration using elastic config marker

As you may already know, the elastic configuration process allows initial IP addresses to be assigned to database servers and cells, regardless of the exact customer configuration ordered. The customer-specific configuration can then be applied to the nodes.

Sometimes you can make a mistake and end up assigning the wrong IPs or hostnames to Exadata nodes. You can use the Exadata elastic config marker to revert the applied elastic configuration.

Problem : Wrong IPs applied to Exadata nodes

[root@exdbadm01 linux-x64]# ibhosts
Ca : 0x0010e00001d4f7a8 ports 2 "exadbadm02 S 192.168.10.3,192.168.10.4 HCA-1"
Ca : 0x0010e00001d691f0 ports 2 "exaceladm03 C 192.168.10.9,192.168.10.10 HCA-1"
Ca : 0x0010e00001d68e30 ports 2 "exaceladm01 C 192.168.10.5,192.168.10.6 HCA-1"
Ca : 0x0010e00001d68cd0 ports 2 "exaceladm02 C 192.168.10.7,192.168.10.8 HCA-1"
Ca : 0x0010e00001d60e00 ports 2 "exadbadm01 S 192.168.10.1,192.168.10.2 HCA-1"

Solution : Create a .elasticConfig marker file in the root directory (/) on all Exadata nodes and reboot. Please note that all the IPs will be reset to factory defaults.

Create the elastic marker on all nodes (a dcli sketch for doing this in one pass follows the reboot output below)

[root@node1 /]# cd /
[root@node1 /]# touch .elasticConfig
[root@node1 /]# reboot

Broadcast message from root@exdbadm01.itrans.int
(/dev/pts/0) at 18:38 ...

The system is going down for reboot NOW!
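If you prefer to create the marker on every node in a single pass, a dcli sketch (the all_group host-list file name is an assumption; adjust it to your environment):

# touch the elastic config marker on every host listed in all_group
dcli -g all_group -l root "touch /.elasticConfig"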

Log in again using the factory default IPs

[root@node8 linux-x64]# ibhosts
Ca : 0x0010e00001d4f7a8 ports 2 "node10 elasticNode 172.16.2.46,172.16.2.46 ETH0"
Ca : 0x0010e00001d691f0 ports 2 "node4 elasticNode 172.16.2.40,172.16.2.40 ETH0"
Ca : 0x0010e00001d68e30 ports 2 "node2 elasticNode 172.16.2.38,172.16.2.38 ETH0"
Ca : 0x0010e00001d68cd0 ports 2 "node1 elasticNode 172.16.2.37,172.16.2.37 ETH0"
Ca : 0x0010e00001d60e00 ports 2 "node8 elasticNode 172.16.2.44,172.16.2.44 ETH0"


Drop cell disks before converting to 1/8th rack

Today I would like to share an issue I faced during the deployment of an Exadata Eighth Rack. I hit the following error while executing Exadata deployment step 2 (Executing Update Nodes for Eighth Rack). The issue occurred because the storage cells were shipped with default cell disks, which need to be dropped before you can continue deploying the Eighth Rack.

Issue : 

[root@node1 linux-x64]# ./install.sh -cf Intellitrans-ex.xml -s 2
Initializing
Executing Update Nodes for Eighth Rack
Error: Storage cell [cellnode1, cellnode2, cellnode3] contains cell disks. Cannot setup 1/8th rack. Drop cell disks before converting to 1/8th rack rack.
Collecting diagnostics...
Errors occurred. Send /u01/onecommand/linux-x64/WorkDir/Diag-180626_150747.zip to Oracle to receive assistance.

ERROR:
Error running oracle.onecommand.deploy.operatingSystem.ResourceControl method setupResourceControl
Error: Errors occured...
Errors occured, exiting...

Reason : Cell disks already exist; you can validate by logging in to all storage nodes.

[root@cellnode1 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Tue Jun 26 16:13:33 EDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

CellCLI> list celldisk
FD_00_ru02 normal
FD_01_ru02 normal
FD_02_ru02 normal
FD_03_ru02 normal

Solution : drop celldisk all force;

[root@cellnode1 ~]# cellcli
CellCLI: Release 18.1.4.0.0 - Production on Tue Jun 26 16:18:11 EDT 2018

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

CellCLI> list celldisk
FD_00_ru02 normal
FD_01_ru02 normal
FD_02_ru02 normal
FD_03_ru02 normal

CellCLI> drop celldisk all force;

CellDisk FD_00_ru02 successfully dropped
CellDisk FD_01_ru02 successfully dropped
CellDisk FD_02_ru02 successfully dropped
CellDisk FD_03_ru02 successfully dropped

CellCLI> list celldisk
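If you have several cells to clean up, a dcli sketch can run the same fix across all of them in one pass (the cell_group host-list file name is an assumption; adjust it to your environment):

# drop the default cell disks on every cell, then verify none remain
dcli -g cell_group -l root "cellcli -e drop celldisk all force"
dcli -g cell_group -l root "cellcli -e list celldisk"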


Reducing Exadata Active Cores on Compute nodes

Recently, I had an opportunity to deploy an Eighth Rack Exadata Machine. As you might already know, this requires reducing the active CPU cores on both the compute nodes and the storage nodes. Per the Oracle documentation, this can all be done during Exadata deployment; make sure you reduce active CPU cores in the Capacity on Demand section of the OEDA process. In my case, the Exadata deployment (OEDA) didn't reduce active cores on the compute nodes and I had to reduce them manually on both DB nodes.

Problem Description : You can clearly see below that the Exadata deployment process skipped the compute nodes and only reduced CPU cores on the storage nodes.

[root@node1 linux-x64]# ./install.sh -cf Intellitrans-ex.xml -s 2 
Initializing 
Executing Update Nodes for Eighth Rack 

Skip Eighth rack configuration in compute node node1 

running setup on: celadm01 
running setup on: celadm03 
running setup on: celadm02 
cellnode3 total CPU cores set from 20 to 10 
cellnode2 needs total CPU cores set from 20 to 10 
cellnode31 needs total CPU cores set from 20 to 10 

Skip Eighth rack configuration in compute node node2 

Successfully completed execution of step Update Nodes for Eighth Rack [elapsed Time [Elapsed = 36051 mS [0.0 minutes] Fri Jul 13 20:31:36 EDT 2018]]
 
[root@node1 linux-x64]# dbmcli -e LIST DBSERVER attributes coreCount 
24/24 

Solution : alter dbserver pendingCoreCount=10 force ( repeat on all db nodes )

[root@node1 linux-x64]# dbmcli -e alter dbserver pendingCoreCount=10 force

Note: reboot the Exadata compute nodes for the new core count to take effect

[root@node1 linux-x64]# dbmcli -e LIST DBSERVER attributes coreCount

         10/24
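To confirm the new core count on all compute nodes in one shot, a dcli sketch (the dbs_group host-list file name is an assumption; adjust it to your environment):

# report active/total cores on every DB server
dcli -g dbs_group -l root "dbmcli -e list dbserver attributes coreCount"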


The vm.min_free_kbytes configuration is not set as recommended

I saw the following issue during the Exachk review of one of my Exadata deployments. After working with Oracle Support and the deployment team, it was declared a bug that will be fixed in a future Exachk release. I would still recommend opening an SR with Oracle Support if you see this issue reported in your Exachk report.

Problem Description 
--------------------------------------------------- 
CRITICAL => The vm.min_free_kbytes configuration is not set as recommended 

DATA FROM EXDBADM01 FOR VERIFY THE VM.MIN_FREE_KBYTES CONFIGURATION 

FAILURE: vm.min_free_kbytes is not set as recommended: 
socket count: 1 
minimum size: -1 
in sysctl.conf: 524288 
in active memory: 524288 

Status on node2: 
CRITICAL => The vm.min_free_kbytes configuration is not set as recommended 

DATA FROM node2 FOR VERIFY THE VM.MIN_FREE_KBYTES CONFIGURATION 

FAILURE: vm.min_free_kbytes is not set as recommended: 
socket count: 1 
minimum size: -1 
in sysctl.conf: 524288 
in active memory: 524288 

Error Codes 
--------------------------------------------------- 
FAILURE: vm.min_free_kbytes is not set as recommended:
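While you wait on the SR, you can at least see what the check is comparing by looking at the configured and running values yourself (a simple sketch, run on each node):

# value persisted in the config file vs. the value currently in effect
grep vm.min_free_kbytes /etc/sysctl.conf
sysctl vm.min_free_kbytes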


Clone Oracle Database Home on Exadata Machine

I was asked to clone a database home during one of my Exadata deployment projects. We wanted an additional database home for patching and isolation purposes, but that's a topic for a different blog. You can use the following guidelines to clone a database home on an Exadata machine.

Note: these steps need to be performed on all DB nodes.

Step 1 : Create a directory or new mount point for the database home. It's best to have a separate mount point for each database home on an Exadata Machine.

mkdir -p /u01/app/oracle/product/11.2.0.4/dbhome_2

Step 2 : Copy all files from the existing home to the new database home (dbhome_2) as the root user

[root@exdbadm01 dbhome_1]# cp * -rp /u01/app/oracle/product/11.2.0.4/dbhome_2/

Step 3 : Relink for RDS (required only on an Exadata Machine)

Set the ORACLE_HOME environment variable to the new home, then run:
cd $ORACLE_HOME/rdbms/lib
make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
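To confirm the binaries are linked with RDS, you can check the IPC protocol in use; a quick sketch (the skgxpinfo utility ships with recent 11.2/12c homes, so treat its presence as an assumption for your exact version):

# prints the IPC protocol the home is linked with; on Exadata this should report rds
$ORACLE_HOME/bin/skgxpinfo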

Step 4 : Clone and relink the DB home using the Oracle OUI installer in silent mode.

export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

cd $ORACLE_HOME/oui/bin

[oracle@node1 bin]$ ./runInstaller -silent -clone ORACLE_BASE="/u01/app/oracle" ORACLE_HOME="/u01/app/oracle/product/11.2.0.4/dbhome_2" ORACLE_HOME_NAME="OraDb11g_home2"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 24575 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-06-27_05-04-52PM. Please wait ...[oracle@node1 bin]$ Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.

You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2018-06-27_05-04-52PM.log
.................................................................................................... 100% Done.

Installation in progress (Wednesday, June 27, 2018 5:04:57 PM EDT)
............................................................................... 79% Done.
Install successful

Linking in progress (Wednesday, June 27, 2018 5:05:00 PM EDT)
Link successful

Setup in progress (Wednesday, June 27, 2018 5:05:17 PM EDT)
Setup successful

End of install phases.(Wednesday, June 27, 2018 5:05:38 PM EDT)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts

The cloning of OraDb11g_home2 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2018-06-27_05-04-52PM.log' for more details.
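As the installer output above notes, finish by running root.sh as root on each node, and then verify the cloned home; a minimal sketch using the paths from this example:

# run the post-clone root configuration script (as root, on every DB node)
/u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh

# confirm the new home is registered in the inventory and list its patch level
/u01/app/oracle/product/11.2.0.4/dbhome_2/OPatch/opatch lsinventory -oh /u01/app/oracle/product/11.2.0.4/dbhome_2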