Sunday, 12 June 2016

Data Dictionary Views vs V$ Views in Oracle

Here are the differences between data dictionary views and V$ views in Oracle.

DBA_ / USER_ / ALL_ views are built on the data dictionary - they're not available unless the database is mounted and opened.


  • DBA_, USER_ and ALL_ views are obtained from the data dictionary tables


V$ views run against the instance, and therefore may be available even when the database is not mounted, or is mounted but not opened, depending on the nature of the view.

  • V$ views are populated from the control file and instance memory structures



For example:

V$TABLESPACE is a view on X$KCCTS, which is an internal memory structure.
DBA_TABLESPACES is a view on the data dictionary table SYS.TS$.
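The practical difference is which views you can query at each startup stage. The sketch below is for illustration only; the stage names follow the standard STARTUP phases, and the view names listed per stage are just representative examples:

```shell
# Which view families are queryable at each startup stage.
# V$ instance views need only a started instance; control-file-backed
# V$ views need MOUNT; DBA_/ALL_/USER_ views need OPEN.
views_available() {
  case "$1" in
    NOMOUNT) echo "V\$ instance views (V\$INSTANCE, V\$SGA, V\$PARAMETER)" ;;
    MOUNT)   echo "V\$ instance views + control-file views (V\$TABLESPACE, V\$DATAFILE)" ;;
    OPEN)    echo "all V\$ views + data dictionary views (DBA_TABLESPACES, ...)" ;;
    *)       echo "unknown stage"; return 1 ;;
  esac
}

views_available NOMOUNT
views_available MOUNT
views_available OPEN
```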

Manually Adding Space to ASM Disk

Pre-requisites:

Ensure you set the ASM instance environment.
Ensure you connect to the ASM instance as SYSASM, not SYS/SYSDBA.
Ensure you enter the right disk group name in the ALTER DISKGROUP ... ADD DISK syntax.

Implementation Steps:

Step 1: Before adding a disk to the disk group, verify the disk group status using the query below.

select a.name DiskGroup, b.disk_number Disk#, b.name DiskName, b.total_mb, b.free_mb, b.path, b.header_status
from v$asm_disk b, v$asm_diskgroup a
where a.group_number (+) =b.group_number
and a.name='DATA'
order by b.group_number, b.disk_number, b.name;

Step 2: Identify the disk group for space addition, e.g. DATA, FLASH, GRID1, GRID2, etc.

SQL> SELECT GROUP_NUMBER, NAME, SECTOR_SIZE, BLOCK_SIZE, ALLOCATION_UNIT_SIZE, STATE, TYPE, TOTAL_MB, FREE_MB FROM V$ASM_DISKGROUP;

Step 3: Check for disks with CANDIDATE or PROVISIONED header status using the query below.

set lines 150
column path format a40
column name format a20

select name, path, mount_status, header_status, mode_status ,state, free_mb from v$asm_disk where header_status='CANDIDATE' order by 2;

Step 4: Once you have identified the available disks in Step 3, choose the disks to use for space addition.

sqlplus / as sysasm
SQL> ALTER DISKGROUP DATA ADD DISK 'ORCL:DISKHP003' NAME DISKHP03 REBALANCE POWER 11;

Step 5: When you add disks to a disk group, rebalancing happens automatically. Check the status of the rebalance using the query below. Rebalancing takes a while to complete, depending on the number of disks and the amount of space you are adding.

SQL> select * from v$asm_operation;
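If you prefer to watch the rebalance from a script instead of re-running the query by hand, the EST_MINUTES column can be extracted with awk. A minimal sketch; the sample output below is illustrative, standing in for what sqlplus would return from V$ASM_OPERATION:

```shell
# Extract the estimated minutes remaining from V$ASM_OPERATION output.
# In practice you would pipe the sqlplus output of the query above into
# this; the sample text here stands in for that output.
sample_output='GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN     11     11  4312    16724     2437           5'

# EST_MINUTES is the last field on the REBAL row.
est_minutes=$(printf '%s\n' "$sample_output" | awk '/REBAL/ {print $NF}')
echo "Rebalance estimated to finish in ${est_minutes} minute(s)"
```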

Step 6: Either query the ASM views or log in to the asmcmd prompt to check whether the space has been added.

>asmcmd lsdg
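The Free_MB column from lsdg can likewise be pulled out in a script. A sketch with illustrative sample output; real lsdg output has the same column layout:

```shell
# Pull the free space for one disk group out of "asmcmd lsdg" output.
# The sample below is illustrative; on a live system you would pipe the
# real "asmcmd lsdg" output into the awk command instead.
lsdg_sample='State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     92160    50134                0           50134              0             N  DATA/'

# Free_MB is the 8th column; match the row for the DATA disk group.
free_mb=$(printf '%s\n' "$lsdg_sample" | awk '$NF == "DATA/" {print $8}')
echo "DATA disk group free space: ${free_mb} MB"
```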

Step 7: Verify that the newly added disk's HEADER_STATUS has changed to MEMBER.

SQL> set lines 150
column path format a40
column name format a20
select name, path, mount_status, header_status, mode_status ,state, free_mb from v$asm_disk where header_status='MEMBER' order by 2;

Step  8: Check ASM Instance alert log for any issues.

Wednesday, 1 June 2016

Oracle RAC 11gR2 Installation on OEL 5 Part - 4

Step:1)

Network Configuration for RAC Nodes:

The public IPs, VIPs, SCAN VIPs and private IPs for the cluster interconnect are resolved through /etc/hosts.

Change the /etc/hosts entries as shown below:

[root@rac1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1           localhost.localdomain       localhost

# Public
147.43.0.1          rac1.pavan.com             rac1
147.43.0.2          rac2.pavan.com             rac2

#Private
192.168.0.1         rac1-priv.pavan.com        rac1-priv
192.168.0.2         rac2-priv.pavan.com        rac2-priv

#Vip
147.43.0.3          rac1-vip.pavan.com         rac1-vip
147.43.0.4          rac2-vip.pavan.com         rac2-vip

#Scan
147.43.0.5          scan.pavan.com             scan

::1                 localhost6.localdomain6    localhost6


Step:2)  Remove the NETWORKING_IPV6 parameter from the network file.

[root@rac1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac1.pavan.com

Step:3) Set the kernel parameters below to the recommended ranges in /etc/sysctl.conf.

[root@rac1 ~]# cat /etc/sysctl.conf|grep kernel.sem
# Oracle-Validated setting for kernel.sem is '250 32000 100 142'
kernel.sem = 250 32000 100 200


– By specifying the values in the /etc/sysctl.conf file, they persist when you restart the system. For the changed parameters to take immediate effect, restart the system or run the command below.

# sysctl -p
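To sanity-check that the running semaphore settings meet the documented minimums, the four kernel.sem fields can be compared one by one. A sketch; on a live system you would feed it the output of `sysctl -n kernel.sem`, and the minimum values used here are the Oracle-Validated ones quoted in the file comment above:

```shell
# Compare a kernel.sem setting (semmsl semmns semopm semmni) against a
# minimum. The "current" value stands in for `sysctl -n kernel.sem`.
current="250 32000 100 200"
minimum="250 32000 100 142"   # Oracle-Validated values from /etc/sysctl.conf

check_sem() {
  # $1 = current "a b c d", $2 = minimum "a b c d"
  set -- $1 $2
  [ "$1" -ge "$5" ] && [ "$2" -ge "$6" ] && [ "$3" -ge "$7" ] && [ "$4" -ge "$8" ]
}

if check_sem "$current" "$minimum"; then
  echo "kernel.sem OK"
else
  echo "kernel.sem below recommended minimums"
fi
```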
Step:4)  Configure NTP Settings
You have two options for time synchronization, as shown below.
1. Network Time Protocol [NTP]
If you prefer to use NTP instead of the CTSS daemon, you must enable the slewing option, which prevents time from being adjusted backward.
– Stop the NTP service
# service ntpd stop
– Enable slewing option,
# vi /etc/sysconfig/ntpd
Change the line
—-> OPTIONS=”-u ntp:ntp -p /var/run/ntpd.pid”
to
—-> OPTIONS=”-x -u ntp:ntp -p /var/run/ntpd.pid”
[Save and Exit]
– Start the NTP service
# service ntpd start
or use CTSS
2. Oracle Cluster Time Synchronization Service [CTSS]
Oracle 11gR2 Clusterware includes CTSS, which checks for an NTP configuration at Clusterware startup. If one is found, CTSS goes into “Observer” mode. If it does not find an NTP configuration, CTSSD becomes active and synchronizes time with a reference node in the cluster.
You can remove the ntp.conf file or just change the name, so that the CTSSD daemon will become active when the Clusterware is started.
# mv /etc/ntp.conf /etc/ntp.conf_orig
Step:5) Creating Oracle Users, Groups, Permissions and Installation Paths (on all the RAC nodes):

userdel -r oracle
groupdel oinstall
groupdel dba
groupadd -g 1000 oinstall
groupadd -g 1001 dba
groupadd -g 1002 asmadmin
groupadd -g 1003 asmdba
groupadd -g 1004 asmoper

Creation of Users
[root@node1 ~]# useradd -u 2000 -g oinstall -G dba,asmdba,asmadmin,asmoper oracle

mkdir -p /u01/app/11.2.0.3/grid
mkdir -p /u01/app/oracle/product/11.2.0.3/db_1
chown -R oracle:oinstall /u01 
chmod -R 777 /u01
passwd oracle

Step:6) SSH user Equivalency configuration (oracle):

On All the Cluster Nodes:

su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys

Generate the RSA key on both nodes :

/usr/bin/ssh-keygen -t rsa  ---> Generates key to /home/oracle/.ssh/id_rsa.pub .

Note : Copy the id_rsa.pub key from each node into the ~/.ssh/authorized_keys file on both nodes.

Ex: Finally, the authorized_keys file on both nodes should look like:

cat ~/.ssh/authorized_keys
ssh-rsa AAAA.......+EQiQ== oracle@rac2.pavan.com
ssh-rsa AAAAB3...rRIw== oracle@rac1.pavan.com


[Now the “authorized_keys” file will have two entries (one of rac1 and rac2 each) on both the nodes]
– On rac1 node, run the below commands. You should be able to login without asking for passwords.
$ ssh rac1
$ ssh rac1-priv
$ ssh rac2
$ ssh rac2-priv

Similarly run the above ssh commands on rac2 node. If everything is fine, we are done with passwordless configuration for Oracle user.
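The key-merging described in the note above can be simulated locally to see what the final authorized_keys should contain. The hostnames and key text below are placeholders, not real keys:

```shell
# Simulate merging each node's id_rsa.pub into a shared authorized_keys.
# Placeholder key lines stand in for the real id_rsa.pub contents.
workdir=$(mktemp -d)
echo "ssh-rsa AAAA...key1== oracle@rac1.pavan.com" > "$workdir/rac1_id_rsa.pub"
echo "ssh-rsa AAAA...key2== oracle@rac2.pavan.com" > "$workdir/rac2_id_rsa.pub"

# Concatenate both public keys, as done for ~/.ssh/authorized_keys on each node.
cat "$workdir"/rac1_id_rsa.pub "$workdir"/rac2_id_rsa.pub > "$workdir/authorized_keys"
chmod 600 "$workdir/authorized_keys"

# The merged file has one entry per node.
wc -l < "$workdir/authorized_keys"
```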
Step:7) Installing and Configuring ASMLib:

ASMLib is highly recommended for systems that will use ASM for shared storage within the cluster, due to the performance and manageability benefits it provides. Perform the following steps to install and configure ASMLib on the cluster nodes:

Download the following packages from the ASMLib OTN page

[root@rac1 ~]# cd /mnt/hgfs/Softwares/asmlib/
[root@rac1 asmlib]# ls -ltr
-rwxrwxrwx 1 root root 13929 Sep 11  2014 oracleasmlib-2.0.4-1.el5.i386.rpm
-rwxrwxrwx 1 root root 14176 Sep 11  2014 oracleasmlib-2.0.4-1.el5.x86_64.rpm
-rwxrwxrwx 1 root root 85303 Sep 11  2014 oracleasm-support-2.1.8-1.el5.i386.rpm

 Install the RPMs by running the following as the root user.

[root@rac1 asmlib]# rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm --nodeps --force

[root@rac1 asmlib]# rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm --nodeps --force

[root@rac1 asmlib]# rpm -ivh oracleasm-support-2.1.8-1.el5.i386.rpm --nodeps --force

– To check whether the packages are installed, execute the command below:

# rpm -qa | grep oracleasm

Step:8) Configure ASMLib by running the following as the root user:

[root@rac1 asmlib]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Step:9)  Create Partitions but Do Not Format
To check the number of disks, run the command below.
# fdisk -l
The command above lists all the disks attached to the server; the new disks will have no partitions yet. Always create the device as a primary partition and allocate the whole disk. Don't make multiple primary partitions on a single disk for ASM; it might not work properly.
Partition the disk devices (only from one node): create a single primary partition on each disk so it can be presented when creating the ASM disk using oracleasm.


[root@rac1 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-6527, default 6527): +45g

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (5473-6527, default 5473): 
Using default value 5473
Last cylinder or +size or +sizeM or +sizeK (5473-6527, default 6527): +5g

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.

Note : Run “fdisk” to create partitions on any one node. On other nodes, only run the “partprobe” command. The newly created partitions will get reflected automatically on the other nodes.

Step:10)  Using ASMLib to Mark the Shared Disks as Candidate Disks:


To create ASM disks using ASMLib:

As the root user, use oracleasm to create the ASM disks using the following syntax:

[root@rac1 ~]# oracleasm createdisk DSK05 /dev/sdb2
Writing disk header: done
Instantiating disk: done

[root@rac1 ~]# oracleasm createdisk DSK45 /dev/sdb1
Writing disk header: done

Instantiating disk: done

[root@rac1 ~]# oracleasm listdisks
DSK05
DSK45

[root@rac1 ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:DSK05 [9783585 blocks (5009195520 bytes), maxio 256]
Discovered disk: ORCL:DSK45 [87907617 blocks (45008699904 bytes), maxio 256]

On all the other nodes in the cluster, run the scandisks command as the root user to pick up the newly created ASM disks. You do not need to create the ASM disks on each node; create them on one node only.

rac2:
[root@rac2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DSK45"
Instantiating disk "DSK05"
[root@rac2 ~]# oracleasm listdisks
DSK05
DSK45

Step: 11) Run the cluvfy utility


You can run the cluvfy utility (runcluvfy.sh) to confirm whether all the prechecks pass.
 Download the Grid Infrastructure software from the OTN site and unzip it, then run the cluvfy utility.

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 > /tmp/precheck.log
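A quick way to triage the precheck log is to pull out only the failed checks. A sketch; the sample lines below are illustrative stand-ins for runcluvfy.sh output, and on a real run you would read /tmp/precheck.log instead:

```shell
# Count and list the failed checks in a cluvfy precheck log.
# The sample text stands in for /tmp/precheck.log.
precheck_sample='Check: Node reachability ... passed
Check: User equivalence for user "oracle" ... passed
Check: Package existence for "libaio-devel" ... failed
Pre-check for cluster services setup was unsuccessful on all the nodes.'

failed=$(printf '%s\n' "$precheck_sample" | grep -c 'failed')
echo "Failed checks: $failed"
printf '%s\n' "$precheck_sample" | grep 'failed'
```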

Step:12)  Oracle Grid Infrastructure Install.

$ ./runInstaller


Action: Select radio button 'Install and Configure Grid Infrastructure for a Cluster' and click ' Next> '



Action: Select radio button 'Advanced Installation' and click ' Next> '



Action: Accept 'English' as the language and click ' Next> '

Action: Specify your cluster name and the SCAN name you want to use and click ' Next> '  
Note: Make sure 'Configure GNS' is NOT selected.


Action: Use the Edit and Add buttons to specify the node names and virtual IP addresses you configured previously in your /etc/hosts file. Use the 'SSH Connectivity' button to configure/test the passwordless SSH connectivity between your nodes.





Action: Click on 'Interface Type' next to the interfaces you want to use for your cluster and select the correct values for 'Public', 'Private' and 'Do Not Use'. When finished click ' Next> '




Action: Select radio button 'Automatic Storage Management (ASM)' and click ' Next> '


Action: Select the 'DiskGroup Name', specify the 'Redundancy' and tick the disks you want to use; when done click ' Next> '
NOTE: The number of voting disks that will be created depend on the redundancy level you specify: EXTERNAL will create 1 voting disk, NORMAL will create 3 voting disks, HIGH will create 5 voting disks.
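The redundancy-to-voting-disk mapping in the note above can be expressed as a small helper for scripts; a sketch:

```shell
# Voting disk count by disk group redundancy level, per the note above:
# EXTERNAL -> 1, NORMAL -> 3, HIGH -> 5.
voting_disks() {
  case "$1" in
    EXTERNAL) echo 1 ;;
    NORMAL)   echo 3 ;;
    HIGH)     echo 5 ;;
    *)        echo "unknown redundancy level" >&2; return 1 ;;
  esac
}

for r in EXTERNAL NORMAL HIGH; do
  echo "$r redundancy -> $(voting_disks $r) voting disk(s)"
done
```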


Action: Specify and confirm the password you want to use and click ' Next> '


Action: Select NOT to use IPMI and click ' Next> '


Action: Assign the correct OS groups for OS authentication and click ' Next> '


Action: Specify the locations for your ORACLE_BASE and for the software location and click ' Next> '


Specify the locations for your Inventory directory and click ' Next> '





Note: OUI performs certain checks and comes back with the screen below






[root@rac1 ~]# /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh 
Response file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_oracle/orarun.log
Installing Package /tmp/CVU_11.2.0.3.0_oracle//cvuqdisk-1.0.9-1.rpm
error: failed to stat /var/run/vmblock-fuse: Resource temporarily unavailable
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]


[root@rac2 ~]# /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh 
Response file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_oracle/orarun.log
Installing Package /tmp/CVU_11.2.0.3.0_oracle//cvuqdisk-1.0.9-1.rpm
error: failed to stat /var/run/vmblock-fuse: Resource temporarily unavailable
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]










[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.


[root@rac1 ~]# /u01/app/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group OCRVD created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 0e72aef793124f4fbf26bf6548fcfb02.
Successfully replaced voting disk group with +OCRVD.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e72aef793124f4fbf26bf6548fcfb02 (ORCL:DSK05) [OCRVD]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.OCRVD.dg' on 'rac1'
CRS-2676: Start of 'ora.OCRVD.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]# 





[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac2 ~]# /u01/app/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#