Tuesday, January 22, 2013

OCFS2 Configuration





OCFS2-related RPMs are now available on the Oracle Linux CD.

Install the following:
rpm -Uvh ocfs2-2.6.9-22.EL-1.2.1-1.i686.rpm \
           ocfs2console-1.2.1-1.i386.rpm \
           ocfs2-tools-1.2.1-1.i386.rpm \
           ocfs2-tools-debuginfo-1.2.1-1.i386.rpm
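
To confirm the packages were installed, you can query the RPM database:

rpm -qa | grep -i ocfs2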

Run the command below; it opens the console window used to add nodes:

$ ocfs2console 

Select the "Cluster > Configure Nodes..." menu option and dismiss the information message that follows by clicking the "Close" button.
You may also be asked whether OCFS2 should be started at startup/boot.


On the "Node Configuration" screen add the nodes and click on apply.
You may get the error for configuration.. It might be because of the hostname misconfiguration
You would require to delete the cluster config file in /etc/ocfs2/cluster.conf
Can be done as follows:-
$ /etc/init.d/ocfs2 stop
$ /etc/init.d/o2cb offline ocfs2
$ /etc/init.d/o2cb unload
$ rm -f /etc/ocfs2/cluster.conf
$ /usr/sbin/ocfs2console &

Now follow the above steps again to add nodes.
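
For reference, a generated /etc/ocfs2/cluster.conf for a two-node cluster typically looks like the sketch below (the node names and IP addresses here are illustrative, not from this installation):

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.101
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.102
        number = 1
        name = rac2
        cluster = ocfs2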
Edit the /etc/init.d/o2cb script and delete the following lines:
### BEGIN INIT INFO
# Provides: o2cb
# Required-Start: $network
# Should-Start:
# Required-Stop:
# Default-Start: 2 3 5
# Default-Stop:
# Description: Load O2CB cluster services at system boot.
### END INIT INFO

Re-register o2cb using the commands below:
$ chkconfig --del o2cb
$ chkconfig --add o2cb
$ chkconfig --list o2cb
o2cb            0:off   1:off   2:on    3:on    4:on    5:on    6:off
$
Now configure o2cb using the commands below:
$ /etc/init.d/o2cb offline ocfs2
$ /etc/init.d/o2cb unload
$ /etc/init.d/o2cb configure
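
The configure step prompts interactively. A typical (abridged) session looks like the following; the exact wording varies with the ocfs2-tools version:

# /etc/init.d/o2cb configure
Configuring the O2CB driver.
Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Writing O2CB configuration: OK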

Other useful commands:
  • "/etc/init.d/o2cb load" - Loads all OCFS2 modules.
  • "/etc/init.d/o2cb online" - Brings the cluster online.
  • "/etc/init.d/o2cb offline" - Takes the cluster offline.
  • "/etc/init.d/o2cb unload" - Unloads all OCFS2 modules.
Partition the available disks using fdisk.
The options can be chosen as n, 1, Enter, Enter, and w to write the partition table (see the sample session below).
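
A minimal interactive session, assuming the shared disk is /dev/sdb (depending on the fdisk version, it may also prompt for a partition type, where p selects primary):

# fdisk /dev/sdb
Command (m for help): n          <-- new partition
Partition number (1-4): 1        <-- first partition
First cylinder: <Enter>          <-- accept the default start
Last cylinder: <Enter>           <-- accept the default end (use the whole disk)
Command (m for help): w          <-- write the partition table and exit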
Start ocfs2console again.
Format the disk from the Tasks menu; the volume name can be anything you like.
Then mount the disks by clicking the mount icon.
To mount the filesystem persistently, make an entry in /etc/fstab on all nodes:
/dev/sdb1     /u02    ocfs2   _netdev,datavolume     0 0

The filesystem should now be available on all nodes.
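
You can verify this with the mounted.ocfs2 utility from ocfs2-tools; the -f option does a full detect and lists the nodes on which each OCFS2 volume is mounted (output format varies slightly between versions):

# mounted.ocfs2 -f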

Thursday, December 6, 2012

VMSTAT memory information

vmstat (virtual memory statistics) reports information about processes, memory, paging, block IO, traps, and CPU activity. Below is sample output generated by vmstat with no options:

bash> vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  1      0 200936584 2329428 53481392    0    0    60   113    0    0  2  1 97  0  0

Description of the default output:

Procs
    r: The number of processes waiting for run time.
    b: The number of processes in uninterruptible sleep.

Memory
    swpd: the amount of virtual memory used (KB).
    free: the amount of idle memory (KB).
    buff: the amount of memory used as buffers (KB).
    cache: the amount of memory used as cache (KB).

Swap
    si: Amount of memory swapped in from disk (KB/s).
    so: Amount of memory swapped to disk (KB/s).

IO
    bi: Blocks received from (read in from) a block device (blocks/s).
    bo: Blocks sent to (written out to) a block device (blocks/s).

System
    in: The number of interrupts per second, including the clock.
    cs: The number of context switches per second.

CPU
    us: Time spent running non-kernel code (user time, including nice time).
    sy: Time spent running kernel code (system time).
    id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
    wa: Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.
    st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.

Options associated with vmstat

       vmstat [-a] [-n] [delay [ count]]
       vmstat [-f] [-s] [-m]
       vmstat [-S unit]
       vmstat [-d]
       vmstat [-p disk partition]
       vmstat [-V]

-a displays active/inactive memory, given a 2.5.41 kernel or newer.

-f displays the number of forks since boot. Each process is represented by one or more tasks, depending on thread usage. This display does not repeat.

-m displays slabinfo.

-n causes the header to be displayed only once rather than periodically.

-s displays a table of various event counters and memory statistics. This display does not repeat.

delay is the delay between updates in seconds. If no delay is specified, only one report is printed, with the average values since boot.

count is the number of updates. If no count is specified and delay is defined, count defaults to infinity.

-d reports disk statistics (kernel 2.5.70 or newer required).

-p followed by a partition name gives detailed statistics for that partition (kernel 2.5.70 or newer required).

-S followed by k, K, m, or M switches the output units between 1000, 1024, 1000000, and 1048576 bytes respectively (see the example just below this list).

-V displays version information.
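
For example, to report the memory figures in megabytes, updating every 2 seconds for 3 iterations:

vmstat -S M 2 3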

Examples -

Report at 2-second intervals, repeated 5 times:

vmstat 2 5
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 200844476 2330340 53602492    0    0    60   112    0    0  2  1 97  0  0
 1  0      0 200853744 2330340 53602904    0    0    38   150 7055 10128  1  0 98  0  0
 2  0      0 200859260 2330340 53602900    0    0   103  3398 6828 9979  1  1 98  0  0
 1  0      0 200861660 2330340 53602904    0    0     5    58 6804 9784  1  0 99  0  0
 6  0      0 200824900 2330340 53602896    0    0    37    68 10740 11525  2  3 96  0  0

vmstat -a 2 4

procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa st
 3  0      0 200827012 4099080 46355812    0    0    60   112    0    0  2  1 97  0  0
 0  0      0 200836496 4099472 46353840    0    0    69   868 12479 12793  1  0 98  0  0
 1  0      0 200844692 4099864 46351732    0    0     5   232 12463 13112  1  1 98  0  0
 2  0      0 200831652 4099860 46370176    0    0   666  1275 13227 13570  2  1 97  0  0

 vmstat -f

    236676487 forks

Tuesday, November 27, 2012

CRS Errors

            Today I was installing an Oracle cluster on two new machines. The installation itself was clean, but the post-installation checks failed with an error.

Error - CRS not installed on any of the nodes

           Strange, isn't it? I started digging; in fact, crsctl was very helpful in diagnosing the issue.

I checked that the OCR and voting disks were present and accessible to the installation user.
The first error, found on one of the cluster nodes, was that oraInst.loc did not have the correct permissions.

The correct permissions are 644, and the file should be owned by root:root.

-rw-r--r-- 1 root root 63 2012-10-11 15:46 oraInst.loc

            Once I had fixed that issue, I ran the post-installation checks again:

./cluvfy stage -post crsinst -n rac1,rac2 -verbose

            And again, a similar error for one of the nodes.

           I went to check the logs and found that they referred to the file /oraInventory/ContentsXML/inventory.xml: on one machine it was missing the listing for one node, and on the other the CRS="true" attribute was missing. A correct Grid home entry lists every node and carries CRS="true", roughly like this (the home name and location will differ per installation):

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>

I updated the missing NODE NAME and added CRS="true".

So you would imagine we are on track now..

But not exactly.. Next I faced an issue with CTSS, the Cluster Time Synchronization Service: it looked not fully functional and was running in observer mode. I checked the documentation and found that this is fine; when an external time service such as NTP is configured, CTSS is expected to run in OBSERVER mode.
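
You can confirm the CTSS state with crsctl (11gR2 clusterware); the message returned indicates whether the service is in Active or Observer mode:

$ crsctl check ctss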

If the time gap between the servers is large, the cluster will usually fail to start.

Fix - I needed to correct the time on the machines and then start the cluster.. we are good :)

On Linux:

date --set "dd mon yyyy hh24:mi:ss"
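
For example (the timestamp here is only illustrative):

date --set "27 Nov 2012 14:30:00"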

Restart the CRS..
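
The standard clusterware commands for that, run as root on each node:

# crsctl stop crs
# crsctl start crs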

Friday, November 23, 2012

Secure Password Support

Oracle 11g introduces case-sensitive passwords for databases created with the default Oracle Database 11g enhanced security settings. The SEC_CASE_SENSITIVE_LOGON parameter must be set to TRUE to enable case-sensitive database passwords.

If a dump file is imported from Oracle Database 9i or 10g, users' passwords remain case-insensitive until you manually reset them. The same rule applies when you upgrade a database from an earlier version to Oracle 11g.

alter system set SEC_CASE_SENSITIVE_LOGON = TRUE;

Case-Sensitive Password Files

Passwords created in the password file can be made case-sensitive by using the new ignorecase option of the orapwd utility; ignorecase=n (the default) keeps the password file case-sensitive. Following is an example:

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=ORAcle123
entries=25 ignorecase=n

Change Default User Passwords

The new DBA_USERS_WITH_DEFPWD view reports Oracle-supplied database accounts that still have their default passwords. It is good practice to change the passwords of the users displayed by this view.

select USERNAME from DBA_USERS_WITH_DEFPWD order by USERNAME;
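
Each account reported can then be given a new, non-default password, for example (the username and password here are hypothetical):

alter user SCOTT identified by "Str0ng#Pass1";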

Hiding Password Hash Values in DBA_USERS

To provide a further level of security, the DBA_USERS view in Oracle Database 11g leaves the PASSWORD column blank instead of displaying the hashed value of the password.

select USERNAME, PASSWORD from DBA_USERS order by USERNAME;

New Password Verification Function

Oracle 11g provides a new password verification function with stronger settings than those in the functions of earlier versions. This function, however, is not enabled by default.

The script $ORACLE_HOME/rdbms/admin/utlpwdmg.sql creates the new function (named
VERIFY_FUNCTION_11G), enables it in the DEFAULT profile, and also creates the Oracle 10g function for backward compatibility.

@$ORACLE_HOME/rdbms/admin/utlpwdmg.sql
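
The script assigns the function to the DEFAULT profile itself, but the same assignment can also be made manually:

ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION verify_function_11G;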

Creating Trigger for SQL Trace / Session Trace


CREATE OR REPLACE TRIGGER <owner>_sql_trace_trigger
AFTER LOGON
ON <schema>.SCHEMA
BEGIN
  execute immediate 'alter session set TRACEFILE_IDENTIFIER="xxxxxx"';
  execute immediate 'alter session set MAX_DUMP_FILE_SIZE=1000000';
  execute immediate 'alter session set events ''10046 trace name context forever, level 12''';
  execute immediate 'alter session set STATISTICS_LEVEL=ALL';
END;
/
(Replace <owner> and <schema> with the trigger owner and the schema whose sessions are to be traced.)


Parallel queries

set pagesize 500 linesize 500
set serveroutput on
set feedback off echo off
col username format a10
col osuser format a10
col machine format a10
col program format a30
col schemaname format a10

SELECT qcsid, username, status, schemaname, machine, logon_time, program, degree, req_degree
FROM SYS.V_$PX_SESSION p, V$SESSION s
WHERE p.saddr=s.saddr
ORDER BY qcsid, status, logon_time;

SELECT username, schemaname, qcsid, count(*) as num_parallel_proc
FROM SYS.V_$PX_SESSION p, V$SESSION s
WHERE p.saddr=s.saddr
GROUP BY username, schemaname, qcsid;

Temp Usage

set serveroutput off
col TABLESPACE for a15
col SIZE for a10
col sid_serial for a15
col program for a45
col username for a15
col osuser for a15

set line 200
set pagesize 200
SELECT ta.TABLESPACE_NAME,
ta.total_space "TOTAL SPACE(MB)" ,
tb.used_space "USED SPACE(MB)",
ta.total_space - tb.used_space " FREE SPACE(MB)",
((tb.used_space/ta.total_space)*100)||'%' "USED%"
FROM
(select TABLESPACE_NAME,
sum(BYTES)/1048576 total_space
from dba_temp_files
group by TABLESPACE_NAME ) ta,
(SELECT tablespace,
sum(u.blocks*p.value)/(1024 * 1024) used_space
from v$sort_usage u, v$parameter p
where p.name='db_block_size'
group by tablespace) tb
where ta.tablespace_name=tb.tablespace(+)
/
SELECT b.tablespace,
ROUND(((b.blocks*p.value)/1024/1024),2)||' MB ' "SIZE",
a.sid||','||a.serial# SID_SERIAL,
a.username,
a.osuser,
a.status,
to_char(a.logon_time,'DD-MM-YYYY:HH24:MI') " LOGON TIME ",
a.program
FROM sys.v_$session a,
sys.v_$sort_usage b,
sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks
/


Find Blocking in Database and related session information

Blocking in the Database

select s1.username || '@' || s1.machine || ' ( SID=' || s1.sid || ' ) is blocking '
       || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' )' AS blocking_status
from gv$lock l1, gv$session s1, gv$lock l2, gv$session s2
where s1.sid = l1.sid
  and s2.sid = l2.sid
  and l1.block = 1
  and l2.request > 0
  and l1.id1 = l2.id1
  and l1.id2 = l2.id2;

SQL relating a session to its SQL text and to the object/row it is waiting on

select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid
order by q.piece;


select do.object_name,
row_wait_obj#, do.data_object_id, row_wait_file#, row_wait_block#, row_wait_row#,
dbms_rowid.rowid_create ( 1, do.data_object_id, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
from v$session s, dba_objects do
where sid=&sid
and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;

select * from dba_dml_locks;

select * from dba_ddl_locks;

Wednesday, August 29, 2012

Copying SQL Plan Baselines between Databases

Copying SQL Plan Baselines between databases

The DBMS_SPM package provides functionality for transferring SQL plan baselines between databases. First, a staging table must be created in the source database using the CREATE_STGTAB_BASELINE procedure.
BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE(
    table_name      => 'spm_stageing_tab',
    table_owner     => 'GPS_TEST',
    tablespace_name => 'USERS');
END;
/
The PACK_STGTAB_BASELINE function exports the SQL plan baselines to the staging table. Several parameters allow you to limit the amount and type of data you export. The following example exports all SQL plan baselines.
SET SERVEROUTPUT ON
DECLARE
  l_plans_packed  PLS_INTEGER;
BEGIN
  l_plans_packed := DBMS_SPM.pack_stgtab_baseline(
    table_name      => 'spm_stageing_tab',
    table_owner     => 'GPS_TEST');

  DBMS_OUTPUT.put_line('Plans Packed: ' || l_plans_packed);
END;
/
Plans Packed: 131

PL/SQL procedure successfully completed.

SQL>
The staging table is then transferred to the destination database using Data Pump or the original export/import utilities (a Data Pump sketch is shown at the end of this post). Once in the destination database, the SQL plan baselines are imported into the dictionary using the UNPACK_STGTAB_BASELINE function. Once again, several parameters allow you to limit the amount and type of data you import. The following example imports all SQL plan baselines owned by the user "GPS_TEST".
SET SERVEROUTPUT ON
DECLARE
  l_plans_unpacked  PLS_INTEGER;
BEGIN
  l_plans_unpacked := DBMS_SPM.unpack_stgtab_baseline(
    table_name      => 'spm_stageing_tab',
    table_owner     => 'GPS_TEST',
    creator         => 'GPS_TEST');

  DBMS_OUTPUT.put_line('Plans Unpacked: ' || l_plans_unpacked);
END;
/
Plans Unpacked: 11

PL/SQL procedure successfully completed.

SQL>
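
For the transfer step mentioned above, here is a minimal Data Pump sketch, assuming the default DATA_PUMP_DIR directory object and hypothetical credentials (run expdp on the source host, copy the dump file to the destination host's DATA_PUMP_DIR, then run impdp there):

expdp gps_test/password tables=GPS_TEST.SPM_STAGEING_TAB directory=DATA_PUMP_DIR dumpfile=spm_stage.dmp logfile=spm_stage_exp.log

impdp gps_test/password tables=GPS_TEST.SPM_STAGEING_TAB directory=DATA_PUMP_DIR dumpfile=spm_stage.dmp logfile=spm_stage_imp.log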