Tuesday, November 27, 2012

CRS Errors

            Today I was installing an Oracle cluster on two new machines and hit an error during the post-installation checks, even though the installation itself had gone through cleanly.

Error - CRS not installed on any of the nodes

           Strange, isn't it? I started digging; in fact, crsctl was very helpful in diagnosing the issue.
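A few standard crsctl checks are useful at this point (run from the Grid Infrastructure home; the resource listing uses 11gR2 syntax):

crsctl check crs              # CRS, CSS and EVM daemons on the local node
crsctl check cluster -all     # the same checks across all nodes
crsctl stat res -t            # tabular status of all cluster resources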

I checked that the OCR and voting disks were present and accessible to the Oracle user.
The first error found: on one of the cluster nodes, the oraInst.loc file did not have the correct permissions.

The correct permissions are 644, and the file should be owned by root:root.

-rw-r--r-- 1 root root 63 2012-10-11 15:46 oraInst.loc
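A quick fix, assuming oraInst.loc is in /etc (its usual location on Linux):

chown root:root /etc/oraInst.loc
chmod 644 /etc/oraInst.loc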

            Once I had fixed the issue, I tried to run the post checks again.

./cluvfy stage -post crsinst -n rac1,rac2 -verbose

            And again a similar error for one of the nodes.

           I went in to check the logs and found that the check refers to the file /oraInventory/ContentsXML/inventory.xml.
           On one machine it was missing the listing of one node, and on the other the CRS="true" attribute was missing.

In inventory.xml the Grid Infrastructure home entry should look roughly like this (home name, path and node names will differ per installation):

<HOME NAME="OraGIHome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>


Updated the missing NODE NAME entry and added CRS="true".

So it looks like we are on track..

But not exactly.. Next I faced an issue with CTSS, the Cluster Time Synchronization Service. It looked not fully functional and was running in observer mode. I checked the documentation and found that this is fine: when NTP is configured on the nodes, CTSS is expected to run in OBSERVER mode.
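To confirm the mode (standard command; the exact message text may vary by version):

crsctl check ctss
# e.g. CRS-4700: The Cluster Time Synchronization Service is in Observer mode.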

If the time gap between the servers is large, the cluster will often refuse to start.

Fix - I needed to correct the time on the machines and then start the cluster.. we are good :)

on linux --

date --set "dd mon yyyy hh24:mi:ss"

Restart the CRS..
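For example (the date string is illustrative; run as root and adjust to your servers):

date --set "27 Nov 2012 15:46:00"
crsctl stop crs
crsctl start crs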

Friday, November 23, 2012

Secure Password Support

Oracle 11g introduces case-sensitive passwords for databases created with the default Oracle Database 11g enhanced security. The SEC_CASE_SENSITIVE_LOGON parameter must be set to TRUE to enable case-sensitive database passwords.

If the dump file is imported from Oracle Database 9i or 10g, users' passwords will remain case-insensitive until you manually reset them. The same rule applies when you upgrade a database from an earlier version to Oracle 11g.

alter system set SEC_CASE_SENSITIVE_LOGON = TRUE;
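A minimal sketch of the behaviour, using a hypothetical DEMO_USER account:

-- as a DBA
CREATE USER demo_user IDENTIFIED BY "OraCle123";
GRANT CREATE SESSION TO demo_user;

-- with SEC_CASE_SENSITIVE_LOGON = TRUE:
CONNECT demo_user/oracle123    -- fails with ORA-01017: invalid username/password
CONNECT demo_user/OraCle123    -- succeeds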

Case-Sensitive Password Files

Passwords created in the password file can be made case-sensitive by using the new ignorecase option of the orapwd utility (ignorecase=n keeps the passwords case-sensitive). Following is an example:

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=ORAcle123
entries=25 ignorecase=n

Change Default User Passwords

The new DBA_USERS_WITH_DEFPWD view reports the Oracle-supplied database accounts that still use their default passwords. It is good practice to change the passwords of the users displayed by this view.

select USERNAME from DBA_USERS_WITH_DEFPWD order by USERNAME;
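For example, to reset one of the reported accounts (SCOTT is used here purely as an illustration):

ALTER USER scott IDENTIFIED BY "N3wPassw0rd#1";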

Hiding Password Hash Values in DBA_USERS

To provide a further level of security, the DBA_USERS view in Oracle Database 11g has the PASSWORD column blanked out instead of displaying the hashed value of the password.

select USERNAME, PASSWORD from DBA_USERS order by USERNAME;

New Password Verification Function

Oracle 11g provides a new password verification function with stronger settings than those in the functions of earlier versions. This function, however, is not enabled by default.

The script $ORACLE_HOME/rdbms/admin/utlpwdmg.sql creates the new function (named VERIFY_FUNCTION_11G), enables it in the DEFAULT profile, and also creates the Oracle 10g function for backward compatibility.

@$ORACLE_HOME/rdbms/admin/utlpwdmg.sql
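To verify that the function is now attached to the profile:

SELECT profile, limit
FROM dba_profiles
WHERE resource_name = 'PASSWORD_VERIFY_FUNCTION';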

Creating Trigger for SQL Trace / Session Trace

The following logon trigger enables SQL trace (event 10046, level 12) for every new session of a given schema. The schema and trigger names below are placeholders; replace them with the schema you want to trace:

CREATE OR REPLACE TRIGGER scott_sql_trace_trigger
AFTER LOGON
ON scott.SCHEMA
BEGIN
  -- tag the trace file so it is easy to find in the dump directory
  execute immediate 'alter session set TRACEFILE_IDENTIFIER = ''xxxxxx''';
  execute immediate 'alter session set MAX_DUMP_FILE_SIZE = 1000000';
  -- 10046 level 12 = SQL trace including bind values and wait events
  execute immediate 'alter session set events ''10046 trace name context forever, level 12''';
  execute immediate 'alter session set STATISTICS_LEVEL = ALL';
END;
/
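Once tracing is done, the trace file of the current session can be located via V$DIAG_INFO (11g), and the trigger disabled when no longer needed (names as in the sketch above):

SELECT value FROM v$diag_info WHERE name = 'Default Trace File';

ALTER TRIGGER scott_sql_trace_trigger DISABLE;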


Parallel queries

set pagesize 500 linesize 500
set serveroutput on
set feedback off echo off
col username format a10
col osuser format a10
col machine format a10
col program format a30
col schemaname format a10

SELECT qcsid, username, status, schemaname, machine, logon_time, program, degree, req_degree
FROM SYS.V_$PX_SESSION p, V$SESSION s
WHERE p.saddr=s.saddr
ORDER BY qcsid, status, logon_time;

SELECT username, schemaname, qcsid, count(*) as num_parallel_proc
FROM SYS.V_$PX_SESSION p, V$SESSION s
WHERE p.saddr=s.saddr
GROUP BY username, schemaname, qcsid;

Temp Usage

set serveroutput off
col TABLESPACE for a15
col SIZE for a10
col sid_serial for a15
col program for a45
col username for a15
col SID_SERIAL for a15
col osuser for a15

set line 200
set pagesize 200
SELECT ta.tablespace_name,
       ta.total_space "TOTAL SPACE(MB)",
       tb.used_space "USED SPACE(MB)",
       ta.total_space - tb.used_space "FREE SPACE(MB)",
       ((tb.used_space/ta.total_space)*100)||'%' "USED%"
FROM   (select tablespace_name,
               sum(bytes)/1048576 total_space
        from   dba_temp_files
        group  by tablespace_name) ta,
       (select tablespace,
               sum(u.blocks*p.value)/(1024 * 1024) used_space
        from   v$sort_usage u, v$parameter p
        where  p.name = 'db_block_size'
        group  by tablespace) tb
WHERE  ta.tablespace_name = tb.tablespace(+)
/
SELECT b.tablespace,
       ROUND(((b.blocks*p.value)/1024/1024),2)||' MB' "SIZE",
       a.sid||','||a.serial# SID_SERIAL,
       a.username,
       a.osuser,
       a.status,
       to_char(a.logon_time,'DD-MM-YYYY:HH24:MI') "LOGON TIME",
       a.program
FROM   sys.v_$session a,
       sys.v_$sort_usage b,
       sys.v_$parameter p
WHERE  p.name = 'db_block_size'
AND    a.saddr = b.session_addr
ORDER  BY b.tablespace, b.blocks
/


Find Blocking in Database and related session information

Blocking in the Database

select s1.username || '@' || s1.machine ||
       ' ( SID=' || s1.sid || ' ) is blocking ' ||
       s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' )' AS blocking_status
from   gv$lock l1, gv$session s1, gv$lock l2, gv$session s2
where  s1.sid = l1.sid
and    s2.sid = l2.sid
and    l1.block = 1
and    l2.request > 0
and    l1.id1 = l2.id1
and    l1.id2 = l2.id2;

SQL related to session and object and row wait

select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid
order by q.piece;


select do.object_name,
row_wait_obj#, do.data_object_id, row_wait_file#, row_wait_block#, row_wait_row#,
dbms_rowid.rowid_create ( 1, do.data_object_id, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
from v$session s, dba_objects do
where sid=&sid
and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;

select * from dba_dml_locks;

select * from dba_ddl_locks;
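Once the blocker is identified, the session can be terminated if appropriate (the sid and serial# below are placeholders taken from the blocking query output):

ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;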

Wednesday, August 29, 2012

Copying SQL Plan Baselines between Databases


The DBMS_SPM package provides functionality for transferring SQL plan baselines between databases. First, a staging table must be created in the source database using the CREATE_STGTAB_BASELINE procedure.
BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE(
    table_name      => 'spm_stageing_tab',
    table_owner     => 'GPS_TEST',
    tablespace_name => 'USERS');
END;
/
The PACK_STGTAB_BASELINE function exports the SQL plan baselines to the staging table. There are several parameters that allow you to limit the amount and type of data you export. The following example exports all SQL plan baselines.
SET SERVEROUTPUT ON
DECLARE
  l_plans_packed  PLS_INTEGER;
BEGIN
  l_plans_packed := DBMS_SPM.pack_stgtab_baseline(
    table_name      => 'spm_stageing_tab',
    table_owner     => 'GPS_TEST');

  DBMS_OUTPUT.put_line('Plans Packed: ' || l_plans_packed);
END;
/
Plans Packed: 131

PL/SQL procedure successfully completed.

SQL>
The staging table is then transferred to the destination database using Data Pump or the original export/import utilities. Once in the destination database, the SQL plan baselines are imported into the dictionary using the UNPACK_STGTAB_BASELINE function. Once again, there are several parameters that allow you to limit the amount and type of data you import. The example after the transfer sketch below imports all SQL plan baselines owned by the user "GPS_TEST".
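A possible Data Pump transfer, assuming the default DATA_PUMP_DIR directory object exists on both databases and the password placeholders are replaced:

expdp gps_test/password tables=SPM_STAGEING_TAB directory=DATA_PUMP_DIR dumpfile=spm_stage.dmp logfile=spm_stage_exp.log

# copy spm_stage.dmp to the destination server, then:

impdp gps_test/password directory=DATA_PUMP_DIR dumpfile=spm_stage.dmp logfile=spm_stage_imp.log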
SET SERVEROUTPUT ON
DECLARE
  l_plans_unpacked  PLS_INTEGER;
BEGIN
  l_plans_unpacked := DBMS_SPM.unpack_stgtab_baseline(
    table_name      => 'spm_stageing_tab',
    table_owner     => 'GPS_TEST',
    creator         => 'GPS_TEST');

  DBMS_OUTPUT.put_line('Plans Unpacked: ' || l_plans_unpacked);
END;
/
Plans Unpacked: 11

PL/SQL procedure successfully completed.

SQL>

Wednesday, August 22, 2012

Shrink datafiles in all tablespace below HWM

The following script generates resize commands that shrink every datafile down to just above its high-water mark (it prompts once for the database block size):

Spool /tmp/shrink_tablespaces.sql

select 'alter database datafile ''' || file_name || ''' resize ' ||
ceil(ROUND((((nvl(hwm, 1) * &&BlockSize) / 1024 / 1024) + 50), -2)) || 'm' cmd
from dba_data_files a,
(select file_id, max(block_id + blocks - 1) hwm
from dba_extents
group by file_id) b
where a.file_id = b.file_id(+)
and ceil(blocks * &&BlockSize / 1024 / 1024) -
ceil((nvl(hwm, 1) * &&BlockSize) / 1024 / 1024) > 0
and a.tablespace_name not in
('SYSTEM', 'TEMP', 'UNDO_SDATA01', 'UNDO_SDATA02')
/

spool off;

@/tmp/shrink_tablespaces.sql


Schema Size

select 'TABLE' "SEGMENT_TYPE",round(sum(bytes)/1024/1024/1024) "SIZE in GB" from dba_segments where owner='&OWNER'
and segment_type in('TABLE','TABLE PARTITION','LOBSEGMENT') group by 'TABLE'
UNION
select 'INDEX' "SEGMENT_TYPE",round(sum(bytes)/1024/1024/1024) "SIZE in GB" from dba_segments where owner='&OWNER'
and segment_type in('INDEX','INDEX PARTITION','LOBINDEX') group by 'INDEX'
/



Monday, November 30, 2009

Data Guard - Logical Standby using Hotbackup

Data Guard is a technology used to minimize downtime in case of database failures and user mistakes. There are other solutions, such as RAC and volume managers, that protect the database and provide high availability by adding redundancy against single-instance failure. However, while these are good for high availability and scalability, they do not protect against user mistakes, data corruption, and other disasters that may destroy the database itself.

In the following paragraphs I am going to describe creating a logical standby database using HOT BACKUPS. Before we go into the creation of the database, let us look at the benefits of using a logical standby database.

Benefits of a Logical Standby database:- Data Guard can be configured in two ways: as a Physical Standby Database or as a Logical Standby Database. The main advantage over a physical standby is that a logical standby database can be open and used for reporting at the same time. This is possible because in a logical standby, SQL statements reconstructed from the archived logs shipped from the primary database are applied to the tables, and applying SQL requires the database to be in open mode.
The second important benefit is resource utilization: a logical standby database can have extra indexes and materialized views that may be required for reporting, since it is not mandatory to have the same physical structure as a physical standby.

=====================================================================================

PRE-REQUISITES:- Having seen the advantages, the primary database must be checked for the following points, which determine how well the standby database will support the log apply service:

1. Primary database should be in Archive log mode.
2. Primary database should be checked for objects that are good candidates for the log apply service. If some objects are found not to be candidates, they can often be made candidates by creating a primary key or a unique index.
3. Ensure Supplemental logging is enabled for the Primary database.

Below are the commands to check the above requirements:-

Log in to the primary database as SYSDBA, then issue the commands.

1. Archive Log List;
This will show whether the database is in archive log mode. If it is not, go ahead and put the database in archive log mode (the standard sequence is shown below).
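The standard sequence (requires a clean shutdown):

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;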

2. SELECT * FROM dba_logstdby_unsupported;

This will show whether the primary database contains tables and datatypes that are not supported by a logical standby database. If the primary database contains unsupported tables, log apply services will exclude those tables when applying changes to the logical standby database.

3. SELECT owner, table_name, bad_column FROM dba_logstdby_not_unique;

OWNER      TABLE_NAME   B
-------    ----------   -
VCSUSER    VCS          N

A bad_column value of ‘N’ indicates that the table contains enough column information to maintain the table in the logical standby database, whereas ‘Y’ indicates that a table column is defined using an unbounded data type, such as LONG.

Add a primary key to the tables that do not have one, to improve performance; for example:
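(The table and column names below are illustrative, following the sample output above.)

ALTER TABLE vcsuser.vcs ADD CONSTRAINT vcs_pk PRIMARY KEY (vcs_id);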

If the table has a primary key or a unique index with a non-null column, the amount of information added to the redo log is minimal.

4. Ensure supplemental logging is enabled and log parallelism is enabled on the primary database. Supplemental logging must be enabled because the logical standby database cannot use archived redo logs that contain both supplemental log data and no supplemental log data.

SELECT supplemental_log_data_pk,supplemental_log_data_ui FROM v$database;


SUP SUP
--- ---
YES YES

If supplemental logging is not enabled, the database must be put into supplemental logging mode. To do that, execute the following:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY,UNIQUE INDEX) COLUMNS;

ALTER SYSTEM SWITCH LOGFILE;


If log parallelism is not enabled, execute the following:

ALTER SYSTEM SET LOG_PARALLELISM=1 SCOPE=BOTH;

Since we are going to create the logical standby using hot backups, we need to start the Resource Manager. If you do not have a resource manager plan, you can use one of the system-defined plans and restart the primary database to make sure it is using the defined plan.

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN=SYSTEM_PLAN SCOPE=BOTH;

SHUTDOWN

STARTUP


====================================================================================

Once the database has been configured to support the logical standby, use the following steps to create the logical standby database.

1. Take a hot backup of the primary database: bring each tablespace into backup mode, copy the datafiles of that tablespace using an OS command, and bring the tablespace back online.

SELECT file_name, tablespace_name FROM dba_data_files;

This will list all the tablespaces and the datafiles we need to back up; the backup can be taken with RMAN or from the command line.

Below is an example of how the SYSTEM tablespace can be backed up from the command line. Follow the same steps for all tablespaces.

ALTER TABLESPACE SYSTEM BEGIN BACKUP;

! cp /opt/oracle/oradata/PRIMEDB/system01.dbf /backup/PRIMEDB/HOT/


ALTER TABLESPACE SYSTEM END BACKUP;

2. Create the backup control file.

ALTER DATABASE BACKUP CONTROLFILE TO '/backup/PRIMEDB/HOT/PRIMEDB_backup.ctl';

3. Bring the database to a QUIESCED state, with the Database Resource Manager configured for it as described above.

ALTER SYSTEM QUIESCE RESTRICTED;

The step above can be skipped only if you are using 10g Release 2.

4. Build the LogMiner dictionary.

EXECUTE DBMS_LOGSTDBY.BUILD;

5. Identify the archived redo log that contains the LogMiner dictionary and the starting SCN.

You need to switch the log file to create the archived log, and then query V$ARCHIVED_LOG.

ALTER SYSTEM ARCHIVE LOG CURRENT;

SELECT NAME FROM V$ARCHIVED_LOG
WHERE (SEQUENCE#= (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
WHERE DICTIONARY_BEGIN = 'YES' AND STANDBY_DEST='NO')
);


NAME
-----------------------------------------------------------
/opt/oracle/ARC/MYFN/MYFN_0001_0000000005.arc

SELECT MAX(FIRST_CHANGE#) FROM V$ARCHIVED_LOG
WHERE DICTIONARY_BEGIN='YES';


MAX(FIRST_CHANGE#)
------------------
2856516

Note: Remember to record the name of the archived redo log for use later in the creation process.

6. Bring the database back to normal (UNQUIESCED).

ALTER SYSTEM UNQUIESCE;

ALTER SYSTEM SWITCH LOGFILE;

7. Create the parameter file from the spfile on the primary database. The pfile created will be used to build the pfile of the standby database.

CREATE PFILE='/backup/PRIMEDB/HOT/Primary_init.ora' FROM SPFILE;

8. Copy Files from the Primary Database Location to the Standby Location.

Use an operating system copy utility to copy the following binary files from the primary database site to the standby site:

i) Backup data files
ii) Backup of control files
iii) Latest archived redo log
iv) init.ora file
v) password file


9. Set the init.ora Parameters on the Logical standby site.

DB_NAME='PRIMEDB'
INSTANCE_NAME='PRIMEDB_H'
LOG_ARCHIVE_DEST_1='LOCATION=/OPT/ORACLE/ARC/PRIMEDB MANDATORY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_FORMAT='PRIMEDB_%T_%S.ARC'
REMOTE_ARCHIVE_ENABLE=RECEIVE
LOG_ARCHIVE_START=TRUE
LOG_PARALLELISM=1
PARALLEL_MAX_SERVERS=9
STANDBY_FILE_MANAGEMENT='AUTO'
STANDBY_ARCHIVE_DEST='/OPT/ORACLE/ARC/PRIMEDB/STDBY'

# The following parameter is required only if the primary and standby databases
# are located on the same system.

LOCK_NAME_SPACE=PRIMEDB_H

10. Configure the Listener for Both the Primary and Standby Databases and Restart/reload the listener(s).

11. Because the online logs were not copied from the primary system, the redo logs will have to be cleared. To clear the redo logs, use the following statement for all the groups:

ALTER DATABASE CLEAR LOGFILE GROUP 1;

ALTER DATABASE CLEAR LOGFILE GROUP 2;


12. Recover the Logical Standby Database up to the SCN recorded in step 5 above.

Use the command below to recover the database:

ALTER DATABASE RECOVER AUTOMATIC FROM '/opt/oracle/ARC/PRIMEDB_H/'
UNTIL CHANGE 2856516 USING BACKUP CONTROLFILE;


If the error 'ORA-279: change %s generated at %s needed for thread %s' is raised, the automatic recovery will have to be cancelled and the remaining logs applied manually using the following command.

ALTER DATABASE RECOVER LOGFILE '/opt/oracle/ARC/PRIMEDB_H/MYDB_0001_0000000004.arc';

13. Turn on Data Guard on the Logical Standby Database.

ALTER DATABASE GUARD ALL;

14. Open the Logical Standby Database:

ALTER DATABASE OPEN RESETLOGS;

15. Reset the Database Name of the Logical Standby Database:

Run the Oracle DBNEWID (nid) utility to change the database name of the logical standby database. This changes the database name in the control file; until now, the db name in the control file has been the primary database name.

SHUTDOWN IMMEDIATE;

STARTUP MOUNT PFILE=''

EXIT


$> export ORACLE_SID=MYDB_H
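Then run the DBNEWID utility against the mounted instance (a sketch; the SYS password is a placeholder, and note that by default nid changes the DBID as well as the database name):

nid TARGET=SYS/password DBNAME=MYDB_H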

Modify the init.ora file and set the parameter DB_NAME=MYDB_H; if a password file is used, delete and recreate it.

SHUTDOWN IMMEDIATE;

16. Create a server parameter file for the standby database:

CREATE SPFILE FROM PFILE;

(This uses the default pfile location; add PFILE='...' if the file is elsewhere.)

17. Restart the Logical Standby Database:

STARTUP MOUNT;
ALTER DATABASE OPEN RESETLOGS;


18. Create a New Temporary File for the Logical Standby Database:

The temporary files are not included as part of the backup, so the temporary file will have to be recreated manually.

To create a Temporary file:

ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/PRIMEDB_H/temp01.dbf' SIZE 40M REUSE;

19. Register the Archived Redo Log and Start SQL Apply Operations:

To register the most recently archived redo log and begin applying data from the redo logs to the standby database, perform the following steps:

ALTER DATABASE REGISTER LOGICAL LOGFILE
'/opt/oracle/ARC/MYDB_H/MYDB_0001_0000000005.arc';


Specify the following SQL statement to begin applying redo logs to the logical standby database using the SCN number:

ALTER DATABASE START LOGICAL STANDBY APPLY INITIAL 2856516;

Note: The INITIAL keyword has to be used only for the first time. To apply redo logs thereafter, the following statements should be used.

ALTER DATABASE STOP LOGICAL STANDBY APPLY;

ALTER DATABASE START LOGICAL STANDBY APPLY;


20. Enable Archiving in Logical Standby Database:

This step has to be performed in the Primary Database to enable archiving to the Logical Standby Database. Use the following statements to start archiving and remote archiving:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='SERVICE=payroll3 LGWR NOAFFIRM' SCOPE=BOTH;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE SCOPE=BOTH;


To start remote archiving either of the following statements can be used:

ALTER SYSTEM ARCHIVE LOG CURRENT;

ALTER SYSTEM SWITCH LOGFILE;


This completes the configuration of Logical Standby Database from Hot Backup.
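To verify that SQL apply is keeping up, the progress can be checked on the standby using the standard Data Guard views:

SELECT applied_scn, newest_scn FROM dba_logstdby_progress;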

Finally - SQL Apply with a Logical Standby Database is a viable option for customers who need to implement a disaster recovery or high availability solution and use the same resources for reporting and decision support operations. The success of creating a Logical Standby Database depends a lot on how the tasks are executed and on the version being used. Before starting the creation of a Logical Standby Database, it is very important to make sure that all the initialization parameters are set correctly, that all the steps are followed in the correct order, and that the appropriate parameters are used. If everything is done properly, you should be able to do a clean configuration of the Logical Standby Database on the first go.