Data Pump Import Parameters
The following are the various Data Pump Import parameters.
# File and Directory Related Parameters
SQLFILE - extracts the DDL from the export dump file into a SQL script instead of performing the import.
REUSE_DATAFILES - {Y | N}. Specifies whether the import should reuse existing data files when creating tablespaces.
DIRECTORY
LOGFILE
NOLOGFILE
PARFILE
DUMPFILE
# Filtering Parameters
TABLE_EXISTS_ACTION - tells Data Pump Import what to do when a table already exists. You can specify the following values:
SKIP - skips the table if it already exists
APPEND - appends rows to the existing table
TRUNCATE - truncates the table and reloads the data from the export dump file
REPLACE - drops the table if it exists, recreates it, and reloads it
CONTENT
EXCLUDE
INCLUDE
QUERY
Note 1: When you use CONTENT=DATA_ONLY, you cannot use either EXCLUDE or INCLUDE.
Note 2: When you use the QUERY parameter, Data Pump uses only the external tables data access method, rather than the direct path method.
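For illustration, a hedged example combining these options (the directory object dp_dir, dump file, and schema names are hypothetical):
$ impdp system/manager directory=dp_dir dumpfile=exp_scott.dmp schemas=scott table_exists_action=truncate exclude=index
This reloads the data into existing tables and skips recreating the indexes.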
# Job-Related Parameters
JOB_NAME
STATUS
PARALLEL
# Import Mode related Parameters
FULL
TABLES
SCHEMAS
TABLESPACES
TRANSPORT_TABLESPACES
TRANSPORT_FULL_CHECK - applicable only if you are using the NETWORK_LINK parameter.
TRANSPORT_DATAFILES - used during transportable tablespace imports to specify the list of data files the job should import into the target database.
# REMAP Parameters
REMAP_SCHEMA - from_schema:to_schema (move objects from one schema to another schema)
REMAP_TABLESPACE - 'source_tbs_name':'target_tbs_name' (Move objects from one tablespace to another tablespace)
REMAP_DATAFILE - 'filename_in_source_platform':'filename_in_target_platform' (Useful when importing to a different platform with a different file naming convention. It changes the source filename to the target filename in the SQL statements referenced in the dump file.)
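A hedged sketch of the REMAP parameters in use (all names are hypothetical):
$ impdp system/manager directory=dp_dir dumpfile=exp_scott.dmp remap_schema=scott:hr remap_tablespace=users:hr_data
Objects exported from SCOTT are created in HR, and their segments move from the USERS tablespace to HR_DATA.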
# Transform Parameters
TRANSFORM - transform_name:value[:object_type]
transform_name - one of:
SEGMENT_ATTRIBUTES = {Y | N}
STORAGE = {Y | N}
OID = {Y | N}
PCTSPACE = percentage multiplier used to adjust extent allocations and data file sizes
object_type - the type of objects to which the transform should be applied.
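For example, to ignore the source segment and storage attributes so that objects pick up the target tablespace defaults (names hypothetical):
$ impdp system/manager directory=dp_dir dumpfile=exp_scott.dmp schemas=scott transform=segment_attributes:n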
# Network Link Parameter
NETWORK_LINK - perform an import across the network without using dump files
# Flashback parameter
FLASHBACK_TIME
FLASHBACK_SCN
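Note that in Data Pump Import these flashback parameters apply to network imports (NETWORK_LINK). A hedged sketch, usually placed in a parameter file to avoid shell quoting issues (the link name source_db is hypothetical):
network_link=source_db
schemas=scott
flashback_time="TO_TIMESTAMP('19-12-2008 09:00:00','DD-MM-YYYY HH24:MI:SS')"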
Thanks
Friday, December 19, 2008
Monday, December 15, 2008
Data Pump Export Parameters
The following are the various Data Pump Export parameters.
# File and directory related parameters
DIRECTORY - Directory object name
DUMPFILE - File name. Can specify multiple file names
LOGFILE - File name
NOLOGFILE - {Y | N}
COMPRESSION - {METADATA_ONLY | NONE}. (Only metadata can be compressed; compressing data is not possible.)
FILESIZE - maximum size of each dump file. If the size is reached, the job is stopped.
PARFILE - Parameter file name.
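A hedged example of a simple schema export using these file parameters (the directory object dp_dir and schema SCOTT are hypothetical):
$ expdp system/manager directory=dp_dir dumpfile=exp_scott_%U.dmp logfile=exp_scott.log schemas=scott filesize=2G
The %U substitution lets Data Pump create additional dump files once a file reaches FILESIZE.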
# Export Mode related parameters
FULL
SCHEMAS
TABLES
TABLESPACES
TRANSPORT_TABLESPACES
TRANSPORT_FULL_CHECK - {Y | N}. Y checks two-way dependencies; N checks one-way dependencies.
# Export Filtering parameters
CONTENT - {ALL | DATA_ONLY | METADATA_ONLY}
EXCLUDE - object_type[:name_clause] (filters out the specified types of database objects, e.g., package, index)
INCLUDE - object_type[:name_clause] (the name_clause lets you apply a SQL condition)
QUERY (filters the data within an object)
SAMPLE - [[schema_name.]table_name:]sample_percent
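A hedged example of a name_clause and a query filter; such filters normally go in a parameter file because of the quoting involved (table and schema names are hypothetical):
exclude=TABLE:"LIKE 'TMP%'"
query=scott.emp:"WHERE deptno = 10"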
# Estimation Parameters
ESTIMATE - {BLOCKS | STATISTICS} (how much space the export job is going to consume)
ESTIMATE_ONLY - {Y | N} (estimates the space without actually starting the job)
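For example, to estimate the size of a schema export without writing a dump file (schema name hypothetical):
$ expdp system/manager schemas=scott estimate_only=y estimate=blocks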
# The network link parameters
NETWORK_LINK (database link) (exports data from a remote database and writes the dump file on the local server)
# Job related parameters
JOB_NAME (explicitly names the job. The master table has the same name as the job)
STATUS (displays the status of the job at a specified interval, in seconds)
FLASHBACK_SCN (If you specify this parameter, the export will be consistent as of this SCN)
FLASHBACK_TIME (export will be consistent as of this time)
PARALLEL (lets you set the number of worker processes)
# Encryption Parameter
ENCRYPTION_PASSWORD - pwd (prevents encrypted column data from being written as clear text in the dump file)
# Interactive Mode Export Parameters/Commands
ATTACH=schema.export_job_name (to attach to, and intervene in, a running export job)
Ctrl + C
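A hedged sketch of an interactive session (the job name shown is hypothetical; default job names follow the SYS_EXPORT_SCHEMA_nn pattern):
$ expdp system/manager attach=system.sys_export_schema_01
Export> status
Export> stop_job=immediate
Pressing Ctrl + C in a running client switches it from logging mode to this interactive prompt.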
Thanks
Labels:
Data Pump
Wednesday, December 10, 2008
Directory Objects
Directory Object is a logical structure that represents a physical location of a file system.
It is used in External Tables, UTL_FILE package and in Data Pump.
Important Points
- Always owned by the SYS user (even when created by another user).
- Names must be unique.
- You must have CREATE ANY DIRECTORY privilege to create Directory object.
- By default the owner gets the READ and WRITE privileges on the directory object.
- Grant the necessary privileges on this object if you want other users to access it.
- The READ and WRITE privileges mean that the Oracle database reads and writes the files on behalf of the user. The user has no direct access to these files.
Creating Directory Object
SQL> Create or Replace Directory data_pump_dir As '/oracle/export/dpump';
SQL> Grant Read, Write on Directory data_pump_dir to public;
Data Dictionary Views
SQL> Select * from ALL_DIRECTORIES;
It returns owner, directory name and directory path.
SQL> Select * from ALL_TAB_PRIVS where privilege in ('READ', 'WRITE');
It shows whether you have the required read/write privilege.
Thanks
Labels:
10g Features,
Data Pump
Tuesday, November 18, 2008
Managing FRA
The following queries are helpful in managing the FRA.
SQL> Select * from v$recovery_file_dest;
Name - Current location
Space Limit - FRA size
Space Reclaimable - Space reclaimable by deleting the obsolete files.
SQL> select * from v$flash_recovery_area_usage;
Used to check the space being used by the different file types and the space reclaimable by deleting files that are obsolete.
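For example, a hedged one-liner to see the percentage of the FRA currently in use (columns are from v$recovery_file_dest):
SQL> Select name, round(space_used*100/space_limit, 2) pct_used, space_reclaimable from v$recovery_file_dest;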
Labels:
FRA
FRA Errors
When the FRA is full, Oracle issues the following error message,
ORA-19815: Warning: db_recovery_file_dest_size of XXXXXX bytes is 100% used, and has 0 remaining bytes available.
ORA-19809: Limit set by the db_recovery_file_dest_size is exceeded.
ORA-19804: Oracle is unable to reclaim a specified amount of bytes from the limit set by the db_recovery_file_dest_size.
FRA - File Deletion Policy
Automatic File Deletion Policy for FRA
- Multiplexed redo log files and control files (Permanent files) are never deleted.
- Files older than the days set by the retention policy are available for deletion.
- Files that have been copied to tape are eligible for deletion. However, they are removed only when there is a space shortage in the FRA.
Labels:
FRA
FRA Parameters
The following parameters are involved in setting up a FRA.
1. DB_CREATE_FILE_DEST - location for all OMF data files.
2. DB_CREATE_ONLINE_LOG_DEST_n - location for control files and online redo log files. If this parameter is not set, Oracle creates all three types of files in the DB_CREATE_FILE_DEST location.
3. DB_RECOVERY_FILE_DEST_SIZE - Size of FRA.
4. DB_RECOVERY_FILE_DEST - Location of FRA.
5. LOG_ARCHIVE_DEST_n - Location of archive log files.
For eg.,
DB_CREATE_FILE_DEST = /oradata/dbfiles/
LOG_ARCHIVE_DEST_1 = 'LOCATION=/export/archive/arc_dest1'
LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST_SIZE = 350G
DB_RECOVERY_FILE_DEST = '/fra/rec_area'
One copy of the current control file and the online redo log files is stored in the FRA as well.
Control Files, redolog files and FRA
DB_CREATE_ONLINE_LOG_DEST_n : Setting this init parameter enables Oracle to create OMF control files and online redo log files in the location specified by the parameter.
DB_RECOVERY_FILE_DEST : Setting this init parameter enables Oracle to create OMF control files and online redo log files in the FRA.
Specifying both of the above parameters enables Oracle to create OMF-based control files and redo log files in both locations.
Omitting both parameters causes Oracle to create non-OMF control files and redo log files in the system-specific default location.
Thanks
Labels:
Backup,
FRA,
Parameters
Setup Flash Recovery Area
To set up the FRA, you need to set two init parameters in the following order.
SQL> Alter system set DB_RECOVERY_FILE_DEST_SIZE=375809638400 scope=spfile;
SQL> Alter system set DB_RECOVERY_FILE_DEST=+ASM_FLASH scope=spfile;
When you configure the FRA, the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST init parameters become obsolete. You must use the LOG_ARCHIVE_DEST_n parameters instead.
LOG_ARCHIVE_DEST_10 is implicitly set to USE_DB_RECOVERY_FILE_DEST, so archived log files are stored in the FRA by default. If you have configured other destinations, a copy of the archived files is placed in those locations as well.
USE_DB_RECOVERY_FILE_DEST refers to the FRA.
You can also use OEM and DBCA to configure FRA.
Disable FRA
Set DB_RECOVERY_FILE_DEST=''
Privilege to access all objects
Granting access to all the objects of one schema from another
To give schema B access to all the objects in schema A, connect as a user with the required administrative privileges and issue:
SQL> Grant Select any table to b;
SQL> Grant update any table to b;
or
SQL> Grant select any table, update any table, delete any table to b;
Then connect to schema B:
SQL> Select * from A.table_name;
There is no SELECT ANY VIEW privilege; the SELECT ANY TABLE privilege also allows selecting from views.
Note: SELECT ANY TABLE is not an object privilege; it is a system privilege.
ORA-01031 Insufficient Privilege
While using the flashback feature, I encountered the following error message when I tried to query DBA_TABLES, which is owned by the SYS user.
Select * from dba_tables as of timestamp sysdate-30/1440;
ORA-01031 Insufficient Privilege
The above error results from the init parameter O7_DICTIONARY_ACCESSIBILITY being set to FALSE.
O7_DICTIONARY_ACCESSIBILITY
Possible values - FALSE (default) and TRUE.
When set to FALSE, it restricts system privileges from accessing objects owned by the SYS user.
When set to TRUE, it allows access to objects owned by the SYS user.
In order to access objects owned by the SYS user while the parameter remains FALSE, you must explicitly grant the following roles (see the example after this list):
SELECT_CATALOG_ROLE
EXECUTE_CATALOG_ROLE
DELETE_CATALOG_ROLE
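For illustration, granting the query-related catalog roles to a hypothetical user SCOTT:
SQL> Grant select_catalog_role, execute_catalog_role to scott;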
Setting the parameter to TRUE is not recommended for security reasons. For example, setting it to TRUE causes the REMOTE_LOGIN_PASSWORDFILE parameter to be ignored, so an attacker could connect to the database remotely and compromise it.
Labels:
Errors,
Parameters
Monday, November 3, 2008
0403-031 - Fork Function Failed. There is not enough Memory available
Last week our production server hung. When I issued a command, it took a long time and finally displayed the following error message.
0403-031 - Fork Function Failed. There is not enough Memory available.
The reason for this is that there is not enough paging space.
To monitor the paging space, issue the following command:
$ lsps -a
Page Space Physical Volume Volume Group Size %Used
----------- ---------------- -------------- ---- ------
Paging00 hdisk0 rootvg 16GB 25%
hd6 hdisk1 rootvg 8GB 95%
You can use the following command to find out top 3 processes utilizing most of the page space.
$ svmon -gP -t 3
To add Paging space
1. Identify the physical volume where you want to add page space (For eg. hdisk1).
2. Identify the PPsize (say 128MB) and Free PPs (500) available by using the command lspv hdisk1.
3. Determine how much space you want to add (say 8192MB).
4. Divide 8192MB by the PP size (128MB) = 64. Hence you need to add 64 logical partitions.
5. Use the chps command to increase the paging space.
$ chps -s64 hd6
Alternatively, you can use the SMIT tool to increase the paging space.
Thursday, October 9, 2008
Archive to No Archive Mode
Steps to convert the database from Archivelog mode to NoArchivelog mode.
SQL> Alter system set log_archive_start=false scope=spfile;
System Altered
SQL> Shutdown Immediate
Database Closed
Database Dismounted
Instance Shutdown
SQL> Startup Mount;
Oracle Instance started
.....
Database Mounted
SQL> Alter Database NoArchivelog;
Database Altered
SQL> Alter database Open;
Database Altered.
SQL> Archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 3108
Next log sequence to archive 3109
Current log sequence 3109
Your database is in NoArchivelog Mode.
Labels:
Backup
Thursday, September 18, 2008
Start and Stop Instances in RAC
You can start and stop the database and its instances in a RAC environment using the Server Control (SRVCTL) utility.
Always use the SRVCTL utility from the ORACLE_HOME location:
$ cd $ORACLE_HOME
To start a database
$ srvctl start database -d database_name
To stop a database
$ srvctl stop database -d database_name
To start an instance
$ srvctl start instance -d database_name -i instance_name
To stop an instance
$ srvctl stop instance -d database_name -i instance_name
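For example, with a hypothetical database PROD and instance PROD1:
$ srvctl stop instance -d PROD -i PROD1
$ srvctl start instance -d PROD -i PROD1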
Labels:
RAC
Thursday, September 11, 2008
EXP-00091 Exporting questionable statistics
Set statistics=none to avoid this error.
The default value is statistics=estimate.
The error is caused by one of the following:
1. A fatal error occurred for some table rows while exporting.
2. You used the QUERY parameter.
3. The client and server character sets do not match.
After importing the data you have to gather fresh statistics anyway, so it is always advisable to use statistics=none.
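A hedged example of an export run with statistics disabled (user, password, and table names are hypothetical):
$ exp scott/tiger file=emp.dmp log=emp.log tables=emp statistics=none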
Labels:
Errors
Wednesday, September 10, 2008
Will a DDL statement commit automatically?
I have read in some books that issuing a DDL statement commits the current transaction automatically, irrespective of whether the DDL statement succeeds.
I performed a small test and came to the conclusion that:
- If the DDL statement executes successfully, the transaction is committed automatically.
- If the DDL statement fails due to a logical error, the transaction is committed automatically.
- If the DDL statement fails due to a syntax error, the transaction is neither committed nor rolled back; it is still open.
SQL> create table testddl ( a number, b date);
Table created.
SQL> insert into testddl values (1, sysdate);
1 row created.
SQL> insert into testddl values (2, sysdate);
1 row created.
Not committed.
SQL> alter table testddl add constraint a_uq unique (a);
Table altered.
Automatically commits - the DDL statement succeeded.
SQL> rollback;
Rollback complete.
SQL> select * from testddl;
A B
-- --------
1 10-SEP-08
2 10-SEP-08
SQL> insert into testddl values (3, sysdate);
1 row created.
SQL> insert into testddl values (4, sysdate);
1 row created.
not committed
SQL> alter table testddl add constraint b_fk b references test (b);
alter table testddl add constraint b_fk b references test (b)
*
ERROR at line 1:ORA-01430: column being added already exists in table
Error, but it commits automatically. The DDL failed due to a logical error.
SQL> rollback;
Rollback complete.
SQL> select * from testddl;
A B
-- --------
1 10-SEP-08
2 10-SEP-08
3 10-SEP-08
4 10-SEP-08
SQL> insert into testddl values (5, sysdate);
1 row created.
SQL> insert into testddl values (6, sysdate);
1 row created.
not committed
SQL> alter table testddl drp constraint a_uq;
alter table testddl drp constraint a_uq
*
ERROR at line 1:ORA-01735: invalid ALTER TABLE option
Error. The DDL failed due to a syntax error. It will not commit automatically.
SQL> rollback;
Rollback complete.
SQL> select * from testddl;
A B
-- --------
1 10-SEP-08
2 10-SEP-08
3 10-SEP-08
4 10-SEP-08
This is how a DDL statement is processed:
parse the DDL
if the parse succeeds then
  commit
  execute the DDL
  commit
else
  do nothing
end if
Hope you have enjoyed...
Labels:
Miscellaneous
Sunday, September 7, 2008
dblinks...compile problem in forms6i
I faced a bizarre problem while using a database link in Forms 6i. I created a public database link, and when I queried through SQL*Plus I was able to retrieve the data, so everything was fine with tnsnames.ora and the network. But when I issued the same statement in Forms 6i, I was not able to compile it. When I tried to compile (Ctrl+K and Ctrl+Shift+K), Form Builder closed without throwing any error message. I created a synonym for the database link, but the problem persisted. Our database is 9i and the remote database is 10g, so I thought it could be a version problem. I created a view for that statement, and finally I was able to compile it.
So if any of you face such a problem, one solution is to create a view for the statement that uses the database link. If any of you have faced this error and found another solution, please do share it with me.
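A hedged sketch of the workaround (the link name remote_db and table emp are hypothetical):
SQL> Create or Replace View v_remote_emp As Select * from emp@remote_db;
The form then selects from v_remote_emp instead of referencing the database link directly.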
Labels:
Miscellaneous
Monday, September 1, 2008
ORA-03113 End of File on Communication Channel
Last week we had a problem with the annoying error message ORA-03113: end of file on communication channel. The users usually run reports that take 3 to 4 hours; since last week they had not been able to run them, as the reports terminated with ORA-03113.
There was no error reported in the alert log file and no trace file was generated. I even tried altering the SQLNET.ORA parameter SQLNET.EXPIRE_TIME, but nothing worked.
The network was fine and there was no DBLINK in the database.
After much diagnosis and discussion with the networking people, I found the error was due to the connection timeout setting in the firewall. We changed the value from 1 hour to 5 hours and no longer face the error.
How to solve the ORA-03113 error
It occurs because the connection between the client and the server process was broken. We need to find out why the connection broke. There are many possible reasons; use the following steps to narrow down the cause.
Step 1: Look in the alert log file for any error message. If the ORA-03113 error is reported in the alert log, it will be accompanied by another error message, for example an Oracle internal error or a server crash; try to solve that issue first. If there is no error in the alert log and no trace file was generated, it may not be a database error.
Step 2: Check the size of the listener.log file. If the file is too large, stop the listener, move the old listener.log aside, and restart the listener:
$ LSNRCTL STOP
$ mv listener.log listener_old.log
$ LSNRCTL START ( This will create a new listener.log file).
Step 3: Check for network issues. Ping the server. Check for any protocol error. Get the help of networking people in resolving these errors.
Step 4: Firewall error. There is a setting in the firewall, namely connection timeout; look at its value and try changing it to a higher one. One way to find out whether the error is due to the firewall setting is to open a telnet session simultaneously and leave it idle until your report completes or returns the error. Then try to issue any command in the telnet session; if the session hangs and you cannot type the command, the error is due to the firewall setting.
Step 5: If none of the above steps works, contact Oracle Support.
Labels:
Errors
Tuesday, July 22, 2008
Configure Archive Log Mode
Oracle databases are created in NOARCHIVELOG mode, which means that you can only take consistent cold backups.
In order to perform online backups, your database must be running in ARCHIVELOG mode. Archive log files can also be transmitted to, and applied on, a standby database in a DR (Disaster Recovery) site.
To find out which mode the database is running in, log in as the SYS user and issue the following statement:
SQL> archive log list
Database log mode : Archive Mode
Automatic archival : Enabled
Archive destination : /cgildb/oradata/archive
Oldest online log sequence : 3413
Next log sequence to archive : 3415
Current log sequence : 3415
You can also issue the following statement.
SQL> Select log_mode from v$database;
LOG_MODE
---------------
Archivelog
Convert NoArchive to ArchiveLog mode
Step 1: Set the following Initialization Parameter
1. Log_archive_dest_1 = 'LOCATION=/export/prodarch' MANDATORY REOPEN
2. Log_archive_dest_2 = 'SERVICE=stbyprod' OPTIONAL REOPEN=60
3. Log_archive_format = 'prod_%t_%r_%s.arc'
4. Log_archive_max_processes= 2
5. Log_archive_min_succeed_dest = 1
6. Log_archive_start = True (deprecated from Oracle 10g).
Step 2:
SQL> Shutdown Immediate
Database Closed
Database Dismounted
Instance Shutdown
SQL> Startup Mount
Oracle Instance started
.....
Database Mounted
SQL> Alter Database Archivelog;
Database Altered
SQL> Alter Database Open;
Database Altered.
That's it. Your database is in Archivelog mode now.
About the Related Initialization Parameters
1. Log_archive_dest_n : n = 1 to 10. The destination can be on the local machine or on a remote machine (standby). The syntax is,
log_archive_dest_n = 'null_string | SERVICE=service_name (from tnsnames) | LOCATION=directory [MANDATORY | OPTIONAL] [REOPEN[=integer]]'
where,
Mandatory - archiving must succeed at this destination.
Reopen - (time in secs) retries to archive when it did not succeed the first time. (default 300)
2. Log_archive_format: Format of the archived log file names. Use any text in combination with the substitution variables to name each file uniquely. The substitution variables are,
%s - Log Sequence Number
%t - Thread Number
%r - Resetlogs ID (Ensure uniqueness when you reset the log sequence number)
%d - Database ID.
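For example, the format 'prod_%t_%r_%s.arc' with thread 1, resetlogs ID 654321, and log sequence 3415 produces a file named prod_1_654321_3415.arc.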
3. Log_archive_dest_state_n: {ENABLE | DEFER | ALTERNATE | RESET}
Enable - available (default)
Defer - temporarily unavailable
Alternate - Initially deferred. Available when there is a failure in the primary destination.
4. Log_archive_max_processes: Maximum number of active archiver processes. Default 2. Range 1 - 10.
5. Log_archive_min_succeed_dest: Minimum number of destinations that ARCn must write to before the filled online redo log file can be overwritten. Default - 1.
6. Log_archive_duplex_dest: When you use this parameter you can archive to a maximum of two destinations. The first destination is given by the log_archive_dest parameter and the second destination by this parameter.
7. Log_archive_start: Set this parameter to true so that the archiver starts when you restart the instance. This parameter is not dynamic, so you must restart the instance for it to take effect.
SQL> alter system set log_archive_start=true scope=spfile;
SQL> Shutdown Immediate;
To enable automatic archiving with out shutting down the instance,
SQL> Alter system archive log start;
--Change the log_archive_start to true during next startup.
In 10g this parameter has been deprecated. When archivelog mode is enabled, the archiver starts automatically.
Labels:
Backup
Friday, July 18, 2008
Primary Key with Null Value...Amazing
Who said a primary key column cannot have null values? Here is a way to end up with a primary key column holding null values.
Create a table with null values.
create table tbl_null (a number);
insert into tbl_null values (null);
insert into tbl_null values (null);
insert into tbl_null values (null);
insert into tbl_null values (null);
commit;
select * from tbl_null;
4 rows selected
Now create a table with a primary key column holding null values:
Create table tbl_pri_null ( a primary key) as select * from tbl_null;
Table created
desc tbl_pri_null
Name - A
Null ? - Not Null
Type - Number
Select * from tbl_pri_null;
4 rows selected
All null values...
That's it, folks!
Try it and have fun.
Labels:
Miscellaneous