Data Guard issue - URGENT
Dear all,
Please help me solve a problem with Data Guard. Whenever an archive log is created, it is first applied on the STANDBY system and then on the PRIMARY system, and if the STANDBY connection is lost the PRIMARY hangs.
I am running Data Guard on Oracle 9i on HP-UX.
If I set log_archive_dest_2 to 'SERVICE=STANDBY OPTIONAL ARCH ASYNC NOAFFIRM REGISTER', I get the error ORA-16025: parameter LOG_ARCHIVE_DEST_2 contains repeated or conflicting attributes.
The protection mode is Maximum Performance.
Can anyone help me out with this problem?
The parameters I have used for the primary and standby are given below.
## ADDED PARAMETERS FOR DATAGUARD on PRIMARY
dg_broker_start=true
standby_file_management='AUTO'
log_archive_dest_state_1=ENABLE
log_archive_dest_state_2=ENABLE
log_archive_dest_1='LOCATION=/oracle/C30/saparch/C30arch MANDATORY'
log_archive_dest_2='SERVICE=STANDBY ARCH NOAFFIRM'
remote_archive_enable=true
standby_archive_dest='/oracle/C30/saparch/C30arch'
fal_server='SERVICE=STANDBY'
fal_client='SERVICE=PRIMARY'
## ADDED PARAMETERS FOR DATAGUARD ON STANDBY
dg_broker_start=true
standby_file_management='AUTO'
log_archive_dest_state_2='defer'
log_archive_dest_1='LOCATION=/oracle/C30/saparch/C30arch'
#log_archive_dest_2='SERVICE=PRIMARY OPTIONAL ARCH ASYNC NOAFFIRM REGISTER'
standby_archive_dest='/oracle/C30/saparch/C30arch'
fal_client='SERVICE=STANDBY'
fal_server='SERVICE=PRIMARY'
Please help me out!
Thanks, Kamaljeet, for the reply.
The protection mode is MAXIMUM PERFORMANCE.
Please help me sort out this problem.
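For what it's worth, ORA-16025 here most likely comes from combining ARCH with ASYNC: in 9i the ASYNC attribute belongs to the LGWR transport, so pairing it with the archiver transport conflicts. Two possible non-conflicting variants to try, a sketch only (the service name is assumed to match your TNS alias):

```
# Keep the archiver transport and drop ASYNC:
log_archive_dest_2='SERVICE=STANDBY OPTIONAL ARCH NOAFFIRM REGISTER'
# ...or switch to the log-writer transport if asynchronous shipping is wanted:
log_archive_dest_2='SERVICE=STANDBY OPTIONAL LGWR ASYNC NOAFFIRM REGISTER'
```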
Similar Messages
-
Hi friends,
I have got a new Dell Studio 15 laptop. I am very interested in Oracle Applications, and I bought the laptop in order to learn them. When I tried to install Red Hat Enterprise Linux 4 on my machine, replacing Windows XP, it threw the error message "driver not found" and the installation did not proceed. When I asked the computer dealers, they said I have to go for a newer RHEL version to install Linux on my laptop, since it is a new model.
A few days ago I tried to install Oracle Applications 11.5.10.2 on RHEL 5, but I got a lot of errors, and the Oracle forums advised me to go back to RHEL 4 for installing Oracle Applications 11.5.10.2. Now I am confused and don't know what to do.
My aim is to install Oracle Applications 11.5.10.2 on some newer version other than RHEL 4 that supports my Dell laptop.
I want to know whether installing Oracle Applications 11.5.10.2 is possible on RHEL 8 or RHEL 9, and whether the installation will go through properly.
Thanks and Regards
Parithi.A
Hi,
Please see your other thread.
install issue -urgent
Regards,
Hussein -
SQL Query Group By Issues - Urgent
I currently have an issue writing a PL/SQL report. I can get part of the way to the results I want, but the GROUP BY clause is causing problems: because I have to add more columns to the GROUP BY, the figures are dispersed further. I have tried COALESCE for each of the task types, but I still get the same results. I am getting close to the results I need but not quite there yet. I would really appreciate it if someone could take a look at this for me, as it is an urgent requirement.
The report is based on the tables similar to the following:
TASKS, ORGANISATIONS, POSITIONS
A position is a member of an organisation.
A task has a position assigned to it.
The SQL for the tables and to insert the data that would produce the report is detailed below:
CREATE TABLE TASKS
( TASK_ID NUMBER NOT NULL ENABLE,
TASK_TYPE VARCHAR2 (15 BYTE) NOT NULL ENABLE,
STATUS VARCHAR2 (15 BYTE) NOT NULL ENABLE,
POS_ID NUMBER NOT NULL ENABLE,
CONSTRAINT TASKS_PK PRIMARY KEY (TASK_ID));
CREATE TABLE ORGANISATIONS
( ORG_ID NUMBER NOT NULL ENABLE,
ORG_NAME VARCHAR2 (15 BYTE) NOT NULL ENABLE,
CONSTRAINT ORGANISATIONS_PK PRIMARY KEY (ORG_ID));
CREATE TABLE POSITIONS
( POS_ID NUMBER NOT NULL ENABLE,
POS_NAME VARCHAR2 (25 BYTE) NOT NULL ENABLE,
ORG_ID NUMBER NOT NULL ENABLE,
CONSTRAINT POSITIONS_PK PRIMARY KEY (POS_ID));
INSERT INTO ORGANISATIONS (ORG_ID, ORG_NAME) VALUES (1,'ABC');
INSERT INTO ORGANISATIONS (ORG_ID, ORG_NAME) VALUES (2,'DEF');
INSERT INTO ORGANISATIONS (ORG_ID, ORG_NAME) VALUES (3,'EFG');
INSERT INTO POSITIONS (POS_ID, POS_NAME, ORG_ID) VALUES (1,'ABC-1', 1);
INSERT INTO POSITIONS (POS_ID, POS_NAME, ORG_ID) VALUES (3,'ABC-2', 1);
INSERT INTO POSITIONS (POS_ID, POS_NAME, ORG_ID) VALUES (2,'ABC-3', 1);
INSERT INTO POSITIONS (POS_ID, POS_NAME, ORG_ID) VALUES (5,'DEF-2', 2);
INSERT INTO POSITIONS (POS_ID, POS_NAME, ORG_ID) VALUES (4,'DEF-1', 2);
INSERT INTO POSITIONS (POS_ID, POS_NAME, ORG_ID) VALUES (7,'EFG-1', 3);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (12,'TASK_TYPE_3','LIVE',3);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (11,'TASK_TYPE_2','LIVE',3);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (10,'TASK_TYPE_2','LIVE',2);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (9,'TASK_TYPE_2','LIVE',2);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (8,'TASK_TYPE_1','LIVE',3);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (7,'TASK_TYPE_1','LIVE',3);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (6,'TASK_TYPE_1','LIVE',3);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (5,'TASK_TYPE_1','LIVE',3);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (4,'TASK_TYPE_1','LIVE',2);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (3,'TASK_TYPE_3','LIVE',1);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (2,'TASK_TYPE_1','LIVE',1);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (1,'TASK_TYPE_1','LIVE',1);
INSERT INTO TASKS (TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (13,'TASK_TYPE_3','LIVE',3);
The report should detail the following information, based on the data in the tables (one row per organisation):

Organisation  No. of Positions in Org  Type 1 Tasks  Type 2 Tasks  Type 3 Tasks  Total Tasks
ABC           3                        2             0             1             3
DEF           2                        1             2             0             3
EFG           1                        4             1             2             7

with a total of 6 positions across all of the organisations.
Message was edited by:
Suzy_r_82
Hi,
Apologies, my INSERT statements were incorrect. If you try the data below instead, it should give the output I was expecting.
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (1,'TASK_TYPE_1', 'LIVE',1);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (2,'TASK_TYPE_1', 'LIVE',2);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (3,'TASK_TYPE_1', 'LIVE',5);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (4,'TASK_TYPE_1', 'LIVE',7);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (5,'TASK_TYPE_1', 'LIVE',7);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (6,'TASK_TYPE_1', 'LIVE',7);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (7,'TASK_TYPE_1', 'LIVE',7);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (8,'TASK_TYPE_2', 'LIVE',4);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (9,'TASK_TYPE_2', 'LIVE',5);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (10,'TASK_TYPE_3', 'LIVE',1);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (11,'TASK_TYPE_3', 'LIVE',7);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (12,'TASK_TYPE_2', 'LIVE',7);
INSERT INTO TASKS( TASK_ID, TASK_TYPE, STATUS, POS_ID) VALUES (13,'TASK_TYPE_3', 'LIVE',7);
The results I would like are:

ORG    No. of Pos in Org  Type 1 Tasks  Type 2 Tasks  Type 3 Tasks  Total Tasks
ABC    3                  2             0             1             3
DEF    2                  1             2             0             3
EFG    1                  4             1             2             7
Total  6

The results I get are multiple lines for each organisation; I would like to roll these up so I have one line per organisation.
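One common way to get a single row per organisation is conditional aggregation: group only by the organisation and count each task type inside a CASE expression. A sketch against the tables above (the LEFT JOINs keep organisations and positions that have no tasks):

```sql
SELECT o.org_name                                               AS org,
       COUNT(DISTINCT p.pos_id)                                 AS pos_in_org,
       COUNT(CASE WHEN t.task_type = 'TASK_TYPE_1' THEN 1 END)  AS type1_tasks,
       COUNT(CASE WHEN t.task_type = 'TASK_TYPE_2' THEN 1 END)  AS type2_tasks,
       COUNT(CASE WHEN t.task_type = 'TASK_TYPE_3' THEN 1 END)  AS type3_tasks,
       COUNT(t.task_id)                                         AS total_tasks
FROM   organisations o
       LEFT JOIN positions p ON p.org_id = o.org_id
       LEFT JOIN tasks     t ON t.pos_id = p.pos_id
GROUP BY o.org_name
ORDER BY o.org_name;
```

COUNT over a CASE counts only the matching rows, because the CASE yields NULL for everything else. If a grand-total line is also wanted, GROUP BY ROLLUP(o.org_name) adds it in the same query.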
Hope this helps a bit more, I appreciate the help, let me know if you need more information
Thanks
Suzy -
0P_FYEAR issue - Urgent - Please help
Hi,
We are developing six new BCS queries. We already have about 30 reports.
1. We are using the Fiscal Year variable (0FISCYEAR) - 0P_FYEAR in most of the queries. In the new reports, particularly in one query, the previous year is not working with a -1 offset value, but the same variable is working for the previous year in the other new reports. Please help me rectify this issue.
2. I started trying many options in the QA environment and found that there is an option to choose CONSTANT SELECTION on 0FISCYEAR. So I selected Constant Selection and found that the PREVIOUS YEAR IS WORKING for the query for which it was not working earlier. Now my other issue is that the previous year is not working for all the other queries in which we have used 0P_FYEAR as a variable.
Since I tried this in QA, none of the reports in QA are working for the previous year. Please advise urgently how to correct this.
Thanks & Regards,
Hi,
Have you used a Fiscal Year Variant in your query?
When you use these variables, you should restrict the queries with the correct Fiscal Year Variant.
Jaya -
Hello Everyone,
I am facing an issue with a Data Guard setup. The following is the description:
Purpose:
Set up Data Guard between the production and DR (physical standby) databases, using the Oracle Data Guard solution.
Problem Statement:
When network connectivity is interrupted between the primary database and the physical standby database, the primary database is unable to respond to the application servers. This issue occurs while log shipment is running; if Oracle Data Guard log shipment is stopped, the production database/system works fine even when connectivity between the primary and the physical standby is interrupted.
The standby database is configured in maximum performance mode.
Environment:
Database software on the primary and standby servers: Oracle 10g Enterprise with Partitioning option, 64-bit, version 10.2.0.4.
The primary database server is configured with two Sun M5000 nodes in an OS cluster environment (active/passive mode), Sun Cluster Suite 3.2, OS Solaris 10.
The standby database server is a V890 running Solaris 10.
Multiple Java-based applications connect to the primary database using the JDBC type 4 driver to process requests.
Two independent IPMP groups are configured on the primary database server: one for the application network and a second for the Data Guard network.
The application network is configured with a dedicated switch, and the Data Guard network is connected to a different switch.
A single listener is configured on the physical IP, and the application connects to the database through a virtual IP dynamically assigned by the cluster service.

SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE FROM V$DATABASE;
PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE
MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PRIMARY
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE FROM V$DATABASE;
PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE
MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PHYSICAL STANDBY
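For reference, the parameter listing further down shows log_archive_dest_2 set to 'service=stndby LGWR'. LGWR without an explicit ASYNC keyword ships redo synchronously, which is one known way a primary can stall when the standby link drops, even in maximum performance mode. A possible setting to evaluate, a sketch only (the attribute values here are illustrative assumptions, not a recommendation):

```sql
-- Ship redo asynchronously and bound how long the transport
-- waits on a dead network connection before giving up:
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stndby LGWR ASYNC NET_TIMEOUT=30'
  SCOPE=BOTH;
```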
Below is the alert log snapshot, plus all other necessary information.
Errors in file /oracle/admin/prtp/udump/prtp_rfs_3634.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 00:01:49 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 00:01:49 2010
FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
FAL[server, ARC0]: FAL archive failed, see trace file.
Sat Aug 28 00:01:49 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Sat Aug 28 00:01:49 2010
ORACLE Instance prtp - Archival Error. Archiver continuing.
Sat Aug 28 00:01:49 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 3636
RFS[2]: Not using real application clusters
Sat Aug 28 00:01:49 2010
Errors in file /oracle/admin/prtp/udump/prtp_rfs_3636.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:29:41 2010
Thread 1 advanced to log sequence 24582 (LGWR switch)
Current log# 6 seq# 24582 mem# 0: /oradata1/prtp/redo-log/redo06_1.log
Current log# 6 seq# 24582 mem# 1: /oradata2/prtp/redo-log/redo06_2.log
LGWR: Standby redo logfile selected for thread 1 sequence 24583 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 01:29:42 2010
Thread 1 advanced to log sequence 24583 (LGWR switch)
Current log# 7 seq# 24583 mem# 0: /oradata1/prtp/redo-log/redo07_1.log
Current log# 7 seq# 24583 mem# 1: /oradata2/prtp/redo-log/redo07_2.log
Sat Aug 28 01:44:38 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24584 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 01:44:38 2010
Thread 1 advanced to log sequence 24584 (LGWR switch)
Current log# 8 seq# 24584 mem# 0: /oradata1/prtp/redo-log/redo08_1.log
Current log# 8 seq# 24584 mem# 1: /oradata2/prtp/redo-log/redo08_2.log
Sat Aug 28 01:59:39 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24585 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 01:59:39 2010
Thread 1 advanced to log sequence 24585 (LGWR switch)
Current log# 1 seq# 24585 mem# 0: /oradata1/prtp/redo-log/redo01_1.log
Current log# 1 seq# 24585 mem# 1: /oradata2/prtp/redo-log/redo01_2.log
Sat Aug 28 02:14:38 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24586 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 02:14:38 2010
Thread 1 advanced to log sequence 24586 (LGWR switch)
Current log# 2 seq# 24586 mem# 0: /oradata1/prtp/redo-log/redo02_1.log
Current log# 2 seq# 24586 mem# 1: /oradata2/prtp/redo-log/redo02_2.log
Sat Aug 28 02:29:39 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24587 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 02:29:39 2010
Thread 1 advanced to log sequence 24587 (LGWR switch)
Current log# 3 seq# 24587 mem# 0: /oradata1/prtp/redo-log/redo03_1.log
Current log# 3 seq# 24587 mem# 1: /oradata2/prtp/redo-log/redo03_2.log
Sat Aug 28 02:44:38 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24588 for destination LOG_ARCHIVE_DEST_2
Errors in file /oracle/admin/prtp/udump/prtp_rfs_9611.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:27:56 2010
FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
FAL[server, ARC0]: FAL archive failed, see trace file.
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Sat Aug 28 01:27:56 2010
ORACLE Instance prtp - Archival Error. Archiver continuing.
Sat Aug 28 01:27:56 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[18]: Assigned to RFS process 9613
RFS[18]: Not using real application clusters
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/udump/prtp_rfs_9613.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:27:56 2010
FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
FAL[server, ARC0]: FAL archive failed, see trace file.
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Sat Aug 28 01:27:56 2010
ORACLE Instance prtp - Archival Error. Archiver continuing.
Sat Aug 28 01:29:39 2010
Thread 1 cannot allocate new log, sequence 24581
Private strand flush not complete
Current log# 4 seq# 24580 mem# 0: /oradata1/prtp/redo-log/redo04_1.log
Current log# 4 seq# 24580 mem# 1: /oradata2/prtp/redo-log/redo04_2.log
NAME TYPE VALUE
O7_DICTIONARY_ACCESSIBILITY boolean FALSE
active_instance_count integer
aq_tm_processes integer 1
archive_lag_target integer 900
asm_diskgroups string
asm_diskstring string
asm_power_limit integer 1
audit_file_dest string /oracle/ora10g/rdbms/audit
audit_sys_operations boolean FALSE
audit_syslog_level string
audit_trail string NONE
background_core_dump string partial
background_dump_dest string /oracle/admin/prtp/bdump
backup_tape_io_slaves boolean FALSE
bitmap_merge_area_size integer 1048576
blank_trimming boolean FALSE
buffer_pool_keep string
buffer_pool_recycle string
circuits integer
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string
commit_point_strength integer 1
commit_write string
compatible string 10.2.0
control_file_record_keep_time integer 7
control_files string /oradata1/prtp/control/control
01.ctl, /oradata2/prtp/control
/control02.ctl, /oradata3/prtp
/control/control03.ctl
core_dump_dest string /oracle/admin/prtp/cdump
cpu_count integer 48
create_bitmap_area_size integer 8388608
create_stored_outlines string
cursor_sharing string FORCE
cursor_space_for_time boolean TRUE
db_16k_cache_size big integer 0
db_2k_cache_size big integer 0
db_32k_cache_size big integer 0
db_4k_cache_size big integer 0
db_8k_cache_size big integer 0
db_block_buffers integer 0
db_block_checking string FALSE
db_block_checksum string TRUE
db_block_size integer 8192
db_cache_advice string ON
db_cache_size big integer 6G
db_create_file_dest string
db_create_online_log_dest_1 string
db_create_online_log_dest_2 string
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string
db_domain string
db_file_multiblock_read_count integer 16
db_file_name_convert string
db_files integer 200
db_flashback_retention_target integer 0
db_keep_cache_size big integer 0
db_name string prtp
db_recovery_file_dest string
db_recovery_file_dest_size big integer 0
db_recycle_cache_size big integer 0
db_unique_name string prtp
db_writer_processes integer 6
dbwr_io_slaves integer 0
ddl_wait_for_locks boolean FALSE
dg_broker_config_file1 string /oracle/ora10g/dbs/dr1prtp.dat
dg_broker_config_file2 string /oracle/ora10g/dbs/dr2prtp.dat
dg_broker_start boolean FALSE
disk_asynch_io boolean TRUE
dispatchers string
distributed_lock_timeout integer 60
dml_locks integer 19380
drs_start boolean FALSE
event string 10511 trace name context forev
er, level 2
fal_client string prtp
fal_server string stndby
fast_start_io_target integer 0
fast_start_mttr_target integer 600
fast_start_parallel_rollback string LOW
file_mapping boolean FALSE
fileio_network_adapters string
filesystemio_options string asynch
fixed_date string
gc_files_to_locks string
gcs_server_processes integer 0
global_context_pool_size string
global_names boolean FALSE
hash_area_size integer 131072
hi_shared_memory_address integer 0
hs_autoregister boolean TRUE
ifile file
instance_groups string
instance_name string prtp
instance_number integer 0
instance_type string RDBMS
java_max_sessionspace_size integer 0
java_pool_size big integer 160M
java_soft_sessionspace_limit integer 0
job_queue_processes integer 10
large_pool_size big integer 560M
ldap_directory_access string NONE
license_max_sessions integer 0
license_max_users integer 0
license_sessions_warning integer 0
local_listener string
lock_name_space string
lock_sga boolean FALSE
log_archive_config string
log_archive_dest string
log_archive_dest_1 string location=/archive/archive-log/
MANDATORY
log_archive_dest_10 string
log_archive_dest_2 string service=stndby LGWR
log_archive_dest_3 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_6 string
log_archive_dest_7 string
log_archive_dest_8 string
log_archive_dest_9 string
log_archive_dest_state_1 string enable
log_archive_dest_state_10 string enable
log_archive_dest_state_2 string ENABLE
log_archive_dest_state_3 string enable
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
log_archive_duplex_dest string
log_archive_format string arc_%t_%s_%r.arc
log_archive_local_first boolean TRUE
log_archive_max_processes integer 2
log_archive_min_succeed_dest integer 1
log_archive_start boolean FALSE
log_archive_trace integer 0
log_buffer integer 20971520
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
log_file_name_convert string
logmnr_max_persistent_sessions integer 1
max_commit_propagation_delay integer 0
max_dispatchers integer
max_dump_file_size string UNLIMITED
max_enabled_roles integer 150
max_shared_servers integer
nls_calendar string
nls_comp string
nls_currency string
nls_date_format string
nls_date_language string
nls_dual_currency string
nls_iso_currency string
nls_language string AMERICAN
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string
nls_sort string
nls_territory string AMERICA
nls_time_format string
nls_time_tz_format string
nls_timestamp_format string
nls_timestamp_tz_format string
object_cache_max_size_percent integer 10
object_cache_optimal_size integer 102400
olap_page_pool_size big integer 0
open_cursors integer 4500
open_links integer 30
open_links_per_instance integer 30
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
os_authent_prefix string ops$
os_roles boolean FALSE
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2152
parallel_instance_group string
parallel_max_servers integer 960
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2
pga_aggregate_target big integer 3G
plsql_ccflags string
plsql_code_type string INTERPRETED
plsql_compiler_flags string INTERPRETED, NON_DEBUG
plsql_debug boolean FALSE
NAME TYPE VALUE
plsql_native_library_dir string
plsql_native_library_subdir_count integer 0
plsql_optimize_level integer 2
plsql_v2_compatibility boolean FALSE
plsql_warnings string DISABLE:ALL
pre_11g_enable_capture boolean FALSE
pre_page_sga boolean FALSE
processes integer 4000
query_rewrite_enabled string TRUE
query_rewrite_integrity string enforced
rdbms_server_dn string
read_only_open_delayed boolean FALSE
recovery_parallelism integer 0
recyclebin string OFF
remote_archive_enable string true
remote_dependencies_mode string TIMESTAMP
remote_listener string
remote_login_passwordfile string EXCLUSIVE
remote_os_authent boolean FALSE
remote_os_roles boolean FALSE
replication_dependency_tracking boolean TRUE
resource_limit boolean TRUE
resource_manager_plan string
resumable_timeout integer 0
rollback_segments string
serial_reuse string disable
service_names string prtp
session_cached_cursors integer 0
session_max_open_files integer 10
sessions integer 4405
sga_max_size big integer 20G
sga_target big integer 20G
shadow_core_dump string partial
shared_memory_address integer 0
shared_pool_reserved_size big integer 214748364
shared_pool_size big integer 4G
shared_server_sessions integer
shared_servers integer 0
skip_unusable_indexes boolean TRUE
smtp_out_server string smtp.banglalinkgsm.com
sort_area_retained_size integer 0
sort_area_size integer 65536
spfile string /oradata1/prtp/pfile/spfileprt
p.ora
sql92_security boolean FALSE
sql_trace boolean FALSE
sql_version string NATIVE
sqltune_category string DEFAULT
standby_archive_dest string ?/dbs/arch
standby_file_management string AUTO
star_transformation_enabled string FALSE
statistics_level string TYPICAL
streams_pool_size big integer 0
tape_asynch_io boolean TRUE
thread integer 0
timed_os_statistics integer 0
timed_statistics boolean TRUE
trace_enabled boolean TRUE
tracefile_identifier string
transactions integer 4845
transactions_per_rollback_segment integer 5
undo_management string AUTO
undo_retention integer 15000
undo_tablespace string UNDOTBS
use_indirect_data_buffers boolean FALSE
user_dump_dest string /oracle/admin/prtp/udump
utl_file_dir string
workarea_size_policy string AUTO
-
Next log sequence to archive in Standby Database (RAC Dataguard Issue)
Hi All,
I have just implemented Data Guard on our server. My primary database is RAC-configured, but it is only a single node; the other instance was removed and converted to a development instance. The reason I kept the primary as RAC is that when I implement Data Guard in production, my primary database will be RAC with 7 nodes.
The first test was successful, and I was able to switch over from my primary to the standby. I failed the FAILOVER test.
I restored my primary server and redid the setup.
BTW, my standby DB is a physical standby.
When I try to switch over again and issue ARCHIVE LOG LIST, below is my output.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 38
*Next log sequence to archive 0*
Current log sequence 38
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
MOUNTED PHYSICAL STANDBY
===============================================
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 38
*Next log sequence to archive 38*
Current log sequence 38
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
READ WRITE PRIMARY
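As a cross-check, the received and applied redo can be compared directly from the standard 10g dynamic views rather than from ARCHIVE LOG LIST. A sketch (single redo thread assumed):

```sql
-- On the standby: highest sequence received vs. highest sequence applied
SELECT MAX(sequence#) AS last_received FROM v$archived_log;
SELECT MAX(sequence#) AS last_applied  FROM v$archived_log WHERE applied = 'YES';

-- On the primary: the current online log sequence per thread
SELECT thread#, MAX(sequence#) AS current_seq FROM v$log GROUP BY thread#;
```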
In my first switchover attempt, before I failed the FAILOVER test, I also issued ARCHIVE LOG LIST on both the primary and standby databases, and if I remember right, the next log sequence on both should be identical. Am I right about this?
Thanks in Advance.
Jay A
Or am I just overthinking this?
Is Data Guard only looking at the current and oldest log sequence? -
Hi
Could you please help me with this?
We started an Oracle Data Guard setup for a production database where the primary is located in the Europe region and the standby is located in Latin America.
We configured the Data Guard parameters on the primary and performed a cold backup of the database, which is nearly one terabyte in size. At the same time we kept the standby server ready.
But it took 15 days for the cold backup to reach the standby location and be restored. In those 15 days we manually shipped to the standby all the archive logs generated on the primary.
After the restoration on the standby server, today I created a standby control file on the primary, transferred it to the standby using scp, and then copied it to the standby control file locations. I performed all the steps as per the standby database setup procedure.
I am able to mount the database in standby mode. After that I issued the command RECOVER STANDBY DATABASE; to apply all the logs that were shipped manually during those 15 days. I am getting the error given below:
Physical Standby Database mounted.
Completed: alter database mount standby database
Mon Jun 28 07:53:33 2010
Starting Data Guard Broker (DMON)
INSV started with pid=22, OS id=1246
Mon Jun 28 07:54:35 2010
ALTER DATABASE RECOVER standby database
Mon Jun 28 07:54:35 2010
Media Recovery Start
Managed Standby Recovery not using Real Time Apply
Mon Jun 28 07:54:35 2010
Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
ORA-01157: cannot identify/lock data file 322 - see DBWR trace file
ORA-01110: data file 322: '/oracle/P19/sapdata1/sr3_289/sr3.data289'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 28 07:54:35 2010
Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
ORA-01157: cannot identify/lock data file 323 - see DBWR trace file
ORA-01110: data file 323: '/oracle/P19/sapdata1/sr3_290/sr3.data290'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 28 07:54:35 2010
Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
ORA-01157: cannot identify/lock data file 324 - see DBWR trace file
ORA-01110: data file 324: '/oracle/P19/sapdata2/sr3_291/sr3.data291'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 28 07:54:35 2010
Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
ORA-01157: cannot identify/lock data file 325 - see DBWR trace file
ORA-01110: data file 325: '/oracle/P19/sapdata3/sr3_292/sr3.data292'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
The above datafiles were added on the primary after the cold backup. I am getting this error because of the latest standby control file used on the standby, so I used the commands below on the standby database:
alter database datafile '/oracle/P19/sapdata1/sr3_289/sr3.data289' offline drop;
alter database datafile '/oracle/P19/sapdata1/sr3_290/sr3.data290' offline drop;
alter database datafile '/oracle/P19/sapdata2/sr3_291/sr3.data291' offline drop;
alter database datafile '/oracle/P19/sapdata3/sr3_292/sr3.data292' offline drop;
and then recovery started applying the logs; please find the details from the alert log file given below:
Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401780_624244340.dbf
Mon Jun 28 08:37:22 2010
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Jun 28 08:37:22 2010
ALTER DATABASE RECOVER CONTINUE DEFAULT
Mon Jun 28 08:37:22 2010
Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401781_624244340.dbf
Mon Jun 28 08:38:02 2010
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Jun 28 08:38:02 2010
ALTER DATABASE RECOVER CONTINUE DEFAULT
Mon Jun 28 08:38:02 2010
Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401782_624244340.dbf
Mon Jun 28 08:38:32 2010
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Jun 28 08:38:32 2010
ALTER DATABASE RECOVER CONTINUE DEFAULT
Mon Jun 28 08:38:32 2010
Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401783_624244340.dbf
Mon Jun 28 08:39:05 2010
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Jun 28 08:39:05 2010
ALTER DATABASE RECOVER CONTINUE DEFAULT
After applying the manually shipped logs completed, I started the Data Guard setup; logs are now shipping and applying perfectly.
Media Recovery Waiting for thread 1 sequence 407421
Fetching gap sequence in thread 1, gap sequence 407421-407506
Thu Jul 1 00:26:41 2010
RFS[2]: Archived Log: '/oracle/P19/oraarch/P19arch1_407529_624244340.dbf'
Thu Jul 1 00:26:49 2010
RFS[3]: Archived Log: '/oracle/P19/oraarch/P19arch1_407530_624244340.dbf'
Thu Jul 1 00:27:17 2010
RFS[1]: Archived Log: '/oracle/P19/oraarch/P19arch1_407531_624244340.dbf'
Thu Jul 1 00:28:41 2010
RFS[2]: Archived Log: '/oracle/P19/oraarch/P19arch1_407532_624244340.dbf'
Thu Jul 1 00:29:14 2010
RFS[3]: Archived Log: '/oracle/P19/oraarch/P19arch1_407421_624244340.dbf'
Thu Jul 1 00:29:19 2010
Media Recovery Log /oracle/P19/oraarch/P19arch1_407421_624244340.dbf
Thu Jul 1 00:29:24 2010
RFS[1]: Archived Log: '/oracle/P19/oraarch/P19arch1_407422_624244340.dbf'
Thu Jul 1 00:29:51 2010
Media Recovery Log /oracle/P19/oraarch/P19arch1_407422_624244340.dbf
But the above files are showing a status of RECOVER. Could you please tell me how to go ahead with this?
NAME                                         STATUS
/oracle/P19/sapdata1/sr3_289/sr3.data289     RECOVER
/oracle/P19/sapdata1/sr3_290/sr3.data290     RECOVER
/oracle/P19/sapdata2/sr3_291/sr3.data291     RECOVER
/oracle/P19/sapdata3/sr3_292/sr3.data292     RECOVER
Can I recover these files with the standby in mount mode? Is there any other solution? All archive logs have been applied, and log shipping and apply are ongoing.
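To see exactly which files the database still considers in need of media recovery, a couple of dictionary queries can help (a generic sketch against the standard V$ views; run them on the standby while it is mounted):

```sql
-- Files the database believes still need media recovery
SELECT file#, online, error, change#
FROM   v$recover_file;

-- Datafile headers that are still fuzzy (not yet consistent)
SELECT file#, status, fuzzy, checkpoint_change#
FROM   v$datafile_header
WHERE  fuzzy = 'YES';
```

If V$RECOVER_FILE returns no rows and no headers are fuzzy, the RECOVER status shown above is usually just the normal state of datafiles on a mounted standby that is still applying redo.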
Thank you. Try this out:
1. On the primary server, issue this command:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
2. Go to your udump directory and look for the trace file generated by this command.
3. This file contains the CREATE CONTROLFILE statement, in two versions: one with RESETLOGS and one with NORESETLOGS. Use the RESETLOGS version of the CREATE CONTROLFILE statement. Copy and paste that statement into a file; let it be c.sql.
4. Open c.sql in a text editor and change the database name (for example, from ica to prod), as shown in the example below:
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
'/u01/oracle/ica/redo01_02.log'),
GROUP 2 ('/u01/oracle/ica/redo02_01.log',
'/u01/oracle/ica/redo02_02.log'),
GROUP 3 ('/u01/oracle/ica/redo03_01.log',
'/u01/oracle/ica/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
'/u01/oracle/ica/rbs01.dbs' SIZE 5M,
'/u01/oracle/ica/users01.dbs' SIZE 5M,
'/u01/oracle/ica/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
5. Start the database in NOMOUNT state:
SQL> STARTUP NOMOUNT;
6. Execute the script to create the new control file:
SQL> @/u01/oracle/c.sql
7. Open the database:
SQL> ALTER DATABASE OPEN RESETLOGS;
IMPORTANT NOTE: Before implementing this suggested solution, try it out on your own laptop or PC if possible.
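After the RESETLOGS open in step 7, a quick sanity check is worthwhile (a generic sketch; the columns assume a 9i-or-later dictionary):

```sql
-- Confirm the database is open and a new incarnation was created
SELECT name, open_mode, resetlogs_change#
FROM   v$database;

-- No datafile should still be waiting for recovery
SELECT file#, name, status
FROM   v$datafile
WHERE  status NOT IN ('ONLINE', 'SYSTEM');
```

The second query should return no rows on a healthy, freshly opened database.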
Edited by: Suhail Faraaz on Jun 30, 2010 3:00 PM
Edited by: Suhail Faraaz on Jun 30, 2010 3:03 PM -
FCS2 "Something Happened" message at load, other upgrade issues URGENT
Final Cut Studio has been giving me a "Something Happened" message every time I load it, ever since I upgraded it. What is this? The upgrade to FCP Studio 2 seemed to go well at the time; no weird messages or anything.
I did the upgrade at the end of last year and I've been using the software OK, with three exceptions:
1.- The FCP "Something Happened" message always appears when the OS loads.
2.- The new Compressor won't load at all, ever. (I installed ALL the upgrades from the web and ALSO downloaded the Compressor upgrade from Nov 07 and installed it "by hand", but to no avail.)
3.- FCP cannot render exotic effects (for example, the ones that work in Motion). Final Cut Pro asks me to upgrade my graphics card from the old 60 MB NVIDIA GeForce that came with the first system to a larger one.
I just bought it through the net and it should be here in a couple of weeks. (I am in Nicaragua, Central America).
I don't know what to do about Compressor (I need it urgently but don't know how to uninstall and reinstall ONLY Compressor).
My machine specs are
Model POWER MAC 7.2
CPU Power PC 970 (2.2)
2 CPU Units
Level 2 cache 512 KB
Memory 4 GB
Bus speed 1 GHZ
Start ROM version 5.1.5 F0
Serial G8429CVLPXD
Dear Jerry,
I'm very afraid of doing a full reinstall, for a couple of reasons:
1.- We don't have a reliable, FCP-savvy Mac techie in Nicaragua.
2.- I am not 100% sure I have ALL of the disks for all of the previous versions of the software.
3.- There are a lot of other things on that drive that I am afraid of losing.
Do you have any other ideas?
Do you think the issues will be resolved when the new video card arrives?
Is there any way of installing (in the meantime) only the previous version of Compressor?
Thanks for any help-
Carlos
Thanks -
[JPF/NetUI]NetUI tree issue---Urgent
Gurus, :)
I hit an issue using the NetUI tree in my project (on WebLogic 10gR3). When I keep clicking the NetUI tree,
I get a message in the JSP: PageFlow /Controller.jpf: Could not find exception handler method <b>handleException</b>. But I actually do have an exception handler for SocketException called handleSocketException, and there is also an exception handler method called handleException. Weird. ;)
In the Log, there was an exception thrown:
com.cup.service.jpf.ServiceController - Service Management SocketError
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:326)
at weblogic.servlet.internal.CharsetChunkOutput.print(CharsetChunkOutput.java:233)
at weblogic.servlet.internal.ChunkOutputWrapper.print(ChunkOutputWrapper.java:153)
at weblogic.servlet.jsp.JspWriterImpl.print(JspWriterImpl.java:176)
at org.apache.beehive.netui.tags.AbstractSimpleTag.write(AbstractSimpleTag.java:152)
at org.apache.beehive.netui.tags.tree.Tree.doTag(Tree.java:936)
at jsp_servlet._com._cup._soa._catweb._service._jpf.__index._jspService(__index.java:333)
at weblogic.servlet.jsp.JspBase.service(JspBase.java:34)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.runPage(PageFlowPageFilter.java:385)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.doFilter(PageFlowPageFilter.java:284)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:503)
at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:251)
at org.apache.beehive.netui.pageflow.internal.DefaultForwardRedirectHandler.forward(DefaultForwardRedirectHandler.java:128)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.doForward(PageFlowRequestProcessor.java:1801)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processForwardConfig(PageFlowRequestProcessor.java:1674)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:241)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:556)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:853)
at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:631)
at org.apache.beehive.netui.pageflow.PageFlowActionServlet.process(PageFlowActionServlet.java:158)
at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at com.cup.filter.LoginFilter.doFilter(LoginFilter.java:50)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
The class marked in bold is a filter in which I check whether the NetUI tree is empty; the logic at LoginFilter.java:50 is only: chain.doFilter(request, response);
It seems that a socket exception is thrown for no reason. Is this a NetUI issue?
It's really urgent; could someone help me with this? ;)
Thanks a lot!
Edited by: Xu Wen on Sep 3, 2009 8:26 PM
Edited by: Xu Wen on Sep 3, 2009 10:23 PM
Edited by: Xu Wen on Sep 3, 2009 11:17 PM
deepak,
Sorry for replying to your post so late; I am just back from my vacation. The entire log is as follows:
2009-09-16 14:30:45,546 ERROR [[ACTIVE] ExecuteThread: '21' for queue: 'weblogic.kernel.Default (self-tuning)'] com.cup.soa.catweb.service.jpf.ServiceController - Service Management SocketError
java.net.SocketException: Software caused connection abort: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:326)
at weblogic.servlet.internal.CharsetChunkOutput.print(CharsetChunkOutput.java:233)
at weblogic.servlet.internal.ChunkOutputWrapper.print(ChunkOutputWrapper.java:153)
at weblogic.servlet.jsp.JspWriterImpl.print(JspWriterImpl.java:176)
at org.apache.beehive.netui.tags.AbstractSimpleTag.write(AbstractSimpleTag.java:152)
at org.apache.beehive.netui.tags.tree.Tree.doTag(Tree.java:936)
at jsp_servlet._com._cup._soa._catweb._service._jpf.__index._jspService(__index.java:338)
at weblogic.servlet.jsp.JspBase.service(JspBase.java:34)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.runPage(PageFlowPageFilter.java:385)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.doFilter(PageFlowPageFilter.java:284)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:503)
at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:251)
at org.apache.beehive.netui.pageflow.internal.DefaultForwardRedirectHandler.forward(DefaultForwardRedirectHandler.java:128)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.doForward(PageFlowRequestProcessor.java:1801)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processForwardConfig(PageFlowRequestProcessor.java:1674)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:241)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:556)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:853)
at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:631)
at org.apache.beehive.netui.pageflow.PageFlowActionServlet.process(PageFlowActionServlet.java:158)
at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at com.cup.soa.catweb.filter.LoginFilter.doFilter(LoginFilter.java:50)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
2009-09-16 14:30:48,000 ERROR [[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] com.cup.soa.catweb.service.jpf.ServiceController - Service Management SocketError
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:326)
at weblogic.servlet.internal.CharsetChunkOutput.print(CharsetChunkOutput.java:233)
at weblogic.servlet.internal.ChunkOutputWrapper.print(ChunkOutputWrapper.java:153)
at weblogic.servlet.jsp.JspWriterImpl.print(JspWriterImpl.java:176)
at org.apache.beehive.netui.tags.AbstractSimpleTag.write(AbstractSimpleTag.java:152)
at org.apache.beehive.netui.tags.tree.Tree.doTag(Tree.java:936)
at jsp_servlet._com._cup._soa._catweb._service._jpf.__index._jspService(__index.java:338)
at weblogic.servlet.jsp.JspBase.service(JspBase.java:34)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.runPage(PageFlowPageFilter.java:385)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.doFilter(PageFlowPageFilter.java:284)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:503)
at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:251)
at org.apache.beehive.netui.pageflow.internal.DefaultForwardRedirectHandler.forward(DefaultForwardRedirectHandler.java:128)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.doForward(PageFlowRequestProcessor.java:1801)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processForwardConfig(PageFlowRequestProcessor.java:1674)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:241)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:556)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:853)
at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:631)
at org.apache.beehive.netui.pageflow.PageFlowActionServlet.process(PageFlowActionServlet.java:158)
at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at com.cup.soa.catweb.filter.LoginFilter.doFilter(LoginFilter.java:50)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
2009-09-16 14:30:48,921 ERROR [[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'] com.cup.soa.catweb.service.jpf.ServiceController - Service Management SocketError
java.net.SocketException: Software caused connection abort: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:326)
at weblogic.servlet.internal.CharsetChunkOutput.print(CharsetChunkOutput.java:233)
at weblogic.servlet.internal.ChunkOutputWrapper.print(ChunkOutputWrapper.java:153)
at weblogic.servlet.jsp.JspWriterImpl.print(JspWriterImpl.java:176)
at org.apache.beehive.netui.tags.AbstractSimpleTag.write(AbstractSimpleTag.java:152)
at org.apache.beehive.netui.tags.tree.Tree.doTag(Tree.java:936)
at jsp_servlet._com._cup._soa._catweb._service._jpf.__index._jspService(__index.java:338)
at weblogic.servlet.jsp.JspBase.service(JspBase.java:34)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.runPage(PageFlowPageFilter.java:385)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.doFilter(PageFlowPageFilter.java:284)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:503)
at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:251)
at org.apache.beehive.netui.pageflow.internal.DefaultForwardRedirectHandler.forward(DefaultForwardRedirectHandler.java:128)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.doForward(PageFlowRequestProcessor.java:1801)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processForwardConfig(PageFlowRequestProcessor.java:1674)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:241)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:556)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:853)
at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:631)
at org.apache.beehive.netui.pageflow.PageFlowActionServlet.process(PageFlowActionServlet.java:158)
at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at com.cup.soa.catweb.filter.LoginFilter.doFilter(LoginFilter.java:50)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
2009-09-16 14:30:49,062 ERROR [[ACTIVE] ExecuteThread: '19' for queue: 'weblogic.kernel.Default (self-tuning)'] com.cup.soa.catweb.service.jpf.ServiceController - Service Management SocketError
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:326)
at weblogic.servlet.internal.CharsetChunkOutput.print(CharsetChunkOutput.java:233)
at weblogic.servlet.internal.ChunkOutputWrapper.print(ChunkOutputWrapper.java:153)
at weblogic.servlet.jsp.JspWriterImpl.print(JspWriterImpl.java:176)
at org.apache.beehive.netui.tags.AbstractSimpleTag.write(AbstractSimpleTag.java:152)
at org.apache.beehive.netui.tags.tree.Tree.doTag(Tree.java:936)
at jsp_servlet._com._cup._soa._catweb._service._jpf.__index._jspService(__index.java:338)
at weblogic.servlet.jsp.JspBase.service(JspBase.java:34)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.runPage(PageFlowPageFilter.java:385)
at org.apache.beehive.netui.pageflow.PageFlowPageFilter.doFilter(PageFlowPageFilter.java:284)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:503)
at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:251)
at org.apache.beehive.netui.pageflow.internal.DefaultForwardRedirectHandler.forward(DefaultForwardRedirectHandler.java:128)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.doForward(PageFlowRequestProcessor.java:1801)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processForwardConfig(PageFlowRequestProcessor.java:1674)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:241)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:556)
at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:853)
at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:631)
at org.apache.beehive.netui.pageflow.PageFlowActionServlet.process(PageFlowActionServlet.java:158)
at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at com.cup.soa.catweb.filter.LoginFilter.doFilter(LoginFilter.java:50)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
The message still says handleException is not found. (Weird.)
Any idea what's wrong?
Thanks a lot
Regards,
Wen
Edited by: Xu Wen on Sep 21, 2009 2:00 AM -
hi ,
I have the following setup.
* I have a network shared drive on which I installed the Oracle 9.0.1.1 Enterprise DB. The shared drive is on a different machine, but in the same NT network domain. The installation went smoothly without any issues. But when I tried starting the listener, I got the following error.
* The machine the shared folder is on has Linux installed.
Failed to start service, error 65.
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error
32-bit Windows Error: 65: Unknown error
Can someone help me with this?
This is a very urgent need.
thanks in advance,
Sriram
If you are using Windows 2000, you may want to check the account being used to start the listener.
You can do this from the Services window: double-click the TNS service and check the Log On tab to see which account it starts the service as; you may need to specify one using "This Account".
Also, check the SQLNET.ORA file to see if there is a line that reads:
sqlnet.authentication_services=(NTS)
If so, try commenting it out.
This solved my issue using Oracle 8.1.7. -
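For reference, the sqlnet.ora change described above looks like this (an illustrative fragment; the file normally lives under ORACLE_HOME/network/admin):

```
# Original line - NTS (Windows native) OS authentication is tried first:
# sqlnet.authentication_services = (NTS)
#
# Commenting it out, as shown, makes connections fall back to ordinary
# database (password) authentication instead of OS authentication.
```

Restart the listener after editing the file so the change takes effect.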
Pro Tools/Logic bouncing issue URGENT
I have an urgent Logic/Pro Tools question. I've asked several people and searched online, but I can't find the problem. For some reason, my bounced Logic files come out as 5 mono tracks in Pro Tools, even though they're bounced as interleaved in Logic. I'm not sure what the issue is. I have a deadline for this track (Jan 5th), so help is REALLY appreciated. Thanks in advance.
Quality shouldn't differ between interleaved and separate right/left mono files.
If you want Logic to work like Pro Tools and be more compatible with PT, go into Preferences > Audio and uncheck "Universal Track Mode". Logic will then use separate files for left/right.
Most DAWs can import interleaved files because they also use them as the main recording option; Pro Tools has always stuck with separate left/right files.
pancenter- -
Master with two details issue --- urgent
Hi,
This is an urgent issue. Please help solve this.
The page has one sub-tab layout region and, beneath it, an advancedTable-in-advancedTable region. The sub-tab region shows the master data (VO1), whereas the advancedTable-in-advancedTable is used to display the detail (VO2) and its detail (VO3) data.
The relationship between the 3 VOs in the AM is as follows:
vo1
 |-- connected to vo2 via vl1
      |-- connected to vo3 via vl2
How do I set CHILD_VIEW_ATTRIBUTE_NAME and VIEW_LINK_NAME on each of these regions so that when VO1 is queried, the data for VO2 and VO3 is fetched automatically and displayed?
Or is there any other way to make this happen with the region layout described above?
Appreciate your help. Thanks in advance.
Mitiksha
Edited by: Mitiksha on Sep 17, 2009 2:00 PMYou could use "New"/"BC" and create a ViewLink with the same name as the FK Assoc. If you edit the PackageModule, you should be able to see your FK link in there.
-
hi,
I am using Oracle 10g on windows 2003 enterprise edition.
I had 4 GB of RAM on my system, and it has now been extended to 8 GB, but Oracle is still using only 4 GB as before; the usage does not increase. I checked this by trying to set the SGA to 6 GB and then to 5 GB, but both attempts failed.
The memory is detected by both the BIOS and the OS.
What should I do now to let Oracle use this additional memory?
Please help me resolve this issue. It's urgent.
Thanks in advance.
Just a comment on the kind of thread title you have set: URGENT!!! (uppercase and exclamation marks) won't give your thread more priority. All threads are treated the same and answered in no particular order, depending on poster availability. This is not a paid service; we are all here on a voluntary basis. If you want an issue answered urgently, you should open a Service Request using your Oracle Support contract, or hire a paid consulting service.
~ Madrid -
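On the technical side of the question above: a 32-bit Oracle/Windows combination limits each process's usable address space regardless of installed RAM, so an SGA of 5-6 GB typically cannot be allocated without moving to 64-bit Windows and 64-bit Oracle, or using indirect data buffers (a sketch of the parameters worth checking; treat the indirect-buffer route as platform-specific and verify it against your exact release):

```sql
-- How the instance is currently sized
SHOW PARAMETER sga_max_size
SHOW PARAMETER sga_target

-- On 32-bit Windows, only the buffer cache can use memory above the
-- process address-space limit, and only when this is enabled
SHOW PARAMETER use_indirect_data_buffers
```

If the instance is 64-bit, these limits do not apply and the SGA increase should be investigated elsewhere (OS quotas, parameter conflicts).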
Remedy issue URGENT!! Please help
Hi All,
I am not able to log in to the Remedy client, as I'm getting the error below:
*‘RPC: Miscellaneous tli error - System error (Connection refused)’*
We tried to restart the Remedy process, but that didn't work; we are getting the SQL error shown below.
Action Request System initializing.
Starting Remedy AR System server
Also, I have checked the network/firewall; there are no issues on that end.
Please, can anyone help me resolve this issue?
Error while restarting the remedy process
Action Request System(R) Server Version 4.05.02 patch 1025
Copyright (c) 1991 - 2001 Remedy Corporation. All Rights reserved.
Copyright (c) 1989 - 2001 Verity, Inc. All rights reserved.
Reproduction or disassembly of embodied programs and databases prohibited.
Verity (r) and TOPIC (r) are registered trademarks of Verity, Inc.
390600 : SQL database is not available -- will retry connection (ARNOTE 590)
Notification System Server Version 4.05.02
Copyright (c) 1994 - 2001 Remedy Corporation. All Rights reserved.
110902110300- 24733: Initializing process 24733
110902110300- 24733: DISPLAY_CONFIGURATION===================================
110902110300- 24733: EXTERNAL START (-X) FALSE
110902110300- 24733: RESTART (-r) FALSE
110902110300- 24733: Check-Users: (-c) FALSE
110902110300- 24733: Debug-Level: (-d) 21
110902110300- 24733: Disable-Shared-Memory: FALSE
110902110300- 24733: Hold-Time: (-h) 2592000 seconds = 30.0 days
110902110300- 24733: Max-Users: (-u) 1000
110902110300- 24733: Notifier-Outbound-Port: 0
110902110300- 24733: Notifier-Specific-Port: 0
110902110300- 24733: Private-RPC-Socket: 0
110902110300- 24733: Private-Specific-Port: 0
110902110300- 24733: Register-With-Portmapper: FALSE
110902110300- 24733: Send-Timeout: (-t) 7
110902110300- 24733: TCD-Specific-Port: 32768
110902110300- 24733: ========================================================
110902110300- 24733: AR System server: remedy01
110902110300- 24733: AR ServerNameWithDomain: remedy01.ndc.lucent.com
110902110300- 24733: HostnameWithDomain: remedy01.ndc.lucent.com
110902110300- 24733: StartServerDaemons
Notification Send Server Version 4.05.02
Copyright (c) 1991 - 2001 Remedy Corporation. All Rights reserved.
110902110300- 24736: Initializing process 24736
110902110300- 24736: ProcessFiles: called with loginFd(0)=9 and notificationFd(1)=10
110902110300- 24736: ProcessFiles: start Notifications at offset 0.
110902110300- 24736: ProcessFiles: reopening nfyfile (new notificationFd=10)
110902110302- 24733: StartServerDaemons daemon 0 started
Action Request System(R) Mail Daemon Version 4.05.02
Copyright (c) 1991 - 2001 Remedy Corporation. All Rights reserved.
MailFileName: /usr/mail/fxbrophy
Action Request System initialization is complete.
390600 : Cannot initialize contact with SQL database (ARERR 551)
Stop server
390600 : AR System server terminated -- fatal error encountered (ARNOTE 21)
Action Request System(R) Server Version 4.05.02 patch 1025
Copyright (c) 1991 - 2001 Remedy Corporation. All Rights reserved.
Copyright (c) 1989 - 2001 Verity, Inc. All rights reserved.
Reproduction or disassembly of embodied programs and databases prohibited.
Verity (r) and TOPIC (r) are registered trademarks of Verity, Inc.
390600 : SQL database is not available -- will retry connection (ARNOTE 590)
Thanks,
Sajith

Why are you posting this on the Oracle forums? Shouldn't you be talking to Remedy or BMC, or whoever provides support for the product?
Also this (or other public forums) is generally not the place for urgent production issues. There are paid support channels for such issues.
Anyway, a hint:
I would probably dig into the very vague "SQL database is not available" message. Does the system log any details? What clues do the actual/underlying error messages provide? Is the database in question actually up and reachable from the client (i.e. app server) host?
Edited by: orafad on Sep 26, 2011 2:20 PM -
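One quick way to act on orafad's last hint (is the database reachable from the app server host?) is a plain TCP probe of the listener port. A minimal sketch in Python; the hostname and port in the usage comment are hypothetical placeholders, not values from the thread:

```python
import socket

def db_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles name resolution and connect in one call
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers refused connections, timeouts, and DNS failures
        return False

# Hypothetical listener address; substitute the real DB host and port.
# db_reachable("dbhost.example.com", 1521)
```

A False result points at the network, firewall, or listener; a True result pushes suspicion back toward credentials or the application's database driver configuration.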
Hi experts,
I am facing a peculiar issue with Validation Check, in Process Control.
I have done the following:
1. set the Validation Top account, in my AppSettings
2. Enable SupportsProcessManagement - in appropriate scenario
3. Manage Submission Phases is set to * in the Phase 1 column; my Validation account's submission group is set to 1.
4. I have 4 independent Validation hierarchies, each one, having a submission group as 1, and are set in ValidationAccount, ValidationAccount2, ValidationAccount3, ValidationAccount4 respectively.
When I plot the Validation Account, in a data grid, I can see values, coming in the grid.
However, when I start Process Management, the Pass / Fail indicator shows Pass even when there are values in the Validation accounts.
When I click the Validation Pass / Fail icon, the Validation Report inside shows blank (not even a zero value) throughout the grid. As a result, the entity can be promoted to the next review level even though Validation has not been cleared.
Is there a setting I am missing that would cause Validation errors not to be shown, even though I can see Validation values in the data grid?
thanks
Never mind.. I found out the issue.. It was related to Rules and Metadata setting.
Edited by: Indraneel Mazumder on Aug 2, 2011 10:50 PM

I'm also facing the same issue. May I know what changes you made in the metadata? A quick reply is appreciated.