Log area is full
Hi SAPDB Gurus,
Our quality-box liveCache log area has become full, and when I try to run a log backup I get the message below.
-24755,ERR_SESSIONLIMIT: No session of type 'User' available
-24994,ERR_RTE: Runtime environment error
2,utility session is already in use
I increased the MAXUSERTASKS parameter dynamically, but I am still unable to take a log backup.
I also tried shutting down the liveCache, but it will not stop.
Can somebody help me here? My quality APO box is currently down because of this.
My APO version is SCM 5.0, liveCache 7.6.04, and the OS is Windows Server 2003.
Thanks,
Chetan
Chetan Ramachandra wrote:
> Hi SAPDB Gurus,
>
> Our quality-box liveCache log area has become full, and when I try to run a log backup I get the message below.
>
> -24755,ERR_SESSIONLIMIT: No session of type 'User' available
> -24994,ERR_RTE: Runtime environment error
> 2,utility session is already in use
Hmm... Did you start another backup before?
> I increased the MAXUSERTASKS parameter dynamically, but I am still unable to take a log backup.
You may modify the parameter value for MAXUSERTASKS while the instance is up and running, but this would only become active with the next restart of the liveCache. This parameter is in fact not dynamically changeable.
Besides, since your UTILITY session is already in use, changing this parameter value does not do a thing about your problem.
There's always just one utility session possible, regardless of the MAXUSERTASKS value!
> I also tried shutting down the liveCache, but it will not stop.
Did you try db_stop ?
> Can somebody help me here? My quality APO box is currently down because of this.
>
> My APO version is SCM 5.0, liveCache 7.6.04, and the OS is Windows Server 2003.
Well, you have to perform a log backup to resolve this issue.
For that you have to have performed a complete data backup before.
Alternatively you may switch the logmode to OVERWRITE if this liveCache instance does not need to be recovered at all.
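The sequence described here (define media, take a complete data backup, then a log backup) can also be scripted with dbmcli once the instance accepts a utility session again. The following is a sketch only: the database name LC1, the credentials, and the media names/paths are placeholders, not your actual configuration:

```shell
# Sketch: placeholder DB name, credentials, media names and paths
dbmcli -d LC1 -u control,secret <<'EOF'
medium_put BACKData /backup/LC1_data FILE DATA
medium_put BACKLog /backup/LC1_log FILE LOG
util_connect
backup_start BACKData DATA
backup_start BACKLog LOG
util_release
EOF
```

If even dbmcli cannot obtain a utility session, the stuck session has to end first; taking the instance offline and online again (db_stop / db_start) releases it.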
regards,
Lars
Similar Messages
-
When I tried to use tcode sgen to compile 59nnn programs (sap_aba, sap_basis), the log area got 100% full and the job stopped. The last time I checked prior to this was more than 40K programs were compiled.
Should have split it up into two batches:
Pi_basis, sap_aba
Sap_basis, sap_bw
Because pi_basis and sap_bw have far fewer objects to generate.
Set autolog on. The good thing is the system knows to pick up where it left off. Smart MiniSAP.
Edited by: Passing By on Nov 17, 2011 4:14 AM -
Hi All,
We installed an SRM system with DB2 9.1 and are using logarchmeth1. The log directory /db2/SID/log_dir keeps filling up.
We are taking full online backup including logs every week for the Development server.
Can I delete the logs in log_dir, given that we are taking a full online backup including logs? We don't have a tape device to back up the archive logs.
Please suggest..
Regards,
Sreekanth
Hi All,
Please find the DB2 config parameters below; please suggest whether anything needs to be changed.
H:\db2\db2srd\db2_software\BIN>db2 get db cfg for SRD
Database Configuration for Database SRD
Database configuration release level = 0x0b00
Database release level = 0x0b00
Database territory = en_US
Database code page = 1208
Database code set = UTF-8
Database country/region code = 1
Database collating sequence = IDENTITY_16BIT
Alternate collating sequence (ALT_COLLATE) =
Database page size = 16384
Dynamic SQL Query management (DYN_QUERY_MGMT) = DISABLE
Discovery support for this database (DISCOVER_DB) = ENABLE
Restrict access = NO
Default query optimization class (DFT_QUERYOPT) = 5
Degree of parallelism (DFT_DEGREE) = 1
Continue upon arithmetic exceptions (DFT_SQLMATHWARN) = NO
Default refresh age (DFT_REFRESH_AGE) = 0
Default maintained table types for opt (DFT_MTTB_TYPES) = SYSTEM
Number of frequent values retained (NUM_FREQVALUES) = 10
Number of quantiles retained (NUM_QUANTILES) = 20
Backup pending = NO
Database is consistent = NO
Rollforward pending = NO
Restore pending = NO
Multi-page file allocation enabled = YES
Log retain for recovery status = NO
User exit for logging status = YES
Self tuning memory (SELF_TUNING_MEM) = ON
Size of database shared memory (4KB) (DATABASE_MEMORY) = 2795520
Database memory threshold (DB_MEM_THRESH) = 10
Max storage for lock list (4KB) (LOCKLIST) = AUTOMATIC
Percent. of lock lists per application (MAXLOCKS) = AUTOMATIC
Package cache size (4KB) (PCKCACHESZ) = AUTOMATIC
Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = AUTOMATIC
Sort list heap (4KB) (SORTHEAP) = AUTOMATIC
Database heap (4KB) (DBHEAP) = 25000
Catalog cache size (4KB) (CATALOGCACHE_SZ) = 2560
Log buffer size (4KB) (LOGBUFSZ) = 1024
Utilities heap size (4KB) (UTIL_HEAP_SZ) = 10000
Buffer pool size (pages) (BUFFPAGE) = 10000
Max size of appl. group mem set (4KB) (APPGROUP_MEM_SZ) = 128000
Percent of mem for appl. group heap (GROUPHEAP_RATIO) = 25
Max appl. control heap size (4KB) (APP_CTL_HEAP_SZ) = 1600
SQL statement heap (4KB) (STMTHEAP) = 5120
Default application heap (4KB) (APPLHEAPSZ) = 4096
Statistics heap size (4KB) (STAT_HEAP_SZ) = 15000
Interval for checking deadlock (ms) (DLCHKTIME) = 10000
Lock timeout (sec) (LOCKTIMEOUT) = 3600
Changed pages threshold (CHNGPGS_THRESH) = 40
Number of asynchronous page cleaners (NUM_IOCLEANERS) = AUTOMATIC
Number of I/O servers (NUM_IOSERVERS) = AUTOMATIC
Index sort flag (INDEXSORT) = YES
Sequential detect flag (SEQDETECT) = YES
Default prefetch size (pages) (DFT_PREFETCH_SZ) = 32
Track modified pages (TRACKMOD) = ON
Default number of containers = 1
Default tablespace extentsize (pages) (DFT_EXTENT_SZ) = 2
Max number of active applications (MAXAPPLS) = AUTOMATIC
Average number of active applications (AVG_APPLS) = AUTOMATIC
Max DB files open per application (MAXFILOP) = 1950
Log file size (4KB) (LOGFILSIZ) = 16380
Number of primary log files (LOGPRIMARY) = 20
Number of secondary log files (LOGSECOND) = 40
Changed path to log files (NEWLOGPATH) =
Path to log files = H:\db2\SRD\log_dir\NODE0000\
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
First active log file = S0002249.LOG
Block log on disk full (BLK_LOG_DSK_FUL) = YES
Percent max primary log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0
Group commit count (MINCOMMIT) = 1
Percent log file reclaimed before soft chckpt (SOFTMAX) = 300
Log retain for recovery enabled (LOGRETAIN) = OFF
User exit for logging enabled (USEREXIT) = OFF
HADR database role = STANDARD
HADR local host name (HADR_LOCAL_HOST) =
HADR local service name (HADR_LOCAL_SVC) =
HADR remote host name (HADR_REMOTE_HOST) =
HADR remote service name (HADR_REMOTE_SVC) =
HADR instance name of remote server (HADR_REMOTE_INST) =
HADR timeout value (HADR_TIMEOUT) = 120
HADR log write synchronization mode (HADR_SYNCMODE) = NEARSYNC
First log archive method (LOGARCHMETH1) = DISK:H:\db2\SRD\log_dir\NODE0000\
Options for logarchmeth1 (LOGARCHOPT1) =
Second log archive method (LOGARCHMETH2) = OFF
Options for logarchmeth2 (LOGARCHOPT2) =
Failover log archive path (FAILARCHPATH) =
Number of log archive retries on error (NUMARCHRETRY) = 5
Log archive retry Delay (secs) (ARCHRETRYDELAY) = 20
Vendor options (VENDOROPT) =
Auto restart enabled (AUTORESTART) = ON
Index re-creation time and redo index build (INDEXREC) = SYSTEM (RESTART)
Log pages during index build (LOGINDEXBUILD) = OFF
Default number of loadrec sessions (DFT_LOADREC_SES) = 1
Number of database backups to retain (NUM_DB_BACKUPS) = 12
Recovery history retention (days) (REC_HIS_RETENTN) = 60
TSM management class (TSM_MGMTCLASS) =
TSM node name (TSM_NODENAME) =
TSM owner (TSM_OWNER) =
TSM password (TSM_PASSWORD) =
Automatic maintenance (AUTO_MAINT) = ON
Automatic database backup (AUTO_DB_BACKUP) = OFF
Automatic table maintenance (AUTO_TBL_MAINT) = ON
Automatic runstats (AUTO_RUNSTATS) = ON
Automatic statistics profiling (AUTO_STATS_PROF) = OFF
Automatic profile updates (AUTO_PROF_UPD) = OFF
Automatic reorganization (AUTO_REORG) = OFF
I have also cut and pasted the old logs to another location.
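For reference, the active log space implied by the configuration above can be computed from LOGFILSIZ, LOGPRIMARY, and LOGSECOND (the values are taken from the db cfg output; the arithmetic is standard DB2 log sizing):

```shell
# Active log space = LOGFILSIZ (in 4 KB pages) * (LOGPRIMARY + LOGSECOND),
# using the values from the db cfg listing above
logfilsiz=16380
logprimary=20
logsecond=40
total_kb=$(( logfilsiz * 4 * (logprimary + logsecond) ))
echo "${total_kb} KB"   # 3931200 KB, roughly 3.75 GB
```

Two cautions worth noting: files under the archive target may be removed once they are contained in a backup, but the active log files in the log path itself must never be deleted by hand. Also, in the listing above the LOGARCHMETH1 disk target points inside log_dir itself, which would explain why that directory keeps growing; archiving to a separate filesystem avoids this.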
Regards,
Sreekanth -
liveCache problem with log area
In LiveCache monitoring I can see the message: "Log area full - back up log!"
I ran report RSLVCBACKUP interactively with the parameter "Complete Data Backup". After running it, an error is displayed:
"Failed to determine media (Database connection: LCA)
Message no. LVC505 ".
Am I taking the right measures for a log area backup? And if so, how can I avoid error LVC505?
Sounds like you did not configure a backup medium for your liveCache.
To get rid of the log-full situation you may just use DBMGUI now and take a full backup and activate autolog backup afterwards.
Once this is done you may check what's wrong with your report usage.
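If you prefer the command line over DBMGUI, a backup medium can also be defined with dbmcli. A sketch with placeholder credentials, medium name, and path (adjust for your LCA instance):

```shell
# Placeholder medium name and path; defines a DATA medium for complete backups
dbmcli -d LCA -u control,secret medium_put FullBackup /backup/LCA_full FILE DATA
```

Once a DATA medium exists, the full backup and subsequent autolog activation can proceed as described above.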
regards,
Lars -
How to determine which archive logs are needed in flashback.
Hi,
Let's assume I have archive logs 1,2,3,4, then a "backup database plus archivelogs" in RMAN, and then archive logs 5+6. If I want to flashback my database to a point immediately after the backup, how do I determine which archive logs are needed?
I would assume I'd only need archive logs 5 and/or 6 since I did a full backup plus archivelogs and the database would have been checkpointed at that time. I'd also assume archive logs 1,2,3,4 would be obsolete as they would have been flushed to the datafiles in the checkpoint.
Are my assumptions correct? If not what queries can I run to determine what files are needed for a flashback using the latest checkpointed datafiles?
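For the "what queries can I run" part: the flashback window itself is exposed in a standard Oracle view, which shows the lowest SCN that FLASHBACK DATABASE can reach with the currently retained flashback logs (a sketch, run as SYSDBA):

```shell
sqlplus -s / as sysdba <<'SQL'
select oldest_flashback_scn, oldest_flashback_time
  from v$flashback_database_log;
SQL
```

Any redo between that SCN and the flashback target must also still be available, which is why errors such as ORA-38754/ORA-38761 appear when a required archive log has been deleted.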
Thanks.
Thanks for the explanation; let me be more specific about my problem.
I am trying to do a flashback on a failed primary database. The only reason I want a flashback is that Data Guard uses the flashback command to try to synchronize the failed database. Specifically, Data Guard is trying to run:
FLASHBACK DATABASE TO SCN 865984
But it fails, if I run it manually then I get:
SQL> FLASHBACK DATABASE TO SCN 865984;
FLASHBACK DATABASE TO SCN 865984
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38761: redo log sequence 5 in thread 1, incarnation 3 could not be accessed
Looking at the last checkpoint I see:
CHECKPOINT_CHANGE#
865857
Also looking at the archive logs:
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# RESETLOGS_CHANGE# RESETLOGS
25 766838550 1 1 863888 10-NOV-11 863892 863888 10-NOV-11
26 766838867 1 2 863892 10-NOV-11 864133 863888 10-NOV-11
27 766839225 1 3 864133 10-NOV-11 864289 863888 10-NOV-11
28 766839340 1 4 864289 10-NOV-11 864336 863888 10-NOV-11
29 766840698 1 5 864336 10-NOV-11 865640 863888 10-NOV-11
30 766841128 1 6 865640 10-NOV-11 865833 863888 10-NOV-11
31 766841168 1 7 865833 10-NOV-11 865857 863888 10-NOV-11
How can I determine which archive logs the flashback command needs? I deleted all archive logs with an SCN lower than the checkpoint number; I can restore them from backup, but I am trying to figure out how to query what a flashback requires. Or does this come back to the point that flashback has nothing to do with datafile backups or checkpoints? -
Are t-logs truncated in DBs participating in High Availability? SQL AlwaysOn 2012
Hi,
How are transaction logs cleared for DBs participating in SQL AlwaysOn? I have a DB participating in HA. To use AlwaysOn, the DB recovery model has to be set to FULL, and with that recovery model my transaction logs will grow continuously.
How can t-logs be cleared for DBs participating in an AG? Is there any recommendation?
Thanks
Manish
Hello,
Have you scheduled a log backup on the primary? The secondary replica might still be applying transaction log records of this database to the corresponding secondary database.
What does the query below return?
select name, log_reuse_wait_desc from sys.databases where name = 'db_name'
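If the reuse wait comes back as LOG_BACKUP, a routine transaction log backup on the primary is what allows truncation. A sketch with placeholder server, database, and path names:

```shell
sqlcmd -S primary_server -Q "BACKUP LOG [db_name] TO DISK = N'R:\backup\db_name.trn'"
```

In an availability group, log backups may also be taken on a secondary replica, depending on the backup preference configured for the AG.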
What if the log directory is full?
I've just switched from Oracle to DB2.
I have a simple question: what if the log directory is full before the logs are archived?
Forgive me if this is too basic.
Thanks!
Hello, this depends on the setting of the blk_log_dsk_ful database configuration parameter.
[http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.admin.config.doc/doc/r0005787.html]
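The current value can be checked, and the SAP-recommended setting applied, from the command line (SID is a placeholder for your database name):

```shell
db2 get db cfg for SID | grep -i BLK_LOG_DSK_FUL
db2 update db cfg for SID using BLK_LOG_DSK_FUL YES
```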
SAP standard setting is YES -
Logs are not getting applied in the standby database
Hello,
I have created a physical standby database, but the logs are not being applied.
following is an extract of the standby alert log
Wed Sep 05 07:53:59 2012
Media Recovery Log /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Error opening /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Attempting refetch
Media Recovery Waiting for thread 1 sequence 37638
Fetching gap sequence in thread 1, gap sequence 37638-37643
Wed Sep 05 07:53:59 2012
RFS[46]: Assigned to RFS process 3081
RFS[46]: Allowing overwrite of partial archivelog for thread 1 sequence 37638
RFS[46]: Opened log for thread 1 sequence *37638* dbid 1723205832 branch 765704228
Wed Sep 05 07:55:34 2012
RFS[42]: Possible network disconnect with primary database
However, the archived files are getting copied to the standby server.
I tried registering and recovering the logs, but that also failed.
Follows some of the information,
Primary
Oracle 11gR2 EE
SQL> select max(sequence#) from v$log where archived='YES';
MAX(SEQUENCE#)
37668
SQL> select DEST_NAME, RECOVERY_MODE,DESTINATION,ARCHIVED_SEQ#,APPLIED_SEQ#, SYNCHRONIZATION_STATUS,SYNCHRONIZED,GAP_STATUS from v$archive_dest_status where DEST_NAME = 'LOG_ARCHIVE_DEST_3';
DEST_NAME RECOVERY_MODE DESTINATION ARCHIVED_SEQ# APPLIED_SEQ# SYNCHRONIZATION_STATUS SYNCHRONIZED GAP_STATUS
LOG_ARCHIVE_DEST_3 MANAGED REAL TIME APPLY ABC 37356 0 CHECK CONFIGURATION NO RESOLVABLE GAP
Standby
Oracle 11gR2 EE
SQL> select max(sequence#) from v$archived_log where applied='YES';
MAX(SEQUENCE#)
37637
SQL> select * from v$archive_gap;
no rows selected
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
READ ONLY WITH APPLY PHYSICAL STANDBY
Please help me to troubleshoot this and get the standby in sync..
Thanks a lot.
The results are as follows:
SQL> select process, status, group#, thread#, sequence# from v$managed_standby order by process, group#, thread#, sequence#;
PROCESS STATUS GROUP# THREAD# SEQUENCE#
ARCH CLOSING 1 1 37644
ARCH CLOSING 1 1 37659
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
MRP0 WAIT_FOR_GAP N/A 1 37638
RFS IDLE N/A 0 0
RFS IDLE N/A 0 0
RFS IDLE N/A 0 0
RFS RECEIVING N/A 1 37638
RFS RECEIVING N/A 1 37639
RFS RECEIVING N/A 1 37640
RFS RECEIVING N/A 1 37641
RFS RECEIVING N/A 1 37642
RFS RECEIVING N/A 1 37655
RFS RECEIVING N/A 1 37673
RFS RECEIVING N/A 1 37675
42 rows selected.
SQL>
SQL> select name,value, time_computed from v$dataguard_stats;
NAME VALUE TIME_COMPUTED
transport lag +00 02:44:33 09/05/2012 09:25:58
apply lag +00 03:14:30 09/05/2012 09:25:58
apply finish time +00 00:01:09.974 09/05/2012 09:25:58
estimated startup time 12 09/05/2012 09:25:58
SQL> select timestamp , facility, dest_id, message_num, error_code, message from v$dataguard_status order by timestamp
TIMESTAMP FACILITY DEST_ID MESSAGE_NUM ERROR_CODE MESSAGE
05-SEP-12 Remote File Server 0 60 0 RFS[13]: Assigned to RFS process 2792
05-SEP-12 Remote File Server 0 59 0 RFS[12]: Assigned to RFS process 2790
05-SEP-12 Log Apply Services 0 61 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 62 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 63 0 Managed Standby Recovery Canceled
05-SEP-12 Log Apply Services 0 64 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 65 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 66 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 67 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 68 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 69 0 RFS[5]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 70 0 RFS[6]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 71 0 RFS[14]: Assigned to RFS process 2829
05-SEP-12 Remote File Server 0 72 0 RFS[15]: Assigned to RFS process 2831
05-SEP-12 Network Services 0 73 0 RFS[9]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 74 0 RFS[16]: Assigned to RFS process 2833
05-SEP-12 Network Services 0 75 0 RFS[1]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 76 0 RFS[17]: Assigned to RFS process 2837
05-SEP-12 Network Services 0 77 0 RFS[3]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 78 0 RFS[2]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 79 0 RFS[7]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 80 0 RFS[18]: Assigned to RFS process 2848
05-SEP-12 Network Services 0 81 0 RFS[16]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 82 0 RFS[19]: Assigned to RFS process 2886
05-SEP-12 Network Services 0 83 0 RFS[19]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 84 0 RFS[20]: Assigned to RFS process 2894
05-SEP-12 Log Apply Services 0 85 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 86 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 87 0 Managed Standby Recovery Canceled
05-SEP-12 Remote File Server 0 89 0 RFS[22]: Assigned to RFS process 2900
05-SEP-12 Remote File Server 0 88 0 RFS[21]: Assigned to RFS process 2898
05-SEP-12 Remote File Server 0 90 0 RFS[23]: Assigned to RFS process 2902
05-SEP-12 Remote File Server 0 91 0 Primary database is in MAXIMUM PERFORMANCE mode
05-SEP-12 Remote File Server 0 92 0 RFS[24]: Assigned to RFS process 2904
05-SEP-12 Remote File Server 0 93 0 RFS[25]: Assigned to RFS process 2906
05-SEP-12 Log Apply Services 0 94 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 95 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 96 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 97 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Log Transport Services 0 98 0 ARCa: Beginning to archive thread 1 sequence 37644 (7911979302-7912040568)
05-SEP-12 Log Transport Services 0 99 0 ARCa: Completed archiving thread 1 sequence 37644 (0-0)
05-SEP-12 Network Services 0 100 0 RFS[8]: Possible network disconnect with primary database
05-SEP-12 Log Apply Services 0 101 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 102 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 103 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 104 0 Managed Standby Recovery Canceled
05-SEP-12 Network Services 0 105 0 RFS[20]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 106 0 RFS[26]: Assigned to RFS process 2930
05-SEP-12 Network Services 0 107 0 RFS[24]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 108 0 RFS[27]: Assigned to RFS process 2938
05-SEP-12 Network Services 0 109 0 RFS[14]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 110 0 RFS[28]: Assigned to RFS process 2942
05-SEP-12 Network Services 0 111 0 RFS[15]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 112 0 RFS[29]: Assigned to RFS process 2986
05-SEP-12 Network Services 0 113 0 RFS[17]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 114 0 RFS[30]: Assigned to RFS process 2988
05-SEP-12 Log Apply Services 0 115 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 116 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 117 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 118 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 119 0 RFS[18]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 120 0 RFS[31]: Assigned to RFS process 3003
05-SEP-12 Network Services 0 121 0 RFS[26]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 122 0 RFS[32]: Assigned to RFS process 3005
05-SEP-12 Network Services 0 123 0 RFS[27]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 124 0 RFS[33]: Assigned to RFS process 3009
05-SEP-12 Remote File Server 0 125 0 RFS[34]: Assigned to RFS process 3012
05-SEP-12 Log Apply Services 0 127 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 126 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 128 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 129 0 Managed Standby Recovery Canceled
05-SEP-12 Network Services 0 130 0 RFS[32]: Possible network disconnect with primary database
05-SEP-12 Log Apply Services 0 131 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 132 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 133 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 134 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 135 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 136 0 RFS[33]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 137 0 RFS[35]: Assigned to RFS process 3033
05-SEP-12 Log Apply Services 0 138 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 139 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 140 0 Managed Standby Recovery Canceled
05-SEP-12 Remote File Server 0 141 0 RFS[36]: Assigned to RFS process 3047
05-SEP-12 Log Apply Services 0 142 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 143 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 144 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 145 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 146 0 RFS[35]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 147 0 RFS[37]: Assigned to RFS process 3061
05-SEP-12 Network Services 0 148 0 RFS[36]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 149 0 RFS[38]: Assigned to RFS process 3063
05-SEP-12 Remote File Server 0 150 0 RFS[39]: Assigned to RFS process 3065
05-SEP-12 Network Services 0 151 0 RFS[25]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 152 0 RFS[21]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 153 0 Archivelog record exists, but no file is found
05-SEP-12 Remote File Server 0 154 0 RFS[40]: Assigned to RFS process 3067
05-SEP-12 Network Services 0 155 0 RFS[37]: Possible network disconnect with primary database -
Hi,
I want to fetch the list of users who have Full Control access to a SharePoint list, using the client object model with .NET.
Please let me know if any property for the user object or any other way to get it.
Thanks in advance.
Here is the complete code I created some years ago. It lists all groups and users; you can add a check in the permissions loop to test whether a permission equals Full Control.
private void GetData(object obj)
{
    MyArgs args = obj as MyArgs;
    try
    {
        if (args == null)
            return; // called without parameters or invalid type
        using (ClientContext clientContext = new ClientContext(args.URL))
        {
            NetworkCredential credentials = new NetworkCredential(args.UserName, args.Password, args.Domain);
            clientContext.Credentials = credentials;
            RoleAssignmentCollection roles = clientContext.Web.RoleAssignments;
            ListViewItem lvi;
            ListViewItem.ListViewSubItem lvsi;
            ListViewItem lvigroup;
            ListViewItem.ListViewSubItem lvsigroup;
            clientContext.Load(roles);
            clientContext.ExecuteQuery();
            foreach (RoleAssignment orole in roles)
            {
                clientContext.Load(orole.Member);
                clientContext.ExecuteQuery();
                // first column: the principal's login name
                lvi = new ListViewItem();
                lvi.Text = orole.Member.LoginName;
                // second column: the principal type (group or user)
                lvsi = new ListViewItem.ListViewSubItem();
                lvsi.Text = orole.Member.PrincipalType.ToString();
                lvi.SubItems.Add(lvsi);
                if (orole.Member.PrincipalType.ToString() == "SharePointGroup")
                {
                    lvigroup = new ListViewItem();
                    lvigroup.Text = orole.Member.LoginName;
                    DoUpdate1(lvigroup);
                    // expand the group and list its members
                    Group group = clientContext.Web.SiteGroups.GetById(orole.Member.Id);
                    UserCollection collUser = group.Users;
                    clientContext.Load(collUser);
                    clientContext.ExecuteQuery();
                    foreach (User oUser in collUser)
                    {
                        lvigroup = new ListViewItem();
                        lvigroup.Text = "";
                        lvsigroup = new ListViewItem.ListViewSubItem();
                        lvsigroup.Text = oUser.LoginName;
                        lvigroup.SubItems.Add(lvsigroup);
                        DoUpdate1(lvigroup);
                    }
                }
                // permission levels bound to this role assignment,
                // joined into one comma-separated string
                RoleDefinitionBindingCollection roleDefsbindings = orole.RoleDefinitionBindings;
                clientContext.Load(roleDefsbindings);
                clientContext.ExecuteQuery();
                lvsi = new ListViewItem.ListViewSubItem();
                string permissionsstr = string.Empty;
                for (int i = 0; i < roleDefsbindings.Count; i++)
                {
                    permissionsstr += roleDefsbindings[i].Name;
                    if (i < roleDefsbindings.Count - 1)
                        permissionsstr += ", ";
                }
                lvsi.Text = permissionsstr;
                lvi.SubItems.Add(lvsi);
                DoUpdate2(lvi);
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        DoUpdate3();
    }
}
Kind Regards, John Naguib Technical Consultant/Architect MCITP, MCPD, MCTS, MCT, TOGAF 9 Foundation -
Restored standby database from primary; now no logs are shipped
Hi
We recently had a major network/SAN issue and had to restore our standby database from a backup of the primary. To do this, we restored the backup onto the standby, created a standby controlfile on the primary, copied it to the controlfile locations, started the database in standby recovery, and applied/registered the logs manually to bring it back up to date.
However, no new logs are being shipped across from the primary.
Have we missed a step somewhere?
One thing we've noticed is that there is no RFS process running on the standby:
SQL> SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
PROCESS CLIENT_P SEQUENCE# STATUS
ARCH ARCH 0 CONNECTED
ARCH ARCH 0 CONNECTED
MRP0 N/A 100057 WAIT_FOR_LOG
How do we start this? Or will it only show if the arc1 process on the primary is sending files?
The arc1 process shows at OS level on the primary, but I'm wondering if it's faulty somehow.
There are NO errors in the alert logs on the primary or the standby. There's not even the normal FAL gap-sequence type of error; the standby just says 'waiting for log' with a number from ages ago. It's like the primary isn't even talking to the standby. The listener is up and running OK, though...
What else can we check/do?
If we manually copy across files and do an 'alter database register' then they are applied to the standby without issue; there's just no automatic log shipping going on...
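One quick check worth running: ask the primary directly what it thinks of the standby destination (standard views; dest_id = 2 assumes the standby is log_archive_dest_2):

```shell
sqlplus -s / as sysdba <<'SQL'
select dest_id, status, error
  from v$archive_dest
 where dest_id = 2;
SQL
```

A VALID status with no error means transport should be working; anything else, such as an ERROR status with an ORA- code, usually points at the password file or the service definition.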
Thanks
Ross
Hi all,
Many thanks for all the responses.
The database is 10.2.0.2.0, on AIX 6.
I believe the password files are ok; we've had issues previously and this is always flagged in the alert log on the primary - not the case here.
Not set to DEFER on primary; log_archive_dest_2 is set to service="STBY_PHP" optional delay=720 reopen=30 and log_archive_dest_state_2 is set to ENABLE.
I ran those troubleshooting scripts, info from standby:
SQL> @troubleshoot
NAME DISPLAY_VALUE
db_file_name_convert
db_name PHP
db_unique_name PHP
dg_broker_config_file1 /oracle/PHP/102_64/dbs/dr1PHP.dat
dg_broker_config_file2 /oracle/PHP/102_64/dbs/dr2PHP.dat
dg_broker_start FALSE
fal_client STBY_PHP
fal_server PHP
local_listener
log_archive_config
log_archive_dest_2 service=STBY_PHP optional delay=30 reopen=30
log_archive_dest_state_2 DEFER
log_archive_max_processes 2
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest /oracle/PHP/oraarch/PHParch
standby_file_management AUTO
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE
PHP PHP MAXIMUM PERFORM PHYSICAL S MOUNTED
ANCE TANDBY
THREAD# MAX(SEQUENCE#)
1 100149
PROCESS STATUS THREAD# SEQUENCE#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
MRP0 WAIT_FOR_LOG 1 100150
NAME VALUE UNIT TIME_COMPUTED
apply finish time day(2) to second(1) interval
apply lag day(2) to second(0) interval
estimated startup time 8 second
standby has been open N
transport lag day(2) to second(0) interval
NAME Size MB Used MB
0 0
On the primary, the script has frozen! How long should it take? It got as far as this:
SQL> @troubleshoot
NAME DISPLAY_VALUE
db_file_name_convert
db_name PHP
db_unique_name PHP
dg_broker_config_file1 /oracle/PHP/102_64/dbs/dr1PHP.dat
dg_broker_config_file2 /oracle/PHP/102_64/dbs/dr2PHP.dat
dg_broker_start FALSE
fal_client STBY_R1P
fal_server R1P
local_listener
log_archive_config
log_archive_dest_2 service="STBY_PHP" optional delay=720 reopen=30
log_archive_dest_state_2 ENABLE
log_archive_max_processes 2
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest /oracle/PHP/oraarch/PHParch
standby_file_management AUTO
NAME  DB_UNIQUE_NAME  PROTECTION_MODE      DATABASE_ROLE  OPEN_MODE   SWITCHOVER_STATUS
PHP   PHP             MAXIMUM PERFORMANCE  PRIMARY        READ WRITE  SESSIONS ACTIVE
THREAD# MAX(SEQUENCE#)
1 100206
NOW - before you say it - :) - yes, I'm aware that fal_client = STBY_R1P and fal_server = R1P are incorrect - they should point at PHP - but it looks like it's always been this way! Well, at least for the last 4 years, during which it worked fine: I found an old SP file and it still has R1P set in there...?!?
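For anyone hitting the same symptom, a hedged sketch of the checks I'd run next (view names are standard Oracle 10.2; the dest_id value 2 is taken from the parameter listing above):

```sql
-- On the primary: does destination 2 report a transport error?
SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;

-- On the standby: is there a redo gap the FAL client should be resolving?
SELECT * FROM v$archive_gap;

-- If the destination was stuck, bouncing it sometimes restarts shipping:
ALTER SYSTEM SET log_archive_dest_state_2 = 'DEFER'  SCOPE = MEMORY;
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE = MEMORY;
```

If `v$archive_dest` shows an ORA- error in the ERROR column, that error (not the FAL settings) is usually the real reason shipping stopped.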
Any ideas?
Ross -
Why is the Log Area size much smaller than the log volume
I have been following up on an Early Watch report that has been generated for our production SCM 5.0 system running liveCache 7.6.02 Build 14. The alert says "The LOG volumes size in your system is too small. Recommendation: Configure LOG Volumes to at least 2 GB". There are two interesting things about this.
1) I have spent the last couple of days reading all the OSS Notes and MaxDB documentation I could find, and this does not seem to be documented as a recommendation anywhere. Does this seem like a realistic recommendation without taking the level of change activity into account?
2) There is a single log volume allocated with size 2,097,160 KB. In production, LC10 and DBMGUI report this to be the correct size under volume details, but list the total log area size as only 1,706,328 KB (81% of the volume size). We have a non-production environment with exactly the same size log volume, but it reports that the log area size is 2,032,008 KB (97% of the volume). What leads to these different amounts of wasted space, and is there any way of getting the database to start using it?
Thanks,
Mark
Hi Natalia,
I did read Note 869267, several times. It does not answer my questions, which is why I posted here.
DBMGUI version = 7.6.00.25
DBMCLI commands for PL1 (Production)
> xinstinfo PL1
IndepData : /sapdb/data
IndepPrograms : /sapdb/programs
InstallationPath : /sapdb/PL1/db
Kernelversion : KERNEL 7.6.02 BUILD 014-123-152-175
Rundirectory : /sapdb/data/wrk/PL1
> dbmcli -d PL1 -u control,control
dbmcli on PL1>db_state
OK
State
ONLINE
dbmcli on PL1>info log
OK
END
Name | Value
Log Mirrored = NO
Log Writing = ON
Log Automatic Overwrite = OFF
Max. Size (KB) = 1706328
Backup Segment Size (KB) = 699048
Used Size (KB) = 104640
Used Size (%) = 6
Not Saved (KB) = 104640
Not Saved (%) = 6
Log Since Last Data Backup (KB) = 0
Savepoints = 5210
Checkpoints = 0
Physical Reads = 2469115
Physical Writes = 15655616
Queue Size (KB) = 48000
Queue Overflows = 646
Group Commits = 98205
Waits for Logwriter = 10957511
Max. Waits = 10
Average Waits = 0
OMS Log Used Pages = 0
OMS Min. Free Pages = 0
dbmcli on PL1>param_getvolsall
OK
LOG_MIRRORED NO
MAXLOGVOLUMES 2
MAXDATAVOLUMES 14
LOG_VOLUME_NAME_001 262145 F /sapdb/PL1/saplog/DISKL001
DATA_VOLUME_NAME_0001 524289 F /sapdb/PL1/sapdata/DISKD0001
DATA_VOLUME_NAME_0002 524289 F /sapdb/PL1/sapdata/DISKD0002
DATA_VOLUME_NAME_0003 524289 F /sapdb/PL1/sapdata/DISKD0003
DATA_VOLUME_NAME_0004 524289 F /sapdb/PL1/sapdata/DISKD0004
DATA_VOLUME_NAME_0005 524289 F /sapdb/PL1/sapdata/DISKD0005
DATA_VOLUME_NAME_0006 524289 F /sapdb/PL1/sapdata/DISKD0006
DATA_VOLUME_NAME_0007 524289 F /sapdb/PL1/sapdata/DISKD0007
DATA_VOLUME_NAME_0008 524289 F /sapdb/PL1/sapdata/DISKD0008
DATA_VOLUME_NAME_0009 1048577 F /sapdb/PL1/sapdata/DISKD0009
DATA_VOLUME_NAME_0010 1048577 F /sapdb/PL1/sapdata/DISKD0010
DATA_VOLUME_NAME_0011 1048577 F /sapdb/PL1/sapdata/DISKD0011
DATA_VOLUME_NAME_0012 1048577 F /sapdb/PL1/sapdata/DISKD0012
dbmcli on PL1>param_directget MAXCPU
OK
MAXCPU 12
dbmcli on PL1>param_directget MAX_LOG_QUEUE_COUNT
OK
MAX_LOG_QUEUE_COUNT 0
dbmcli on PL1>param_directget LOG_IO_QUEUE
OK
LOG_IO_QUEUE 6000
> xinstinfo SL1
IndepData : /sapdb/data
IndepPrograms : /sapdb/programs
InstallationPath : /sapdb/SL1/db
Kernelversion : KERNEL 7.6.02 BUILD 014-123-152-175
Rundirectory : /sapdb/data/wrk/SL1
dbmcli on SL1>db_state
OK
State
ONLINE
dbmcli on SL1>info log
OK
END
Name | Value
Log Mirrored = NO
Log Writing = ON
Log Automatic Overwrite = OFF
Max. Size (KB) = 2032008
Backup Segment Size (KB) = 699048
Used Size (KB) = 3824
Used Size (%) = 0
Not Saved (KB) = 3824
Not Saved (%) = 0
Log Since Last Data Backup (KB) = 0
Savepoints = 1256
Checkpoints = 0
Physical Reads = 2178269
Physical Writes = 4969914
Queue Size (KB) = 16000
Queue Overflows = 21201
Group Commits = 643
Waits for Logwriter = 751336
Max. Waits = 4
Average Waits = 0
OMS Log Used Pages = 0
OMS Min. Free Pages = 0
dbmcli on SL1>param_getvolsall
OK
LOG_MIRRORED NO
MAXLOGVOLUMES 2
MAXDATAVOLUMES 10
LOG_VOLUME_NAME_001 262145 F /sapdb/SL1/saplog/DISKL001
DATA_VOLUME_NAME_0001 262145 F /sapdb/SL1/sapdata1/DISKD0001
DATA_VOLUME_NAME_0002 262145 F /sapdb/SL1/sapdata2/DISKD0002
DATA_VOLUME_NAME_0003 1048577 F /sapdb/SL1/sapdata3/DISKD0003
DATA_VOLUME_NAME_0004 1048577 F /sapdb/SL1/sapdata4/DISKD0004
DATA_VOLUME_NAME_0005 783501 F /sapdb/SL1/sapdata1/DISKD0005
DATA_VOLUME_NAME_0006 783501 F /sapdb/SL1/sapdata2/DISKD0006
dbmcli on SL1>param_directget MAXCPU
OK
MAXCPU 4
dbmcli on SL1>param_directget MAX_LOG_QUEUE_COUNT
OK
MAX_LOG_QUEUE_COUNT 0
dbmcli on SL1>param_directget LOG_IO_QUEUE
OK
LOG_IO_QUEUE 2000
Thanks for the explanation of the reserved space for the Log Queue pages. That does explain the difference between the two. I think we probably have our log segment size set too large. As you can see, we do get occasional log queue overflows. Do you suggest we increase the size of our log IO queue and allocate more log volume space to compensate?
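If we do go that route, a hedged sketch of the parameter change (command names as in the MaxDB 7.6 DBM reference; the new value 8000 is only an example, and the change would take effect only after a restart of the instance):

```
> dbmcli -d PL1 -u control,<password>
dbmcli on PL1> param_directput LOG_IO_QUEUE 8000
OK
dbmcli on PL1> db_offline
dbmcli on PL1> db_online
```

Note that a larger LOG_IO_QUEUE also grows the reserved (non-usable) part of the log volume, so the volume itself may need to be enlarged to keep the same usable log area.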
select * from SYSINFO.LOGSTATISTICS (on PL1)
1706328;334176;19;334176;19;1879;20305192;64109066;7806480;182151514;12;48000
DBMGUI Log Area Usage
Total Size: 2048.01 MB
Free Log Area: 1330.38 MB
Used Log Area: 335.96 MB
Unsaved Log Area: 335.96 MB
Log since Last Data Backup: 0.00 MB
Thanks,
Mark -
Can I use ink cartridges from my old printer in my new one? They are nearly full.
I got a new HP printer that uses the same ink as my old one. The cartridges are practically full. Can I use them in the new one, or do I have to toss them? Thanks
Hi rag742,
What is the model no. of both printers?
Although I am an HP employee, I am speaking for myself and not for HP.
--Say "Thanks" by clicking the Kudos Star in the post that helped you.
--Please mark the post that solves your problem as "Accepted Solution" -
Hi All
In message monitoring (RWB) in the Adapter Engine I am getting the following error:
SOAP: response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: Server was unable to process request. ---> The event log file is full
Can anyone suggest what the problem might be?
Thanks
Jayaraman
Edited by: Jayaraman P on May 20, 2010 4:27 PM
Edited by: Jayaraman P on May 20, 2010 4:28 PM
Jayaraman P wrote:
> SOAP: response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: Server was unable to process request. ---> The event log file is full
This is caused by a problem on the web service (WS) server - most likely a Windows server, given the event-log message.
You can ask the WS team to look into this issue; it is not a PI problem. -
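On the WS host itself, if the Windows event log really is full, the WS team can either raise its size/retention in Event Viewer or back it up and clear it. A hedged sketch (the log name "Application" is an assumption; `wevtutil` is the standard tool on Windows Server 2008 and later, while older hosts would use the Event Viewer UI):

```
rem Back up and then clear the Application event log (log name assumed):
wevtutil epl Application C:\backup\Application.evtx
wevtutil cl Application
```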
Archive logs are missing in hot backup
Hi All,
We are using the following commands to take a hot backup of our database. The hot backup is run by the "backup" user on a Linux system.
=======================
rman target / nocatalog <<EOF
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$backup_dir/$date/%F';
run {
allocate channel oem_backup_disk1 type disk format '$backup_dir/$date/%U';
#--Switch archive logs for all threads
sql 'alter system archive log current';
backup as COMPRESSED BACKUPSET database include current controlfile;
#--Switch archive logs for all threads
sql 'alter system archive log current';
#--Backup archive logs and delete what we've backed up
backup as COMPRESSED BACKUPSET archivelog all not backed up delete all input;
release channel oem_backup_disk1;
allocate channel for maintenance type disk;
delete noprompt obsolete device type disk;
release channel;
exit
EOF
=======================
After each of the two "sql 'alter system archive log current';" commands, I see the following lines in the alert log (so they appear twice per run). Because of this, not all online logs are getting archived (we are missing 2 logs per day), and the backup is unusable when restoring. I am worried about this. Is there any way to avoid this situation?
=======================
Errors in file /u01/oracle/admin/rac/udump/rac1_ora_3546.trc:
ORA-19504: failed to create file "+DATA/rac/1_32309_632680691.dbf"
ORA-17502: ksfdcre:4 Failed to create file +DATA/rac/1_32309_632680691.dbf
ORA-15055: unable to connect to ASM instance
ORA-01031: insufficient privileges
=======================
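The ORA-15055/ORA-01031 pair usually means the OS user running the script cannot authenticate to the ASM instance, so the log switch triggered from that session fails to create the archived log in ASM. A hedged sketch of the usual fix (group name assumed - on 10.2 the OSDBA group that ASM accepts is typically `dba`; run as root and verify against your installation):

```
# Add the 'backup' OS user to the dba group so its local (bequeath)
# connection to the ASM instance is authorized:
usermod -a -G dba backup
# Verify the new group membership:
id backup
```

After changing group membership, the backup user must log out and back in (or the job scheduler must be restarted) for the new group to take effect.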
Regards,
Kunal.
All, thank you for the help; please find additional information. I got the following error because a log sequence was missing. Every day during the hot backup there are 2 missing archive logs, which makes our backup inconsistent and useless.
archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32531_632680691.dbf thread=1 sequence=32531
archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28768_632680691.dbf thread=2 sequence=28768
archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28769_632680691.dbf thread=2 sequence=28769
archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28770_632680691.dbf thread=2 sequence=28770
archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32532_632680691.dbf thread=1 sequence=32532
archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28771_632680691.dbf thread=2 sequence=28771
archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28772
archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28773
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/13/2012 04:22:56
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
ORA-00310: archived log contains sequence 28772; sequence 28773 required
ORA-00334: archived log: '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
Let me try the suggestions provided above. -
In Aperture 3, using File > Get Info, I'm told that many photos which are 7+ MB are only one MB or less. Are the full-sized masters held elsewhere, and if so, how can I get to them? You've guessed I'm a newbie - just 2 months with Apple and I'm very confused - any advice would be gratefully received.
You should probably have this thread moved to the Aperture Discussion group and you’ll find lots of useful tips in the threads there:
https://discussions.apple.com/community/professional_applications/aperture
Are you looking at these sizes within Aperture using the metadata view or inspecting them outside using the Finder and Show Package Contents?
I am wondering how you are importing the pictures from your camera? When you import from your camera, the right side of your display window will have a lot of options that you can select to grab your images from the camera.
Does your camera shoot RAW+JPEG? If so, you may not be importing them correctly and only picking up the JPEGS. You can import them so they stay together or separate them. The master images are all at the Project Level whereas the views you see in Albums for example are aliases that point back to your Project.
Here’s a good Aperture web site (I have no financial or other interest in this site) to help you using Aperture:
http://aperture.maccreate.com/
This article talks about RAW and JPEG images:
http://aperture.maccreate.com/2011/05/24/matching-raw-files-to-jpeg-pairs-in-aperture-3/