MaxDB log area is full
When I tried to use transaction SGEN to compile 59nnn programs (SAP_ABA, SAP_BASIS), the log area became 100% full and the job stopped. The last time I checked before that, more than 40K programs had been compiled.
I should have split this into two batches:
PI_BASIS, SAP_ABA
SAP_BASIS, SAP_BW
because PI_BASIS and SAP_BW have far fewer objects to generate.
I set automatic log backup on. The good thing is the system knows to pick up where it left off. Smart MiniSAP.
Similar Messages
-
Hi SAPDB Gurus,
Our quality box's liveCache log area has become full, and when I try to run a log backup I get the message below.
-24755,ERR_SESSIONLIMIT: No session of type 'User' available
-24994,ERR_RTE: Runtime environment error
2,utility session is already in use
I increased the MAXUSERTASKS parameter dynamically, but I am still unable to take a log backup.
I tried shutting down the liveCache, but it is not stopping either.
Can somebody help me here? Currently my quality APO box is down because of this.
My APO version is SCM 5.0, liveCache 7.6.04, and the OS is Windows Server 2003.
Thanks,
Chetan
Chetan Ramachandra wrote:
> Hi SAPDB Gurus,
>
> Our quality box live cache log area has become full and when I try to run a log backup I get below message.
>
> -24755,ERR_SESSIONLIMIT: No session of type 'User' available
> -24994,ERR_RTE: Runtime environment error
> 2,utility session is already in use
Hmm... Did you start another backup before?
> I had increased the MAXUSERTASKS parameter dynamically , again I am unable to take log backup.
You may modify the parameter value for MAXUSERTASKS while the instance is up and running, but this would only become active with the next restart of the liveCache. This parameter is in fact not dynamically changeable.
Besides, since your UTILITY session is already in use, changing this parameter value does not do a thing about your problem.
There's always just one utility session possible, regardless of the MAXUSERTASKS value!
> I tried shutting down the live cache, but this is also not stopping.
Did you try db_stop ?
> can somebody help me here ?, as currently my quality APO box down because of this.
>
> My APO version is SCM5.0, LIVE cache 7.6.04 and OS is windows server 2003.
Well, you have to perform a log backup to resolve this issue.
For that, a complete data backup must have been performed first.
Alternatively, you may switch the log mode to OVERWRITE if this liveCache instance does not need to be recoverable at all.
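A minimal dbmcli sequence for this advice might look roughly as follows (the medium names and paths are assumptions, not your actual configuration):

```
dbmcli -d <LC_NAME> -u control,<password>
util_connect
medium_put BackData /backup/lc_data FILE DATA   # define a data backup medium (path assumed)
medium_put BackLog  /backup/lc_log  FILE LOG    # define a log backup medium (path assumed)
backup_start BackData DATA                      # complete data backup first
backup_start BackLog LOG                        # then the log backup frees the log area
```

Once the log backup completes, the log area is released and the instance should accept new sessions again.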
regards,
Lars -
Steps to empty SAPDB (MaxDB) log file
Hello All,
I am on Red Hat Linux with NW 7.1 CE and SAPDB as the back end. I am trying to log in, but the log area is full. I want to empty the log, but I haven't done any data backup yet. Can anybody guide me on how to proceed with this problem?
I have some idea of what to do, along the lines of the steps below:
1. Take a data backup (I would like to skip this step if possible, since this is a QA system and we are not a production company).
2. Take a log backup using the same method as the data backup, but with type Log (am I right, or is there something else?).
3. The log will be overwritten automatically after log backups.
Or, as an alternative, should I use what I found in SAP Note 869267 - FAQ: SAP MaxDB LOG area:
Can the log area be overwritten cyclically without having to make a log backup?
Yes, the log area can be automatically overwritten without log backups. Use the DBM command
util_execute SET LOG AUTO OVERWRITE ON
to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
Log backups are not possible after automatic overwrite is switched on. The backup history is interrupted and flagged with the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
util_execute SET LOG AUTO OVERWRITE OFF
and by creating a complete data backup in the ADMIN or ONLINE status.
Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
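Putting the note's commands together for this QA scenario, a dbmcli session might look roughly like this (the database name is a placeholder and the data backup medium is an assumption; this is a sketch, not a recommendation for production):

```
dbmcli -d <SID> -u control,<password>
util_execute SET LOG AUTO OVERWRITE ON    # log area is now overwritten cyclically; HISTLOST is flagged
# ... run the system without log backups ...
util_execute SET LOG AUTO OVERWRITE OFF   # later: return to normal logging
backup_start DataMedium DATA              # a complete data backup restarts the backup history
```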
any reply will be highly appreciated.
Thanks
Mani
Hello Mani,
1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
"To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
Is the database server behind a firewall? If yes, then the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
Is "nq2host" the name of the database server? Can you ping the server "nq2host" from your machine?
2. If the database server and your PC are on the same local area network, you can start the X Server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
See the document "Network Communication" at
http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
Thank you and best regards, Natalia Khlopina -
Hello experts,
I need to change the log mode, which is currently "overwrite mode", and increase the LOG_IO_QUEUE size.
How can I do it?
Our MaxDB version is 7.6 and the OS is Linux 2.6.16.
Please suggest steps.
Thanks and regards
Dear Kavitha,
-> Please review the SAP Notes:
869267 FAQ: MaxDB LOG area
< "35. Can the log area be overwritten cyclically without having to make a log backup?"
"52. How large should the LOG_IO_QUEUE be when configured? " >
719652 Setting initial parameters for liveCache 7.5 or 7.6
819641 FAQ: MaxDB Performance
If you are an SAP customer, you are able to read the SAP Notes.
-> The MaxDB documentation also gives you answers to the reported questions:
http://maxdb.sap.com/documentation/ -> Open the SAP MaxDB 7.6 Library
-> Glossary
"Changing the Values of Database Parameters" at
http://maxdb.sap.com/doc/7_6/9b/e6dc41765b6024e10000000a1550b0/content.htm
"Log Queue"
http://maxdb.sap.com/doc/7_6/23/c806c81e20f946a59a421e01c42c3b/content.htm
< The new value of the LOG_IO_QUEUE parameter is activated only after the database has been restarted from offline mode. >
"Displaying and Changing Database Parameters" using DBMGUI tool at
http://maxdb.sap.com/doc/7_6/84/d8d198570411d4aa82006094b92fad/content.htm
-> Please pay attention:
"Automatic overwrite of the log area without log backups is NOT
recommended for production operation. Since no backup history exists
for the following changes in the database, you cannot track transactions
in the case of recovery. "
Run dbm command 'param_getexplain LOG_IO_QUEUE'.
Maybe the log volumes can be moved to a faster disk to accelerate the physical log I/O.
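Both changes can be sketched in dbmcli (the values are examples only; check param_getexplain and the notes above first, and note the required restart):

```
dbmcli -d <SID> -u control,<password>
param_directput LOG_IO_QUEUE 400           # example value, not a recommendation
db_offline
db_online                                  # LOG_IO_QUEUE becomes active after this restart
util_execute SET LOG AUTO OVERWRITE OFF    # leave overwrite mode
backup_start DataMedium DATA               # complete data backup restarts the backup history
```

DataMedium is a placeholder; a backup medium must have been defined with medium_put beforehand.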
-> What is the version of the database? < Please also give the patch and build number >
Why do you need to change the log mode which is in "overwrite mode"
and increase the value of the database parameter LOG_IO_QUEUE?
Thank you and best regards, Natalia Khlopina -
Hi All,
We installed an SRM system with DB2 9.1 and are using LOGARCHMETH1. The log directory /db2/SID/log_dir keeps getting full.
We are taking a full online backup including logs every week for the development server.
Can I delete the logs in log_dir, since we are taking a full online backup including logs? We don't have a tape drive to back up the archive logs.
Please suggest..
Regards,
Sreekanth
Hi All,
Please find the DB2 configuration parameters below. Please advise whether they are OK or anything needs to change.
H:\db2\db2srd\db2_software\BIN>db2 get db cfg for SRD
Database Configuration for Database SRD
Database configuration release level = 0x0b00
Database release level = 0x0b00
Database territory = en_US
Database code page = 1208
Database code set = UTF-8
Database country/region code = 1
Database collating sequence = IDENTITY_16BIT
Alternate collating sequence (ALT_COLLATE) =
Database page size = 16384
Dynamic SQL Query management (DYN_QUERY_MGMT) = DISABLE
Discovery support for this database (DISCOVER_DB) = ENABLE
Restrict access = NO
Default query optimization class (DFT_QUERYOPT) = 5
Degree of parallelism (DFT_DEGREE) = 1
Continue upon arithmetic exceptions (DFT_SQLMATHWARN) = NO
Default refresh age (DFT_REFRESH_AGE) = 0
Default maintained table types for opt (DFT_MTTB_TYPES) = SYSTEM
Number of frequent values retained (NUM_FREQVALUES) = 10
Number of quantiles retained (NUM_QUANTILES) = 20
Backup pending = NO
Database is consistent = NO
Rollforward pending = NO
Restore pending = NO
Multi-page file allocation enabled = YES
Log retain for recovery status = NO
User exit for logging status = YES
Self tuning memory (SELF_TUNING_MEM) = ON
Size of database shared memory (4KB) (DATABASE_MEMORY) = 2795520
Database memory threshold (DB_MEM_THRESH) = 10
Max storage for lock list (4KB) (LOCKLIST) = AUTOMATIC
Percent. of lock lists per application (MAXLOCKS) = AUTOMATIC
Package cache size (4KB) (PCKCACHESZ) = AUTOMATIC
Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = AUTOMATIC
Sort list heap (4KB) (SORTHEAP) = AUTOMATIC
Database heap (4KB) (DBHEAP) = 25000
Catalog cache size (4KB) (CATALOGCACHE_SZ) = 2560
Log buffer size (4KB) (LOGBUFSZ) = 1024
Utilities heap size (4KB) (UTIL_HEAP_SZ) = 10000
Buffer pool size (pages) (BUFFPAGE) = 10000
Max size of appl. group mem set (4KB) (APPGROUP_MEM_SZ) = 128000
Percent of mem for appl. group heap (GROUPHEAP_RATIO) = 25
Max appl. control heap size (4KB) (APP_CTL_HEAP_SZ) = 1600
SQL statement heap (4KB) (STMTHEAP) = 5120
Default application heap (4KB) (APPLHEAPSZ) = 4096
Statistics heap size (4KB) (STAT_HEAP_SZ) = 15000
Interval for checking deadlock (ms) (DLCHKTIME) = 10000
Lock timeout (sec) (LOCKTIMEOUT) = 3600
Changed pages threshold (CHNGPGS_THRESH) = 40
Number of asynchronous page cleaners (NUM_IOCLEANERS) = AUTOMATIC
Number of I/O servers (NUM_IOSERVERS) = AUTOMATIC
Index sort flag (INDEXSORT) = YES
Sequential detect flag (SEQDETECT) = YES
Default prefetch size (pages) (DFT_PREFETCH_SZ) = 32
Track modified pages (TRACKMOD) = ON
Default number of containers = 1
Default tablespace extentsize (pages) (DFT_EXTENT_SZ) = 2
Max number of active applications (MAXAPPLS) = AUTOMATIC
Average number of active applications (AVG_APPLS) = AUTOMATIC
Max DB files open per application (MAXFILOP) = 1950
Log file size (4KB) (LOGFILSIZ) = 16380
Number of primary log files (LOGPRIMARY) = 20
Number of secondary log files (LOGSECOND) = 40
Changed path to log files (NEWLOGPATH) =
Path to log files = H:\db2\SRD\log_dir\NODE0000\
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
First active log file = S0002249.LOG
Block log on disk full (BLK_LOG_DSK_FUL) = YES
Percent max primary log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0
Group commit count (MINCOMMIT) = 1
Percent log file reclaimed before soft chckpt (SOFTMAX) = 300
Log retain for recovery enabled (LOGRETAIN) = OFF
User exit for logging enabled (USEREXIT) = OFF
HADR database role = STANDARD
HADR local host name (HADR_LOCAL_HOST) =
HADR local service name (HADR_LOCAL_SVC) =
HADR remote host name (HADR_REMOTE_HOST) =
HADR remote service name (HADR_REMOTE_SVC) =
HADR instance name of remote server (HADR_REMOTE_INST) =
HADR timeout value (HADR_TIMEOUT) = 120
HADR log write synchronization mode (HADR_SYNCMODE) = NEARSYNC
First log archive method (LOGARCHMETH1) = DISK:H:\db2\SRD\log_dir\NODE0000\
Options for logarchmeth1 (LOGARCHOPT1) =
Second log archive method (LOGARCHMETH2) = OFF
Options for logarchmeth2 (LOGARCHOPT2) =
Failover log archive path (FAILARCHPATH) =
Number of log archive retries on error (NUMARCHRETRY) = 5
Log archive retry Delay (secs) (ARCHRETRYDELAY) = 20
Vendor options (VENDOROPT) =
Auto restart enabled (AUTORESTART) = ON
Index re-creation time and redo index build (INDEXREC) = SYSTEM (RESTART)
Log pages during index build (LOGINDEXBUILD) = OFF
Default number of loadrec sessions (DFT_LOADREC_SES) = 1
Number of database backups to retain (NUM_DB_BACKUPS) = 12
Recovery history retention (days) (REC_HIS_RETENTN) = 60
TSM management class (TSM_MGMTCLASS) =
TSM node name (TSM_NODENAME) =
TSM owner (TSM_OWNER) =
TSM password (TSM_PASSWORD) =
Automatic maintenance (AUTO_MAINT) = ON
Automatic database backup (AUTO_DB_BACKUP) = OFF
Automatic table maintenance (AUTO_TBL_MAINT) = ON
Automatic runstats (AUTO_RUNSTATS) = ON
Automatic statistics profiling (AUTO_STATS_PROF) = OFF
Automatic profile updates (AUTO_PROF_UPD) = OFF
Automatic reorganization (AUTO_REORG) = OFF
And I have cut and pasted the old logs to another location.
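Instead of moving archived log files by hand, history entries and files that are already covered by a retained backup can be pruned through DB2 itself. A sketch (the timestamp is an example; whether AND DELETE also removes the log files depends on the DB2 release and settings):

```
db2 connect to SRD
db2 "prune history 20111101 and delete"   # drop recovery history entries (and their files) older than 2011-11-01
db2 terminate
```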
Regards,
Sreekanth -
Why is the Log Area size much smaller than the log volume
I have been following up on an Early Watch report that has been generated for our production SCM 5.0 system running liveCache 7.6.02 Build 14. The alert says "The LOG volumes size in your system is too small. Recommendation: Configure LOG Volumes to at least 2 GB". There are two interesting things about this.
1) I have spent the last couple of days reading all the OSS Notes and MaxDB documentation I could find, and this does not seem to be documented as a recommendation anywhere. Does this seem like a realistic recommendation, given that it does not take the level of change activity into account?
2) There is a single log volume allocated with size 2,097,160 KB. In production LC10 and DBMGUI report this to be correct size under volume details, but only list the total log area size as 1,706,328 (81% of the volume size). We have a non-production environment with exactly the same size log volume, but it reports that the log area size is 2,032,008 KB (97% of the volume). What leads to these different amounts of wasted space, and is there any way of getting the database to start using it?
Thanks,
Mark
Hi Natalia,
I did read 869267, several times. It does not answer my questions which is why I posted here.
DBMGUI version = 7.6.00.25
DBMCLI commands for PL1 (Production)
> xinstinfo PL1
IndepData : /sapdb/data
IndepPrograms : /sapdb/programs
InstallationPath : /sapdb/PL1/db
Kernelversion : KERNEL 7.6.02 BUILD 014-123-152-175
Rundirectory : /sapdb/data/wrk/PL1
> dbmcli -d PL1 -u control,control
dbmcli on PL1>db_state
OK
State
ONLINE
dbmcli on PL1>info log
OK
END
Name | Value
Log Mirrored = NO
Log Writing = ON
Log Automatic Overwrite = OFF
Max. Size (KB) = 1706328
Backup Segment Size (KB) = 699048
Used Size (KB) = 104640
Used Size (%) = 6
Not Saved (KB) = 104640
Not Saved (%) = 6
Log Since Last Data Backup (KB) = 0
Savepoints = 5210
Checkpoints = 0
Physical Reads = 2469115
Physical Writes = 15655616
Queue Size (KB) = 48000
Queue Overflows = 646
Group Commits = 98205
Waits for Logwriter = 10957511
Max. Waits = 10
Average Waits = 0
OMS Log Used Pages = 0
OMS Min. Free Pages = 0
dbmcli on PL1>param_getvolsall
OK
LOG_MIRRORED NO
MAXLOGVOLUMES 2
MAXDATAVOLUMES 14
LOG_VOLUME_NAME_001 262145 F /sapdb/PL1/saplog/DISKL001
DATA_VOLUME_NAME_0001 524289 F /sapdb/PL1/sapdata/DISKD0001
DATA_VOLUME_NAME_0002 524289 F /sapdb/PL1/sapdata/DISKD0002
DATA_VOLUME_NAME_0003 524289 F /sapdb/PL1/sapdata/DISKD0003
DATA_VOLUME_NAME_0004 524289 F /sapdb/PL1/sapdata/DISKD0004
DATA_VOLUME_NAME_0005 524289 F /sapdb/PL1/sapdata/DISKD0005
DATA_VOLUME_NAME_0006 524289 F /sapdb/PL1/sapdata/DISKD0006
DATA_VOLUME_NAME_0007 524289 F /sapdb/PL1/sapdata/DISKD0007
DATA_VOLUME_NAME_0008 524289 F /sapdb/PL1/sapdata/DISKD0008
DATA_VOLUME_NAME_0009 1048577 F /sapdb/PL1/sapdata/DISKD0009
DATA_VOLUME_NAME_0010 1048577 F /sapdb/PL1/sapdata/DISKD0010
DATA_VOLUME_NAME_0011 1048577 F /sapdb/PL1/sapdata/DISKD0011
DATA_VOLUME_NAME_0012 1048577 F /sapdb/PL1/sapdata/DISKD0012
dbmcli on PL1>param_directget MAXCPU
OK
MAXCPU 12
dbmcli on PL1>param_directget MAX_LOG_QUEUE_COUNT
OK
MAX_LOG_QUEUE_COUNT 0
dbmcli on PL1>param_directget LOG_IO_QUEUE
OK
LOG_IO_QUEUE 6000
> xinstinfo SL1
IndepData : /sapdb/data
IndepPrograms : /sapdb/programs
InstallationPath : /sapdb/SL1/db
Kernelversion : KERNEL 7.6.02 BUILD 014-123-152-175
Rundirectory : /sapdb/data/wrk/SL1
dbmcli on SL1>db_state
OK
State
ONLINE
dbmcli on SL1>info log
OK
END
Name | Value
Log Mirrored = NO
Log Writing = ON
Log Automatic Overwrite = OFF
Max. Size (KB) = 2032008
Backup Segment Size (KB) = 699048
Used Size (KB) = 3824
Used Size (%) = 0
Not Saved (KB) = 3824
Not Saved (%) = 0
Log Since Last Data Backup (KB) = 0
Savepoints = 1256
Checkpoints = 0
Physical Reads = 2178269
Physical Writes = 4969914
Queue Size (KB) = 16000
Queue Overflows = 21201
Group Commits = 643
Waits for Logwriter = 751336
Max. Waits = 4
Average Waits = 0
OMS Log Used Pages = 0
OMS Min. Free Pages = 0
dbmcli on SL1>param_getvolsall
OK
LOG_MIRRORED NO
MAXLOGVOLUMES 2
MAXDATAVOLUMES 10
LOG_VOLUME_NAME_001 262145 F /sapdb/SL1/saplog/DISKL001
DATA_VOLUME_NAME_0001 262145 F /sapdb/SL1/sapdata1/DISKD0001
DATA_VOLUME_NAME_0002 262145 F /sapdb/SL1/sapdata2/DISKD0002
DATA_VOLUME_NAME_0003 1048577 F /sapdb/SL1/sapdata3/DISKD0003
DATA_VOLUME_NAME_0004 1048577 F /sapdb/SL1/sapdata4/DISKD0004
DATA_VOLUME_NAME_0005 783501 F /sapdb/SL1/sapdata1/DISKD0005
DATA_VOLUME_NAME_0006 783501 F /sapdb/SL1/sapdata2/DISKD0006
dbmcli on SL1>param_directget MAXCPU
OK
MAXCPU 4
dbmcli on SL1>param_directget MAX_LOG_QUEUE_COUNT
OK
MAX_LOG_QUEUE_COUNT 0
dbmcli on SL1>param_directget LOG_IO_QUEUE
OK
LOG_IO_QUEUE 2000
Thanks for the explanation of the reserved space for the log queue pages. That does explain the difference between the two. I think we probably have our log segment size too large. As you can see, we do get occasional log queue overflows. Do you suggest we increase the size of our log I/O queue, and allocate more log volume space to compensate?
select * from SYSINFO.LOGSTATISTICS (on PL1)
1706328;334176;19;334176;19;1879;20305192;64109066;7806480;182151514;12;48000
DBMGUI Log Area Usage
Total Size: 2048.01 MB
Free Log Area: 1330.38 MB
Used Log Area: 335.96 MB
Unsaved Log Area: 335.96 MB
Log since Last Data Backup: 0.00 MB
Thanks,
Mark -
liveCache problem with log area
In LiveCache monitoring I can see the message: "Log area full - back up log!"
I ran report RSLVCBACKUP interactively with the parameter "Complete Data Backup". After it runs, an error is displayed:
"Failed to determine media (Database connection: LCA)
Message no. LVC505 ".
Am I taking the right measures for the log area backup? And if everything is right, how can I avoid error LVC505?
Sounds like you did not configure a backup medium for your liveCache.
To get rid of the log-full situation, you may just use DBMGUI now: take a full backup and activate autolog backup afterwards.
Once this is done you may check what's wrong with your report usage.
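Defining the media and enabling automatic log backup can also be done from dbmcli (the medium names and paths are assumptions):

```
dbmcli -d LCA -u control,<password>
medium_put BackData /backup/lca_data FILE DATA   # data backup medium (path assumed)
medium_put BackLog  /backup/lca_log  FILE LOG    # log backup medium (path assumed)
backup_start BackData DATA                       # full data backup first
autolog_on BackLog                               # automatic log backups then keep the log area free
```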
regards,
Lars -
How to determine which archive logs are needed in flashback.
Hi,
Let's assume I have archive logs 1,2,3,4, then a "backup database plus archivelogs" in RMAN, and then archive logs 5+6. If I want to flashback my database to a point immediately after the backup, how do I determine which archive logs are needed?
I would assume I'd only need archive logs 5 and/or 6 since I did a full backup plus archivelogs and the database would have been checkpointed at that time. I'd also assume archive logs 1,2,3,4 would be obsolete as they would have been flushed to the datafiles in the checkpoint.
Are my assumptions correct? If not what queries can I run to determine what files are needed for a flashback using the latest checkpointed datafiles?
Thanks.
Thanks for the explanation; let me be more specific about my problem.
I am trying to do a flashback on a failed primary database. The only reason I want a flashback is that Data Guard uses the flashback command to try to resynchronize the failed database. Specifically, Data Guard is trying to run:
FLASHBACK DATABASE TO SCN 865984
But it fails. If I run it manually, I get:
SQL> FLASHBACK DATABASE TO SCN 865984;
FLASHBACK DATABASE TO SCN 865984
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38761: redo log sequence 5 in thread 1, incarnation 3 could not be accessed
Looking at the last checkpoint I see:
CHECKPOINT_CHANGE#
865857
Also looking at the archive logs:
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# RESETLOGS_CHANGE# RESETLOGS
25 766838550 1 1 863888 10-NOV-11 863892 863888 10-NOV-11
26 766838867 1 2 863892 10-NOV-11 864133 863888 10-NOV-11
27 766839225 1 3 864133 10-NOV-11 864289 863888 10-NOV-11
28 766839340 1 4 864289 10-NOV-11 864336 863888 10-NOV-11
29 766840698 1 5 864336 10-NOV-11 865640 863888 10-NOV-11
30 766841128 1 6 865640 10-NOV-11 865833 863888 10-NOV-11
31 766841168 1 7 865833 10-NOV-11 865857 863888 10-NOV-11
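For reference, the SCN range that the flashback logs themselves cover, and the archived logs overlapping it, can be checked with queries along these lines (a sketch using standard V$ views):

```sql
-- Oldest SCN reachable with the available flashback logs
SELECT oldest_flashback_scn, oldest_flashback_time
FROM   v$flashback_database_log;

-- Archived logs at or after that point; some of these may still be
-- needed to make the flashed-back datafiles consistent at the target SCN
SELECT sequence#, first_change#, next_change#
FROM   v$archived_log
WHERE  next_change# >= (SELECT oldest_flashback_scn
                        FROM   v$flashback_database_log)
ORDER  BY sequence#;
```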
How can I determine which archive logs are needed by a flashback command? I deleted all archive logs with an SCN less than the checkpoint number. I can restore them from backup, but I am trying to figure out how to query what is required for a flashback. Maybe this ties in with the point that flashback has nothing to do with the backups of datafiles or the checkpoints? -
Are t-logs truncated in DBs participating in High Availability (SQL Server AlwaysOn 2012)?
Hi,
How are transaction logs cleared for DBs participating in SQL AlwaysOn? I have a DB participating in HA. In order to use AlwaysOn, the DB recovery model must be set to Full, and with this recovery model my transaction logs will grow continuously.
How can the t-logs be cleared for the DBs participating in an AG? Is there any recommendation?
Thanks
Manish
Hello,
Have you scheduled log backups on the primary? The secondary replica might be applying transaction log records of this database to a corresponding secondary database.
See what the query below returns:
select name, log_reuse_wait_desc from sys.databases where name = 'db_name'
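In an Availability Group, the log is only truncated by regular log backups (typically scheduled on the primary). A minimal sketch, with the database name and path as assumptions:

```sql
-- When log_reuse_wait_desc shows LOG_BACKUP, a log backup lets
-- the engine truncate the inactive part of the transaction log
BACKUP LOG [db_name]
TO DISK = N'X:\Backups\db_name.trn';   -- path is an example
```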
What if the log directory is full?
I've just switched from Oracle to DB2.
I have a simple question: what happens if the log directory is full before the logs are archived?
Forgive me if this is too basic.
Thanks!
Hello, this depends on the setting of the BLK_LOG_DSK_FUL database configuration parameter.
[http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.admin.config.doc/doc/r0005787.html]
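The current value can be checked and changed with the DB2 CLP, roughly like this (SID is a placeholder):

```
db2 get db cfg for SID | grep -i BLK_LOG_DSK_FUL
db2 update db cfg for SID using BLK_LOG_DSK_FUL YES   # YES: applications wait and DB2 retries
                                                      # instead of failing on a full log disk
```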
SAP standard setting is YES -
Logs are not getting applied in the standby database
Hello,
I have created a physical standby database, and the logs are not getting applied to it.
Following is an extract of the standby alert log:
Wed Sep 05 07:53:59 2012
Media Recovery Log /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Error opening /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Attempting refetch
Media Recovery Waiting for thread 1 sequence 37638
Fetching gap sequence in thread 1, gap sequence 37638-37643
Wed Sep 05 07:53:59 2012
RFS[46]: Assigned to RFS process 3081
RFS[46]: Allowing overwrite of partial archivelog for thread 1 sequence 37638
RFS[46]: Opened log for thread 1 sequence *37638* dbid 1723205832 branch 765704228
Wed Sep 05 07:55:34 2012
RFS[42]: Possible network disconnect with primary database
However, the archived files are getting copied to the standby server.
I tried registering and recovering the logs, but that also failed.
Here is some of the information:
Primary
Oracle 11gR2 EE
SQL> select max(sequence#) from v$log where archived='YES';
MAX(SEQUENCE#)
37668
SQL> select DEST_NAME, RECOVERY_MODE,DESTINATION,ARCHIVED_SEQ#,APPLIED_SEQ#, SYNCHRONIZATION_STATUS,SYNCHRONIZED,GAP_STATUS from v$archive_dest_status where DEST_NAME = 'LOG_ARCHIVE_DEST_3';
DEST_NAME RECOVERY_MODE DESTINATION ARCHIVED_SEQ# APPLIED_SEQ# SYNCHRONIZATION_STATUS SYNCHRONIZED GAP_STATUS
LOG_ARCHIVE_DEST_3 MANAGED REAL TIME APPLY ABC 37356 0 CHECK CONFIGURATION NO RESOLVABLE GAP
Standby
Oracle 11gR2 EE
SQL> select max(sequence#) from v$archived_log where applied='YES';
MAX(SEQUENCE#)
37637
SQL> select * from v$archive_gap;
no rows selected
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
READ ONLY WITH APPLY PHYSICAL STANDBY
Please help me to troubleshoot this and get the standby in sync.
Thanks a lot.
The results are as follows:
SQL> select process, status, group#, thread#, sequence# from v$managed_standby order by process, group#, thread#, sequence#;
PROCESS STATUS GROUP# THREAD# SEQUENCE#
ARCH CLOSING 1 1 37644
ARCH CLOSING 1 1 37659
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
MRP0 WAIT_FOR_GAP N/A 1 37638
RFS IDLE N/A 0 0
RFS IDLE N/A 0 0
RFS IDLE N/A 0 0
RFS RECEIVING N/A 1 37638
RFS RECEIVING N/A 1 37639
RFS RECEIVING N/A 1 37640
RFS RECEIVING N/A 1 37641
RFS RECEIVING N/A 1 37642
RFS RECEIVING N/A 1 37655
RFS RECEIVING N/A 1 37673
RFS RECEIVING N/A 1 37675
42 rows selected.
SQL>
SQL> select name,value, time_computed from v$dataguard_stats;
NAME VALUE TIME_COMPUTED
transport lag +00 02:44:33 09/05/2012 09:25:58
apply lag +00 03:14:30 09/05/2012 09:25:58
apply finish time +00 00:01:09.974 09/05/2012 09:25:58
estimated startup time 12 09/05/2012 09:25:58
SQL> select timestamp , facility, dest_id, message_num, error_code, message from v$dataguard_status order by timestamp
TIMESTAMP FACILITY DEST_ID MESSAGE_NUM ERROR_CODE MESSAGE
05-SEP-12 Remote File Server 0 60 0 RFS[13]: Assigned to RFS process 2792
05-SEP-12 Remote File Server 0 59 0 RFS[12]: Assigned to RFS process 2790
05-SEP-12 Log Apply Services 0 61 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 62 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 63 0 Managed Standby Recovery Canceled
05-SEP-12 Log Apply Services 0 64 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 65 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 66 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 67 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 68 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 69 0 RFS[5]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 70 0 RFS[6]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 71 0 RFS[14]: Assigned to RFS process 2829
05-SEP-12 Remote File Server 0 72 0 RFS[15]: Assigned to RFS process 2831
05-SEP-12 Network Services 0 73 0 RFS[9]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 74 0 RFS[16]: Assigned to RFS process 2833
05-SEP-12 Network Services 0 75 0 RFS[1]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 76 0 RFS[17]: Assigned to RFS process 2837
05-SEP-12 Network Services 0 77 0 RFS[3]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 78 0 RFS[2]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 79 0 RFS[7]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 80 0 RFS[18]: Assigned to RFS process 2848
05-SEP-12 Network Services 0 81 0 RFS[16]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 82 0 RFS[19]: Assigned to RFS process 2886
05-SEP-12 Network Services 0 83 0 RFS[19]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 84 0 RFS[20]: Assigned to RFS process 2894
05-SEP-12 Log Apply Services 0 85 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 86 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 87 0 Managed Standby Recovery Canceled
05-SEP-12 Remote File Server 0 89 0 RFS[22]: Assigned to RFS process 2900
05-SEP-12 Remote File Server 0 88 0 RFS[21]: Assigned to RFS process 2898
05-SEP-12 Remote File Server 0 90 0 RFS[23]: Assigned to RFS process 2902
05-SEP-12 Remote File Server 0 91 0 Primary database is in MAXIMUM PERFORMANCE mode
05-SEP-12 Remote File Server 0 92 0 RFS[24]: Assigned to RFS process 2904
05-SEP-12 Remote File Server 0 93 0 RFS[25]: Assigned to RFS process 2906
05-SEP-12 Log Apply Services 0 94 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 95 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 96 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 97 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Log Transport Services 0 98 0 ARCa: Beginning to archive thread 1 sequence 37644 (7911979302-7912040568)
05-SEP-12 Log Transport Services 0 99 0 ARCa: Completed archiving thread 1 sequence 37644 (0-0)
05-SEP-12 Network Services 0 100 0 RFS[8]: Possible network disconnect with primary database
05-SEP-12 Log Apply Services 0 101 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 102 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 103 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 104 0 Managed Standby Recovery Canceled
05-SEP-12 Network Services 0 105 0 RFS[20]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 106 0 RFS[26]: Assigned to RFS process 2930
05-SEP-12 Network Services 0 107 0 RFS[24]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 108 0 RFS[27]: Assigned to RFS process 2938
05-SEP-12 Network Services 0 109 0 RFS[14]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 110 0 RFS[28]: Assigned to RFS process 2942
05-SEP-12 Network Services 0 111 0 RFS[15]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 112 0 RFS[29]: Assigned to RFS process 2986
05-SEP-12 Network Services 0 113 0 RFS[17]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 114 0 RFS[30]: Assigned to RFS process 2988
05-SEP-12 Log Apply Services 0 115 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 116 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 117 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 118 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 119 0 RFS[18]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 120 0 RFS[31]: Assigned to RFS process 3003
05-SEP-12 Network Services 0 121 0 RFS[26]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 122 0 RFS[32]: Assigned to RFS process 3005
05-SEP-12 Network Services 0 123 0 RFS[27]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 124 0 RFS[33]: Assigned to RFS process 3009
05-SEP-12 Remote File Server 0 125 0 RFS[34]: Assigned to RFS process 3012
05-SEP-12 Log Apply Services 0 126 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 127 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 128 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 129 0 Managed Standby Recovery Canceled
05-SEP-12 Network Services 0 130 0 RFS[32]: Possible network disconnect with primary database
05-SEP-12 Log Apply Services 0 131 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 132 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 133 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 134 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 135 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 136 0 RFS[33]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 137 0 RFS[35]: Assigned to RFS process 3033
05-SEP-12 Log Apply Services 0 138 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 139 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 140 0 Managed Standby Recovery Canceled
05-SEP-12 Remote File Server 0 141 0 RFS[36]: Assigned to RFS process 3047
05-SEP-12 Log Apply Services 0 142 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 143 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 144 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 145 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 146 0 RFS[35]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 147 0 RFS[37]: Assigned to RFS process 3061
05-SEP-12 Network Services 0 148 0 RFS[36]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 149 0 RFS[38]: Assigned to RFS process 3063
05-SEP-12 Remote File Server 0 150 0 RFS[39]: Assigned to RFS process 3065
05-SEP-12 Network Services 0 151 0 RFS[25]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 152 0 RFS[21]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 153 0 Archivelog record exists, but no file is found
05-SEP-12 Remote File Server 0 154 0 RFS[40]: Assigned to RFS process 3067
05-SEP-12 Network Services 0 155 0 RFS[37]: Possible network disconnect with primary database
-
Hi,
I want to fetch the list of users who have Full Control access to a SharePoint list, using the client object model with .NET.
Please let me know if there is a property on the user object, or any other way, to get this.
Thanks in advance.
Here is the complete code I created some years ago. It lists all groups and users; you can just add a check in the permissions loop to see whether the permission level name equals "Full Control".
private void GetData(object obj)
{
    MyArgs args = obj as MyArgs;
    try
    {
        if (args == null)
            return; // called without parameters or with an invalid type
        using (ClientContext clientContext = new ClientContext(args.URL))
        {
            NetworkCredential credentials = new NetworkCredential(args.UserName, args.Password, args.Domain);
            clientContext.Credentials = credentials;
            RoleAssignmentCollection roles = clientContext.Web.RoleAssignments;
            ListViewItem lvi;
            ListViewItem.ListViewSubItem lvsi;
            ListViewItem lvigroup;
            ListViewItem.ListViewSubItem lvsigroup;
            clientContext.Load(roles);
            clientContext.ExecuteQuery();
            foreach (RoleAssignment orole in roles)
            {
                clientContext.Load(orole.Member);
                clientContext.ExecuteQuery();
                // principal name and type (user or group)
                lvi = new ListViewItem();
                lvi.Text = orole.Member.LoginName;
                lvsi = new ListViewItem.ListViewSubItem();
                lvsi.Text = orole.Member.PrincipalType.ToString();
                lvi.SubItems.Add(lvsi);
                // if the principal is a SharePoint group, list its members
                if (orole.Member.PrincipalType.ToString() == "SharePointGroup")
                {
                    lvigroup = new ListViewItem();
                    lvigroup.Text = orole.Member.LoginName;
                    DoUpdate1(lvigroup);
                    Group group = clientContext.Web.SiteGroups.GetById(orole.Member.Id);
                    UserCollection collUser = group.Users;
                    clientContext.Load(collUser);
                    clientContext.ExecuteQuery();
                    foreach (User oUser in collUser)
                    {
                        lvigroup = new ListViewItem();
                        lvigroup.Text = "";
                        lvsigroup = new ListViewItem.ListViewSubItem();
                        lvsigroup.Text = oUser.LoginName;
                        lvigroup.SubItems.Add(lvsigroup);
                        DoUpdate1(lvigroup);
                    }
                }
                // permission levels bound to this role assignment
                RoleDefinitionBindingCollection roleDefsbindings = orole.RoleDefinitionBindings;
                clientContext.Load(roleDefsbindings);
                clientContext.ExecuteQuery();
                lvsi = new ListViewItem.ListViewSubItem();
                string permissionsstr = string.Empty;
                for (int i = 0; i < roleDefsbindings.Count; i++)
                {
                    if (i == roleDefsbindings.Count - 1)
                        permissionsstr += roleDefsbindings[i].Name; // last entry: no trailing comma
                    else
                        permissionsstr += roleDefsbindings[i].Name + ", ";
                }
                lvsi.Text = permissionsstr;
                lvi.SubItems.Add(lvsi);
                DoUpdate2(lvi);
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        DoUpdate3();
    }
}
Kind Regards, John Naguib Technical Consultant/Architect MCITP, MCPD, MCTS, MCT, TOGAF 9 Foundation
-
Restored standby database from primary; now no logs are shipped
Hi
We recently had a major network/SAN issue and had to restore our standby database from a backup of the primary. To do this, we restored the backup onto the standby, created a standby controlfile on the primary, copied it across to the controlfile locations, started the database in standby mode, and manually applied/registered the logs to bring it back up to date.
However, no new logs are being shipped across from the primary.
Have we missed a step somewhere?
One thing we've noticed is that there is no RFS process running on the standby:
SQL> SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
PROCESS CLIENT_P SEQUENCE# STATUS
ARCH ARCH 0 CONNECTED
ARCH ARCH 0 CONNECTED
MRP0 N/A 100057 WAIT_FOR_LOG
How do we start this? Or will it only show if the arc1 process on the primary is sending files?
The arc1 process is showing at OS level on the primary, but I'm wondering if it's faulty somehow?
There are NO errors in the alert logs in the primary or the standby. There's not even the normal FAL gap sequence type error - in the standby it's just saying 'waiting for log' and a number from ages ago. It's like the primary isn't even talking to the standby. The listener is up and running ok though...
What else can we check/do?
If we manually copy across files and do an 'alter database register' then they are applied to the standby without issue; there's just no automatic log shipping going on...
Thanks
Ross
Hi all
Many thanks for all the responses.
The database is 10.2.0.2.0, on AIX 6.
I believe the password files are ok; we've had issues previously and this is always flagged in the alert log on the primary - not the case here.
Not set to DEFER on primary; log_archive_dest_2 is set to service="STBY_PHP" optional delay=720 reopen=30 and log_archive_dest_state_2 is set to ENABLE.
I ran those troubleshooting scripts, info from standby:
SQL> @troubleshoot
NAME DISPLAY_VALUE
db_file_name_convert
db_name PHP
db_unique_name PHP
dg_broker_config_file1 /oracle/PHP/102_64/dbs/dr1PHP.dat
dg_broker_config_file2 /oracle/PHP/102_64/dbs/dr2PHP.dat
dg_broker_start FALSE
fal_client STBY_PHP
fal_server PHP
local_listener
log_archive_config
log_archive_dest_2 service=STBY_PHP optional delay=30 reopen=30
log_archive_dest_state_2 DEFER
log_archive_max_processes 2
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest /oracle/PHP/oraarch/PHParch
standby_file_management AUTO
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE
PHP PHP MAXIMUM PERFORMANCE PHYSICAL STANDBY MOUNTED
THREAD# MAX(SEQUENCE#)
1 100149
PROCESS STATUS THREAD# SEQUENCE#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
MRP0 WAIT_FOR_LOG 1 100150
NAME VALUE UNIT TIME_COMPUTED
apply finish time day(2) to second(1) interval
apply lag day(2) to second(0) interval
estimated startup time 8 second
standby has been open N
transport lag day(2) to second(0) interval
NAME Size MB Used MB
0 0
On the primary, the script has frozen! How long should it take? It got as far as this:
SQL> @troubleshoot
NAME DISPLAY_VALUE
db_file_name_convert
db_name PHP
db_unique_name PHP
dg_broker_config_file1 /oracle/PHP/102_64/dbs/dr1PHP.dat
dg_broker_config_file2 /oracle/PHP/102_64/dbs/dr2PHP.dat
dg_broker_start FALSE
fal_client STBY_R1P
fal_server R1P
local_listener
log_archive_config
log_archive_dest_2 service="STBY_PHP" optional delay=720 reopen=30
log_archive_dest_state_2 ENABLE
log_archive_max_processes 2
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest /oracle/PHP/oraarch/PHParch
standby_file_management AUTO
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE SWITCHOVER_STATUS
PHP PHP MAXIMUM PERFORMANCE PRIMARY READ WRITE SESSIONS ACTIVE
THREAD# MAX(SEQUENCE#)
1 100206
NOW - before you say it - :) - yes, I'm aware that fal_client as STBY_R1P and fal_server as R1P are incorrect - they should be PHP - but it looks like it's always been this way! Well, at least for the last 4 years, where it's worked fine, as I found an old SP file and it still has R1P set in there...?!?
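For what it's worth, if those FAL settings do turn out to be the culprit, the correction is only a couple of ALTER SYSTEM commands. This is just a sketch - the service names PHP and STBY_PHP are assumed from this thread, so verify them against your tnsnames.ora before running anything:

```sql
-- On the STANDBY: point FAL at the primary's service and identify this
-- standby as the FAL client (both service names are assumptions here).
ALTER SYSTEM SET fal_server = 'PHP'      SCOPE=BOTH;
ALTER SYSTEM SET fal_client = 'STBY_PHP' SCOPE=BOTH;

-- On the PRIMARY: check the remote destination for errors, then force a
-- log switch so a fresh archive log is shipped.
SELECT dest_id, status, error FROM v$archive_dest_status WHERE dest_id = 2;
ALTER SYSTEM SWITCH LOGFILE;

-- On the STANDBY: restart managed recovery.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```

If this works, v$managed_standby on the standby should show an RFS process again shortly after the log switch on the primary.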
Any ideas?
Ross
-
Can I use ink cartridges from my old printer in my new one. They are nearly full.
I got a new HP printer that uses the same ink as my old one. The cartridges are practically full. Can I use them in the new one, or do I have to toss them? Thanks
Hi rag742,
What is the model no. of both printers?
Although I am an HP employee, I am speaking for myself and not for HP.
--Say "Thanks" by clicking the Kudos Star in the post that helped you.
--Please mark the post that solves your problem as "Accepted Solution"
-
Hi All
In Message monitoring(RWB) in adapter engine i am getting the following error
SOAP: response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: Server was unable to process request. ---> The event log file is full
Can anyone suggest what might be the problem?
Thanks
Jayaraman
Edited by: Jayaraman P on May 20, 2010 4:27 PM
Edited by: Jayaraman P on May 20, 2010 4:28 PM
Jayaraman P wrote:
> SOAP: response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: Server was unable to process request. ---> The event log file is full
This is caused by a problem on the web service (WS) server, which is most likely a Windows server.
Ask the WS team to look into the issue; it is not a PI problem.