Urgent: Excessive archive logs 10g 10.2.0.4
Hi,
I am experiencing excessive archive log generation.
This has been happening for the past three days. Archive logs usually totalled about 250 MB per day.
Yesterday they jumped to 129 GB, and today 30 GB before the hard drive ran out of space.
Now I obviously have
"Archiver is unable to archive a redo log because the output device is full or unavailable" message.
The database serves a low-transaction-volume application and is awaiting deployment into production.
Not sure where to start.
Any advice would be appreciated.
I can provide any other information necessary to troubleshoot.
Thanks in advance.
Run this; it should point out the sessions with open transactions and the SQL they are running:
SELECT s.sid, s.serial#, s.username, s.program,
       t.used_ublk, t.used_urec, vsql.sql_text
FROM   v$session s, v$transaction t, v$sqlarea vsql
WHERE  s.taddr = t.addr
AND    s.sql_id = vsql.sql_id (+)
ORDER  BY 5 DESC, 6 DESC, 1, 2, 3, 4;
This assumes you can log in; you may need to move some of your archive logs elsewhere first to free up space and resume database operations.
Also, reach out to developers/users (if you can) and see if anyone is testing something or loading tables, etc.
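Once you can log in, another quick check (a sketch using the standard v$archived_log columns; adjust the time window as needed) is the archive generation rate per hour, so you can correlate the spike with jobs or tests running at that time:

select trunc(first_time,'HH24') hr, count(*) logs,
       round(sum(blocks * block_size)/1024/1024) mb
from   v$archived_log
where  first_time > sysdate - 2
group  by trunc(first_time,'HH24')
order  by 1;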
Edited by: DBA_Mike on Apr 14, 2009 10:10 AM
Similar Messages
-
Hello to All,
A lot of archives have been generated by the production database for the past 15 days: nearly 30 archives per hour, whereas previously only one or two archives were generated per hour. The log file size is 300 MB and the database has 3 log groups. Now I want to know which application or which user is generating this many archives. How can I find the reason for this much archive generation?
Thanks...
Can you tell us which Oracle version you are using?
For the time being, you can query v$sess_io to find out which sessions are generating too much redo.
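For example, something along these lines (BLOCK_CHANGES is a reasonable proxy for redo activity; the join to v$session adds the user details):

select s.sid, s.username, s.program, i.block_changes
from   v$session s, v$sess_io i
where  s.sid = i.sid
order  by i.block_changes desc;

Run it a few times and watch whose BLOCK_CHANGES grows fastest.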
Jaffar -
How to control too much of archived log generation
Hi ,
This is one of the interview questions,
I have replied to this; I would just like to know the correct answer.
How can we control excessive archived log generation?
Thanks,
796843 wrote:
Hi ,
This is one of the interview questions,
I have replied to this; I would just like to know the correct answer.
How can we control excessive archived log generation?
Thanks,
Do not do any DML, since only DML generates redo. -
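On the interview question above, a more serious lever applies when the redo comes from bulk loads into tables you can afford to re-load (my_stage below is a hypothetical staging table): direct-path inserts into a NOLOGGING table generate minimal redo, at the cost of that data not being recoverable from backups taken before the load:

-- sketch: minimal-redo bulk load into an expendable staging table
alter table my_stage nologging;
insert /*+ APPEND */ into my_stage select * from source_tab;
commit;

Note that redo for index maintenance and dictionary changes is still generated, and FORCE LOGGING at the database level overrides NOLOGGING.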
Archive Log Generation in EBusiness Suite
Hi,
I am responsible for an E-Business Suite 11.5.10.2 AIX production server. For the past 1.5 years there was heavy archive log generation (200 MB every 10 minutes); since last week it has dropped to 200 MB every 4.5 hours.
I am unable to understand this behavior. The number of users remains the same and the usage is as usual.
Is there a way I can check what has gone wrong? I could not see any errors in the alert.log either.
Please suggest what can be done.
(I have raised this issue in Metalink Forum also and awaiting a response)
Thanks
Archive log generation is directly related to the level of activity on the database, so it is almost certain that the level of activity has dropped significantly.
If possible, can you run this query and post the result:
select trunc(first_time), count(sequence#)
from   v$archived_log
where  to_char(trunc(first_time),'MONYYYY') = 'SEP2007'
group  by trunc(first_time)
order  by 1;
--Adams -
Archive log folders in Oracle 10g
Hi everybody,
I have installed Oracle 10g on my Windows XP Pro SP3 in archivelog mode.
The archived logs are saved in different folders according to the save date; for example, I have a folder called 2008_06_06 which contains all the archives saved on that day, and so on.
I'd like to have all archive logs saved by Oracle in only one folder. How can I specify this in the installation process or in the parameter settings?
Thanks, Valerio
user640800 wrote:
Hi everybody,
I have installed Oracle 10g on my Windows XP Pro SP3 in archivelog mode.
The archived logs are saved in different folders according to the save date; for example, I have a folder called 2008_06_06 which contains all the archives saved on that day, and so on.
I'd like to have all archive logs saved by Oracle in only one folder.
Why? What problem does it solve? Oracle knows where those files are, how to back them up, how to restore them from backup, and how to delete them when they are no longer needed. RMAN will take care of all the housekeeping just fine, thank you very much.
Yes, Oracle provides mechanisms for writing the archive logs to a different directory structure and with a different file name convention. But again, to what end? What problem does that solve?
how can I specify this in the installation process or in the parameter settings?
Go to tahiti.oracle.com. Drill down to your product and version. Locate the Reference Manual. Open it and scan the table of contents. You will find a section on initialization parameters. Use your browser's search function to find the parameters that deal with "archive".
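For reference, a sketch of the usual approach (the path and format string here are hypothetical examples; the date-based folders come from archiving into the flash recovery area, which is the 10g default):

-- point archiving at a single fixed directory instead of the flash recovery area
alter system set log_archive_dest_1 = 'LOCATION=C:\oracle\arch' scope=both;
-- %t = thread, %s = sequence, %r = resetlogs id (all three are required in 10g)
alter system set log_archive_format = 'arch_%t_%s_%r.arc' scope=spfile;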
Thanks, Valerio -
Archive log mode in oracle 10g
Hi,
I would like to know the archive log mode in Oracle 10g, and I use this code in SQL*Plus:
select log_mode from v$database
But it displayed "2", not NOARCHIVELOG or ARCHIVELOG.
It displayed a number, not a string.
How can I find this out?
Thanks
Hi Paul,
Because I am a newbie in Oracle DBA work, I have many difficulties.
You are very kind to help me.
So I have some more questions:
1. When I executed this code, it always reported an error:
$ tmp=`${ORACLE_HOME}/bin/sqlplus -s / as sysdba << EOF
set heading off feedback off;
exit
EOF`
tmp='ERROR:
ORA-01031: insufficient privileges
SP2-0306: Invalid option.
Usage: CONN[ECT] [logon] [AS {SYSDBA|SYSOPER}]
where <logon> ::= <username>[<password>][@<connect_identifier>] | /
SP2-0306: Invalid option.
Usage: CONN[ECT] [logon] [AS {SYSDBA|SYSOPER}]
where <logon> ::= <username>[<password>][@<connect_identifier>] | /
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus'
So when I updated it like this:
tmp=`${ORACLE_HOME}/bin/sqlplus -s sys/syspass@db02 as sysdba <<EOF
set heading off feedback off;
exit
EOF`
It ran correctly.
2. With Paul's guide:
Do not execute Oracle commands from root; execute them as the oracle user. This works for me:
$ tmp=`${ORACLE_HOME}/bin/sqlplus -s / as sysdba << EOF
set heading off feedback off
alter database backup controlfile to '${CONTROLFILE_DIR}/<file name>';
alter database backup controlfile to trace;
exit
EOF`
Of course CONTROLFILE_DIR must be set to a directory with write permission for oracle user.
For example: I have a Unix account: unix/unix
and a Sys Oracle account: oracle/oracle
I log in with the Unix account (unix/unix) and call the script file that contains the above code.
tmp=`${ORACLE_HOME}/bin/sqlplus -s oracle/oracle@db02 as sysdba <<EOF
set heading off feedback off
alter database backup controlfile to '${CONTROLFILE_DIR}/backup_control.ctl';
alter database backup controlfile to trace;
exit
EOF`
Unix reports the following: Linux error: 13: Permission denied.
CONTROLFILE_DIR directory is read,write,execute for account unix/unix.
Of course CONTROLFILE_DIR must be set to a directory with write permission for the oracle user.
You mean I have to create a Unix user matching the Oracle user so that the Oracle user has permission to write?
Please guide me in more detail.
Thanks for your attention.
Message was edited by:
user481034 -
How to reduce excessive redo log generation in Oracle 10G
Hi All,
Please let me know if there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
Previously only about 15 archive log files were generated per day, but nowadays this has increased to 40 to 45.
Below are the sizes of the redo log file members:
L.BYTES/1024/1024 MEMBER
200 /u05/applprod/prdnlog/redolog1a.dbf
200 /u06/applprod/prdnlog/redolog1b.dbf
200 /u05/applprod/prdnlog/redolog2a.dbf
200 /u06/applprod/prdnlog/redolog2b.dbf
200 /u05/applprod/prdnlog/redolog3a.dbf
200 /u06/applprod/prdnlog/redolog3b.dbf
Here is some content from the alert log for your reference, showing how frequently log switches are occurring:
Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Thread 1 advanced to log sequence 17439
Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Tue Jul 13 14:46:17 2010
Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Tue Jul 13 14:46:38 2010
Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Thread 1 advanced to log sequence 17440
Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
Tue Jul 13 14:46:52 2010
Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Tue Jul 13 14:53:33 2010
Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Thread 1 advanced to log sequence 17441
Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
Tue Jul 13 14:53:37 2010
Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Tue Jul 13 14:55:37 2010
Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
Tue Jul 13 15:15:37 2010
Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
Tue Jul 13 15:35:38 2010
Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
Tue Jul 13 15:55:39 2010
Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
Tue Jul 13 16:15:41 2010
Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
Tue Jul 13 16:35:41 2010
Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
Tue Jul 13 16:42:28 2010
Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
Thread 1 advanced to log sequence 17442
Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Thanks in advance
hi,
Use the script below to find out in which hours the most archives are generated, then check what is running in those hours, e.g. whether MVs are refreshing or a program is doing a DELETE * FROM some table.
select
  to_char(first_time,'DD-MM-YY') day,
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
  count(*) tot
from v$log_history
group by to_char(first_time,'DD-MM-YY')
order by day;
thanks,
baskar.l -
Archive log file size is varying in RAC 10g database.
---- Environment: Oracle 10g RAC, 9-node cluster database, with 3 log groups per node and 500 MB for each redo log file.
The question is why the archive log file sizes vary. I know that whenever there is a log file switch the redo log is archived, so as our redo log file size is 500 MB,
shouldn't the archive log file size also be 500 MB?
Instead we are seeing archive log files varying from 20 MB to 500 MB, which means the redo log file is not using the entire 500 MB of space. What could be causing this, and how can we resolve it?
Some init parameter values.(just for information)
fast_start_mttr_target ----- 400
log_checkpoint_timeout ----- 0
log_checkpoint_interval ----- 0
fast_start_io_target ----- 0
There was a similar discussion a few days back:
log file switch before it filled up
The guy later claimed it was because of their log_buffer size. It remains a mystery to me still. -
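On the varying sizes above, one thing worth checking is how full each archived log actually was at switch time (blocks * block_size gives the used portion; a sketch using standard v$archived_log columns):

select thread#, sequence#,
       round(blocks * block_size / 1024 / 1024) size_mb
from   v$archived_log
order  by first_time desc;

In RAC, log switches are also forced across instances (for example by ALTER SYSTEM ARCHIVE LOG CURRENT, or to keep the nodes' checkpoints close together), so partially filled archive logs are normal.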
Oracle 10g - switch off archive logs
Hi,
I understand that in Oracle 10g the following parameter is no longer available:
LOG_ARCHIVE_START=FALSE
Can I confirm that, for now, the only way to disable archive logging is to execute the following command in the mount stage:
alter database noarchivelog;
Is there no other parameter we can specify in the pfile to disable it permanently?
thanks
You are correct. No other parameters.
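For reference, the full sequence is (run as SYSDBA; a sketch):

shutdown immediate
startup mount
alter database noarchivelog;
alter database open;

Verify afterwards with SELECT log_mode FROM v$database;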
-
How does the "ALTER SYSTEM ARCHIVE LOG CURRENT" command work with 10g RAC? Will it apply to all the RAC instances or just the connected instance?
Since your login is "RAC_DBA" I think you should be able to test this and answer your own question ?
-
Urgent: Huge diff in total redo log size and archive log size
Dear DBAs
I have a concern regarding size of redo log and archive log generated.
Is the equation below correct?
total size of redo generated by all sessions = total size of archive log files generated
I am experiencing a situation where, when I look at the total size of redo generated by all sessions and the size of archive logs generated, there is a huge difference.
My total redo size across all sessions is 780 MB, while my archive log directory has consumed 23 GB.
Before I started measuring, I cleared the archive directory and monitored from a specific point in time.
Environment: Oracle 9i Release 2
Here is how I tracked the sizing information:
Log on as the SYS user and run the following statements.
DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
CREATE TABLE REDOSTAT
(
AUDSID NUMBER,
SID NUMBER,
SERIAL# NUMBER,
SESSION_ID CHAR(27 BYTE),
STATUS VARCHAR2(8 BYTE),
DB_USERNAME VARCHAR2(30 BYTE),
SCHEMANAME VARCHAR2(30 BYTE),
OSUSER VARCHAR2(30 BYTE),
PROCESS VARCHAR2(12 BYTE),
MACHINE VARCHAR2(64 BYTE),
TERMINAL VARCHAR2(16 BYTE),
PROGRAM VARCHAR2(64 BYTE),
DBCONN_TYPE VARCHAR2(10 BYTE),
LOGON_TIME DATE,
LOGOUT_TIME DATE,
REDO_SIZE NUMBER
)
TABLESPACE SYSTEM
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
GRANT SELECT ON REDOSTAT TO PUBLIC;
CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
BEFORE LOGOFF
ON DATABASE
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO SYS.REDOSTAT
(AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
WHERE
A.SID = B.SID
AND
B.STATISTIC# = C.STATISTIC#
AND
C.NAME = 'redo size'
AND
A.AUDSID = sys_context ('USERENV', 'SESSIONID');
COMMIT;
END TR_SESS_LOGOFF;
/
Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size. This is at a time when no other user is logged in except myself.
Is there anything wrong with the query for collecting redo information, or are there some hidden processes which don't provide redo information on a session basis?
I have seen the similar implementation as above at many sites.
Kindly provide a mechanism by which I can trace which user generated how much redo (or archive log) on a session basis. I want to track which users/processes are causing high redo generation.
If I don't find a solution I will raise an SR with Oracle.
Thanks
[V]
You can query v$sess_io, column BLOCK_CHANGES, to find out which session is generating how much redo.
The following query gives you the session redo statistics:
select a.sid, b.name, sum(a.value)
from   v$sesstat a, v$statname b
where  a.statistic# = b.statistic#
and    b.name like '%redo%'
and    a.value > 0
group  by a.sid, b.name;
If you want, you can look only at the redo size for all the current sessions.
Jaffar -
Deletion of archive log using RMAN Target in Oracle 10g
Hi All,
I recently noticed that the archive logs occupy 102 GB of the drive and I need to free up some space. When I tried to connect to the RMAN catalog using
connect catalog user@db
I got this error message:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06445: cannot connect to recovery catalog after NOCATALOG has been used
But I was able to get through using
connect target user@db
My query is: can I use
RMAN> delete noprompt expired archivelog all;
as given in [how to delete archive log file |https://forums.oracle.com/forums/thread.jspa?threadID=2321257&start=0&tstart=0]?
Or shall I take any other approach to delete the obsolete archive logs?
Regards,
*009*
Hello;
I recently noticed that the archive logs occupy 102 GB of the drive and I need to free up some space. When I tried to connect to the RMAN catalog using
connect catalog user@db
I got this error message:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06445: cannot connect to recovery catalog after NOCATALOG has been used
But I was able to get through using
connect target user@db
Are you trying to connect it through Grid control ? If connecting through a script, then please post its contents.
Make sure you are not connecting to the target database using "nocatalog".
rman target sys/<pwd_of_target>@<target_db> catalog <catalog_schema>/<catalog_schema_pwd>@<catalog_db>
My Query is can I use
RMAN> delete noprompt expired archivelog all;
as given in [how to delete archive log file |https://forums.oracle.com/forums/thread.jspa?threadID=2321257&start=0&tstart=0]
Or shall I take any other approach to delete the obsolete archive logs?
Regards,
*009*
Are these archives backed up? The command above only updates the RMAN repository in case the archives were deleted manually at the OS level.
I would recommend you to backup these archives and then delete them.
RMAN> backup archivelog all not backed up 1 times delete input;
If these archives are already backed up and have not been deleted from disk, then you can use the command below:
RMAN> delete force noprompt archivelog all completed before 'SYSDATE-n';
where "n" is the number of days behind the current date. Make sure to verify that these archives have been backed up before deleting them.
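Before deleting, it is also worth a crosscheck so that the RMAN repository matches what is actually on disk (standard RMAN commands):

RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;

"delete expired" only removes repository records for files the crosscheck found missing; it does not touch files that still exist.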
Regards,
Shivananda -
Oracle recommended location for archive logs in oracle 10g rac
Hello All,
We would like to know the Oracle-recommended location for the archive logs in Oracle 10g RAC. We are using ASM.
Thanks...
user4487322 wrote:
thanks. Is it the recommended setting if we go for a DR setup? I mean archive logs in ASM.
If you can use Data Guard, the archive log copy to the standby system is handled by Oracle, and it supports ASM.
Just remember: whatever your strategy, the archive logs must be in a SHARED location (one that all nodes can read from and write to).
How to recover a datafile in Oracle 10g...? No backups and no archive logs
All,
I need to recover datafile 2, which belongs to the undo tablespace and is in RECOVER state.
The bad thing is that we don't have a backup at all and we don't have archive logs (archive logging is disabled in the database)...
In this situation, how can I recover the datafile?
SQL> select a.file#,a.name,a.status from v$datafile a,v$tablespace b where a.ts#=b.ts#;
FILE# NAME STATUS
1 /export/home/oracle/flexcube/product/10.2.0/db_1/oradata/bwfcc73/system01.dbf SYSTEM
*2 /export/home/oracle/logs/bw/undotbs01.dbf RECOVER*
3 /export/home/oracle/flexcube/product/10.2.0/db_1/oradata/bwfcc73/sysaux01.dbf ONLINE
4 /export/home/oracle/datafiles/bw/bwfcc73.dbf ONLINE
5 /export/home/oracle/datafiles/bw/bwfcc73_01.dbf ONLINE
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 4940
Current log sequence 4942
Hi,
First of all, you should open a ticket with Oracle Support and explore the options.
You can use this note to fix it:
RECOVERING FROM A LOST DATAFILE IN A UNDO TABLESPACE [ID 1013221.6]
If you are unable to drop the undo tablespace because an undo segment is in "needs recovery" status,
you can upload the following trace file when opening the ticket:
SQL>Alter session set tracefile_identifier='corrupt';
SQL>Alter system dump undo header "<name of undo segment in recover status>";
Go to udump
ls -lrt *corrupt*
Upload this trace file
Also upload the alert log file.
Regards,
Levi Pereira
Edited by: Levi Pereira on Nov 29, 2011 1:58 PM -
Corruption of archived logs after redo resizing - URGENT
Hello all,
We have a RAC 9i 2-node DB, running on RHEL 3 machines.
We resized our redo logs from 100 MB to 1 GB to avoid "Can not allocate log, archival required" messages.
The operation was successful, but now we are facing problems with the ext3 filesystem holding the archived logs.
Each node has its own archive location, mounted only once.
When we run "ls" in the archive destination, we get "ls: Input/output error" messages.
RMAN also complains about corrupted archived logs.
The day before yesterday we reformatted one archive destination, and today the messages appeared in the other one.
Do you have any suggestion?
Jonathan
Did the problem occur right after increasing the log size? Then the first thing to do would be to decrease the log size, but add more log groups to fix the "Can not allocate..." problem.
If "ls" doesn't work, your file system is stuffed. Reformat the archive destination and do simple file copies to verify its integrity before putting the archive logs back on it.
That's just for starters...