Very large bdump file sizes, how to solve?

Hi gurus,
I keep running out of disk space. After checking, I found the cause is oraclexe/admin/bdump, which currently holds 3.2 GB, while my database is very small, holding only about 10 MB of data.
This didn't happen before; it started only recently.
I don't know why. I have deleted some old files in that folder, but today I found it is still very large compared to my database.
I am running an APEX application on XE. The application works well and we haven't seen anything wrong, except that the bdump directory is very large.
Any tips on how to solve this? Thanks.
Here is the content of my alert_xe.log file:
Thu Jun 03 16:15:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5600.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:15:48 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5452
Thu Jun 03 16:15:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:16:16 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:20:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:21:50 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:25:56 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:26:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:30:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:31:19 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:36:00 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:36:46 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1312
Thu Jun 03 16:36:49 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:37:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:41:51 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:42:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:46:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:47:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:51:57 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:52:35 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:56:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:57:10 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=3428
Thu Jun 03 16:57:13 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 16:57:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:02:16 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:02:48 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:07:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:08:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:12:18 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:12:41 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:17:21 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:17:34 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=5912
Thu Jun 03 17:17:37 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:18:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:22:37 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:23:01 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:27:39 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:28:02 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:32:42 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:33:07 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:37:45 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:38:40 2010
Restarting dead background process MMON
MMON started with pid=11, OS id=1660
Thu Jun 03 17:38:43 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:39:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:42:54 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=31, OS id=6116
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174259', 'KUPC$S_1_20100603174259', 0);
Thu Jun 03 17:43:38 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=32, OS id=2792
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174338', 'KUPC$S_1_20100603174338', 0);
Thu Jun 03 17:43:44 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:06 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:44:47 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=33, OS id=3492
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174448', 'KUPC$S_1_20100603174448', 0);
kupprdp: worker process DW01 started with worker id=1, pid=34, OS id=748
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM');
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5684K exceeds notification threshold (2048K)
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:45:28 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 5681K exceeds notification threshold (2048K)
Details in trace file c:\oraclexe\app\oracle\admin\xe\bdump\xe_dw01_748.trc
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
Thu Jun 03 17:48:47 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:49:17 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:53:49 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Thu Jun 03 17:54:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
Fri Jun 04 07:46:55 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:1653M/2047M, Ph+PgF:4706M/4958M, VA:1944M/2047M
Fri Jun 04 07:46:55 2010
Starting ORACLE instance (normal)
Fri Jun 04 07:47:06 2010
LICENSE_MAX_SESSION = 100
LICENSE_SESSIONS_WARNING = 80
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =33
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
System parameters with non-default values:
processes = 200
sessions = 300
license_max_sessions = 100
license_sessions_warning = 80
sga_max_size = 838860800
__shared_pool_size = 260046848
shared_pool_size = 209715200
__large_pool_size = 25165824
__java_pool_size = 4194304
__streams_pool_size = 8388608
spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
sga_target = 734003200
control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
__db_cache_size = 432013312
compatible = 10.2.0.1.0
db_recovery_file_dest = D:\
db_recovery_file_dest_size= 5368709120
undo_management = AUTO
undo_tablespace = UNDO
remote_login_passwordfile= EXCLUSIVE
dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
shared_servers = 10
job_queue_processes = 1000
audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
db_name = XE
open_cursors = 300
os_authent_prefix =
pga_aggregate_target = 209715200
PMON started with pid=2, OS id=3044
MMAN started with pid=4, OS id=3052
DBW0 started with pid=5, OS id=3196
LGWR started with pid=6, OS id=3200
CKPT started with pid=7, OS id=3204
SMON started with pid=8, OS id=3208
RECO started with pid=9, OS id=3212
CJQ0 started with pid=10, OS id=3216
MMON started with pid=11, OS id=3220
MMNL started with pid=12, OS id=3224
Fri Jun 04 07:47:31 2010
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 10 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
PSP0 started with pid=3, OS id=3048
Fri Jun 04 07:47:41 2010
alter database mount exclusive
Fri Jun 04 07:47:54 2010
Setting recovery target incarnation to 2
Fri Jun 04 07:47:56 2010
Successful mount of redo thread 1, with mount id 2601933156
Fri Jun 04 07:47:56 2010
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Fri Jun 04 07:47:57 2010
alter database open
Fri Jun 04 07:48:00 2010
Beginning crash recovery of 1 threads
Fri Jun 04 07:48:01 2010
Started redo scan
Fri Jun 04 07:48:03 2010
Completed redo scan
16441 redo blocks read, 442 data blocks need recovery
Fri Jun 04 07:48:04 2010
Started redo application at
Thread 1: logseq 1575, block 48102
Fri Jun 04 07:48:05 2010
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1575 Reading mem 0
Mem# 0 errs 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:48:07 2010
Completed redo application
Fri Jun 04 07:48:07 2010
Completed crash recovery at
Thread 1: logseq 1575, block 64543, scn 27413940
442 data blocks read, 442 data blocks written, 16441 redo blocks read
Fri Jun 04 07:48:09 2010
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=25, OS id=3288
ARC1 started with pid=26, OS id=3292
Fri Jun 04 07:48:10 2010
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 1576
Thread 1 opened at log sequence 1576
Current log# 3 seq# 1576 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Successful open of redo thread 1
Fri Jun 04 07:48:13 2010
ARC0: STARTING ARCH PROCESSES
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no FAL' ARCH
Fri Jun 04 07:48:13 2010
ARC1: Becoming the 'no SRL' ARCH
Fri Jun 04 07:48:13 2010
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
Fri Jun 04 07:48:13 2010
SMON: enabling cache recovery
ARC2 started with pid=27, OS id=3580
Fri Jun 04 07:48:17 2010
db_recovery_file_dest_size of 5120 MB is 49.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Fri Jun 04 07:48:31 2010
Successfully onlined Undo Tablespace 1.
Fri Jun 04 07:48:31 2010
SMON: enabling tx recovery
Fri Jun 04 07:48:31 2010
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=28, OS id=2412
Fri Jun 04 07:48:51 2010
Completed: alter database open
Fri Jun 04 07:49:22 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:32 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:52 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:49:57 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:54:10 2010
Shutting down archive processes
Fri Jun 04 07:54:15 2010
ARCH shutting down
ARC2: Archival stopped
Fri Jun 04 07:54:53 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:55:08 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:56:25 2010
Starting control autobackup
Fri Jun 04 07:56:27 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
Fri Jun 04 07:56:28 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_21
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_20
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_17
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_16
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_14
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_12
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_09
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_07
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_06
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\AUTOBACKUP\2009_04_03
ORA-27093: unable to delete directory
Fri Jun 04 07:56:29 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_21
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_20
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_17
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_16
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_14
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_12
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_09
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_07
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_06
ORA-27093: unable to delete directory
ORA-17624: Unable to delete directory D:\XE\BACKUPSET\2009_04_03
ORA-27093: unable to delete directory
Control autobackup written to DISK device
handle 'D:\XE\AUTOBACKUP\2010_06_04\O1_MF_S_720777385_60JJ9BNZ_.BKP'
Fri Jun 04 07:56:38 2010
Thread 1 advanced to log sequence 1577
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Fri Jun 04 07:56:56 2010
Thread 1 cannot allocate new log, sequence 1578
Checkpoint not complete
Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
Thread 1 advanced to log sequence 1578
Current log# 3 seq# 1578 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
Fri Jun 04 07:57:04 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 2208K exceeds notification threshold (2048K)
KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
Fri Jun 04 07:59:54 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Fri Jun 04 07:59:58 2010
Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
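While the root cause of the recurring ORA-00600 [kjhn_post_ha_alert0-862] is being investigated, the bdump growth itself can be kept under control by purging old trace files on a schedule. A minimal sketch in Python, assuming the bdump path from the post and a hypothetical 7-day retention (alert_xe.log is deliberately left alone):

```python
import os
import time

# Path from the post; the 7-day retention is an assumption, adjust to taste.
BDUMP = r"c:\oraclexe\app\oracle\admin\xe\bdump"
RETENTION_DAYS = 7

def purge_old_traces(bdump_dir, retention_days):
    """Delete *.trc files older than retention_days; return deleted names."""
    cutoff = time.time() - retention_days * 86400
    deleted = []
    for name in sorted(os.listdir(bdump_dir)):
        if not name.lower().endswith(".trc"):
            continue  # never touch alert_xe.log or anything else
        path = os.path.join(bdump_dir, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(name)
    return deleted

if __name__ == "__main__" and os.path.isdir(BDUMP):
    print(purge_old_traces(BDUMP, RETENTION_DAYS))
```

If your configuration allows it, the size of each individual trace file can also be capped from SQL*Plus with `ALTER SYSTEM SET max_dump_file_size='10M' SCOPE=BOTH;`, though that does not stop new trace files from being created by the recurring error.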

Hi gurus,
There is an ORA-00600 error in the large .trc files, as shown below. This is only part of the file, which is more than 45 MB in size:
xe_mmon_4424.trc
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_4424.trc
Fri Jun 04 17:03:22 2010
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Windows XP Version V5.1 Service Pack 3
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:992M/2047M, Ph+PgF:3422M/4958M, VA:1011M/2047M
Instance name: xe
Redo thread mounted by this instance: 1
Oracle process number: 11
Windows thread id: 4424, image: ORACLE.EXE (MMON)
*** SERVICE NAME:(SYS$BACKGROUND) 2010-06-04 17:03:22.265
*** SESSION ID:(284.23) 2010-06-04 17:03:22.265
*** 2010-06-04 17:03:22.265
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
Current SQL statement for this session:
BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;
----- PL/SQL Call Stack -----
object line object
handle number name
41982E80 418 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 552 package body SYS.DBMS_HA_ALERTS_PRVT
41982E80 305 package body SYS.DBMS_HA_ALERTS_PRVT
419501A0 1 anonymous block
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedst+38           CALLrel  ksedst1+0 0 1
ksedmp+898          CALLrel  ksedst+0 0
ksfdmp+14           CALLrel  ksedmp+0 3
_kgerinv+140         CALLreg  00000000             8EF0A38 3
kgeasnmierr+19      CALLrel  kgerinv+0 8EF0A38 6610020 3672F70 0
6538808
kjhnpost_ha_alert CALLrel _kgeasnmierr+0       8EF0A38 6610020 3672F70 0
0+2909
__PGOSF57__kjhn_pos CALLrel kjhnpost_ha_alert 88 B21C4D0 B21C4D8 B21C4E0
t_ha_alert_plsql+43 0+0 B21C4E8 B21C4F0 B21C4F8
8 B21C500 B21C50C 0 FFFFFFFF 0
0 0 6
_spefcmpa+415        CALLreg  00000000            
spefmccallstd+147   CALLrel  spefcmpa+0 65395B8 16 B21C5AC 653906C 0
pextproc+58         CALLrel  spefmccallstd+0 6539874 6539760 6539628
65395B8 0
__PGOSF302__peftrus CALLrel _pextproc+0         
ted+115
_psdexsp+192         CALLreg  00000000             6539874
_rpiswu2+426         CALLreg  00000000             6539510
psdextp+567         CALLrel  rpiswu2+0 41543288 0 65394F0 2 6539528
0 65394D0 0 2CD9E68 0 6539510
0
_pefccal+452         CALLreg  00000000            
pefcal+174          CALLrel  pefccal+0 6539874
pevmFCAL+128 CALLrel _pefcal+0           
pfrinstrFCAL+55 CALLrel pevmFCAL+0 AF74F48 3DFB92B8
pfrrunno_tool+56 CALL??? 00000000 AF74F48 3DFBB728 AF74F84
pfrrun+781          CALLrel  pfrrun_no_tool+0 AF74F48 3DFBB28C AF74F84
plsqlrun+738 CALLrel _pfrrun+0            AF74F48
peicnt+247          CALLrel  plsql_run+0 AF74F48 1 0
kkxexe+413          CALLrel  peicnt+0
opiexe+5529         CALLrel  kkxexe+0 AF7737C
kpoal8+2165         CALLrel  opiexe+0 49 3 653A4FC
_opiodr+1099         CALLreg  00000000             5E 0 653CBAC
kpoodr+483          CALLrel  opiodr+0
_xupirtrc+1434       CALLreg  00000000             67384BC 5E 653CBAC 0 653CCBC
upirtrc+61          CALLrel  xupirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpurcsc+100         CALLrel  upirtrc+0 67384BC 5E 653CBAC 653CCBC
653D990 60FEF8B8 653E194
6736CD8 1 0 0
kpuexecv8+2815      CALLrel  kpurcsc+0
kpuexec+2106        CALLrel  kpuexecv8+0 673AE10 6736C4C 6736CD8 0 0
653EDE8
OCIStmtExecute+29   CALLrel  kpuexec+0 673AE10 6736C4C 673AEC4 1 0 0
0 0 0
kjhnmmon_action+5 CALLrel _OCIStmtExecute+0    673AE10 6736C4C 673AEC4 1 0 0
26 0 0
kjhncheck_ha_reso CALLrel kjhnmmon_action+0 653EFCC 3E
urces+140
kebmronce_dispatc CALL??? 00000000
her+630
kebmronce_execute CALLrel kebmronce_dispatc
+12 her+0
_ksbcti+788          CALLreg  00000000             0 0
ksbabs+659          CALLrel  ksbcti+0
kebmmmon_main+386 CALLrel _ksbabs+0            3C5DCB8
_ksbrdp+747          CALLreg  00000000             3C5DCB8
opirip+674          CALLrel  ksbrdp+0
opidrv+857          CALLrel  opirip+0 32 4 653FEBC
sou2o+45            CALLrel  opidrv+0 32 4 653FEBC
opimaireal+227 CALLrel _sou2o+0             653FEB0 32 4 653FEBC
opimai+92           CALLrel  opimai_real+0 3 653FEE8
BackgroundThreadSt  CALLrel  opimai+0
art@4+422
7C80B726 CALLreg 00000000
--------------------- Binary Stack Dump ---------------------
========== FRAME [1] (_ksedst+38 -> _ksedst1+0) ==========
Dump of memory from 0x065386DC to 0x065386EC
65386D0 065386EC [..S.]
65386E0 0040467B 00000000 00000001 [{F@.........]
========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
Dump of memory from 0x065386EC to 0x065387AC
65386E0 065387AC [..S.]
65386F0 00403073 00000000 53532E49 20464658 [[email protected] ]
6538700 54204D41 0000525A 00000000 08EF0EC0 [AM TZR..........]
6538710 6072D95A 08EF0EC5 03672F70 00000017 [Z.r`....p/g.....]
6538720 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
6538740 00000000 00000000 00000000 00000017 [................]
6538750 08EF0B3C 08EF0B34 03672F70 08F017F0 [<...4...p/g.....]
6538760 603AA0D3 065387A8 00000001 00000000 [..:`..S.........]
6538770 00000000 00000000 00000001 00000000 [................]
6538780 00000000 08EF0A38 06610020 031E1D20 [....8... .a. ...]
6538790 00000000 065386F8 08EF0A38 06538D38 [......S.8...8.S.]
65387A0 0265187C 031C8860 FFFFFFFF [|.e.`.......]
========== FRAME [3] (_ksfdmp+14 -> _ksedmp+0) ==========
The file keeps growing. Though I have deleted a lot of it, here are the sizes I recorded:
time     size
15:23    795 MB
16:55    959 MB
17:01    970 MB
17:19    990 MB
Any solution for this?
Thanks!!
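For a sense of scale, the sizes recorded above work out to roughly 100 MB of trace written per hour. A quick sketch of the arithmetic (times and sizes taken from the post):

```python
from datetime import datetime

# Times and sizes recorded in the post, same day.
samples = [("15:23", 795), ("16:55", 959), ("17:01", 970), ("17:19", 990)]

def growth_mb_per_hour(samples):
    """Average growth rate between the first and last sample."""
    fmt = "%H:%M"
    t0 = datetime.strptime(samples[0][0], fmt)
    t1 = datetime.strptime(samples[-1][0], fmt)
    hours = (t1 - t0).total_seconds() / 3600.0
    return (samples[-1][1] - samples[0][1]) / hours

if __name__ == "__main__":
    print(round(growth_mb_per_hour(samples)))  # roughly 100 MB/hour
```

At that rate the directory fills multiple gigabytes per day, which matches the 3.2 GB observed.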

Similar Messages

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: If the image is the same and only the text is different why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4GHz, 1GB RAM, 256MB video card) I have taken a postcard PDF (the backside), placed it in a document, then drawn a text box. Then I select a data source and put the fields I wish to print (name, address, ZIP, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created with four postcards per page and the mailing data on each card, I go to print. When I print the file it spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time, and that still takes forever. So again my question is: if the image is the same and only the text is different, why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, and still got the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID : 919543
    Last Review : June 7, 2006
    Revision : 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because Graphics Device Interface (GDI) does not compress raster data when the GDI processes EMF spool files and generates EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
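    The scaling described above (raster data grows with the square of the dpi) can be sketched as follows; the dpi values are illustrative, not from the article:

```python
def emf_raster_scale(base_size_mb, base_dpi, new_dpi):
    """Raster data size scales with the square of the dpi, per the article."""
    return base_size_mb * (new_dpi / base_dpi) ** 2

# Doubling the dpi quadruples the raster data:
print(emf_raster_scale(1.0, 300, 600))  # → 4.0
```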
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print processor-based features, such as the following:
    N-up
    Watermark
    Booklet printing
    Driver collation
    Scale-to-fit
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.

  • Very large Keynote file size

    One of my Keynote files has grown to 704 MB and now takes forever to save. How can I reduce the size of this file? I suspect some photos in the file are larger in MB size then they need to be.
    Thanks

    You'd need to try exporting your images from iPhoto as smaller images before using them in Keynote. I'm not sure if there's a simple way to compress the images that are already in Keynote.

  • I need to sort very large Excel files and perform other operations.  How much faster would this be on a MacPro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business.  Money is tight.  I have some very large Excel files (~200MB) that I need to sort and perform logic operations on.  I currently use a MacBookPro (i7 core, 2.6GHz, 16GB 1600 MHz DDR3) and I am thinking about buying a multicore MacPro.  Some of the operations take half an hour to perform.  How much faster should I expect these operations to happen on a new MacPro?  Is there a significant speed advantage in the 6 core vs 4 core?  Practically speaking, what are the features I should look at and what is the speed bump I should expect if I go to 32GB or 64GB?  Related to this I am using a 32 bit version of Excel.  Is there a 64 bit spreadsheet that I can us on a Mac that has no limit on column and row size?

    Grant Bennet-Alder,
    It’s funny you mentioned using Activity Monitor.  I use it all the time to watch when a computation cycle is finished so I can avoid a crash.  I keep it up in the corner of my screen while I respond to email or work on a grant.  Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so.  As long as I leave Excel alone while it is working it will not crash.  I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application.  That is clearly a problem for this kind of work.  Is there any work around for this?   It seems like a 64-bit spreadsheet would help.  I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns.  I tried it out on my MacBook Pro but my files don’t fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • How can NI FBUS Monitor display very large recorded files

    NI FBUS Monitor version 3.0.1 outputs an "Out of memory" error message if I try to load a large recorded file of size 272 MB. Is there any combination of operating system (possibly Vista32 or Vista64) and/or physical memory size where NI FBUS Monitor can display such large recordings? Are there any patches, workarounds, or tools to display very large recorded files?

    Hi,
    NI-FBUS Monitor does not impose a limit on the maximum record file size. The amount of physical memory in the system is one of the most important factors affecting the loading of large record files: Monitor tries to load the entire file into memory when the file is opened.
    272 MB is a really large file. To open it, your system must have sufficient physical memory available; otherwise an "Out of memory" error will occur.
    I would recommend you do not use Monitor to open a file larger than 100 MB. Loading too large a file will consume system memory quickly and degrade performance.
    Message Edited by Vince Shen on 11-30-2009 09:38 PM
    Feilian (Vince) Shen

  • Large PDF file sizes when exporting from InDesign

    Hi,
    I was wondering if anyone knew why some PDF file sizes are so large when exporting from ID.
    I create black and white user manuals with ID CS3. We post these online, so I try to get the file size down as much as possible.
    There is only one .psd image in each manual. The content does not have any photographs, just Illustrator .eps diagrams and line drawings. I am trying to figure out why some PDF file sizes are so large.
    Also, why the file sizes are so different.
    For example, I have one ID document that is 3MB.
    Exporting it at the smallest file size, the PDF file comes out at 2MB.
    Then I have another ID document that is 10MB.
    Exporting to PDF is 2MB (the same size as the smaller ID document)... this one has many more .eps's in it and a lot more pages.
    Then I have another one where the ID size is 8MB and the PDF is 6MB. Why is this one so much larger than the one from the 10MB ID document?
    Any ideas on why this is happening and/or how I can reduce the file size?
    I've tried adjusting the export compression and other settings but that didn't work.
    I also tried to reduce them after the fact in Acrobat to see what would happen, but it doesn't reduce it all that much.
    Thanks for any help,
    Cathy

    > Though, the sizes of the .eps's are only about 100K to 200K in size and they are linked, not embedded.
    But they're embedded in the PDF.
    > It's just strange though because our marketing department has an 80-page full-color catalog that, when exported, is only 5MB. Their ID document uses many very large .tif files. So, I am leaning toward it being an .eps/.ai issue??
    Issue implies there's something wrong, but I think this is just the way it's supposed to work. Line drawings, while usually fairly compact, cannot be lossily compressed. The marketing department, though, may compress their very large TIFF files as much as they like (with a corresponding loss of quality). It's entirely possible to compress bitmaps to a smaller size than the drawings those bitmaps were made from. You could test this yourself. Just open a few of your EPS drawings in Photoshop, save as TIFF, place in ID, and try various downsampling schemes. If you downsample enough, you'll get the size of the PDF below a PDF that uses the same graphics as line-drawing EPS files. But you may have to downsample them beyond recognition...
    Kenneth Benson
    Pegasus Type, Inc.
    www.pegtype.com

  • Best data Structor for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and then it has lots of methods to make manipulating and working with the CSV file simpler: operations like copying a column, eliminating rows, performing some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading them into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amounts of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would need to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
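    If row-by-row operations are enough, a third option besides an in-memory array and a RandomAccessFile is to stream the file one line at a time, so memory use stays flat no matter how big the CSV gets. A minimal sketch in Java (the class name and the particular column operation are hypothetical examples, not from the thread):

```java
import java.io.*;

public class CsvStreamer {
    // Doubles every value in the given column, streaming row by row
    // so the whole file never sits in memory at once.
    public static void doubleColumn(BufferedReader in, Writer out, int col) throws IOException {
        String line;
        while ((line = in.readLine()) != null) {
            String[] fields = line.split(",", -1); // -1 keeps trailing empty fields
            if (col < fields.length) {
                fields[col] = String.valueOf(Double.parseDouble(fields[col]) * 2);
            }
            out.write(String.join(",", fields));
            out.write('\n');
        }
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new StringReader("a,1\nb,2\n"));
        StringWriter out = new StringWriter();
        doubleColumn(in, out, 1);
        System.out.print(out); // a,2.0 then b,4.0
    }
}
```

    Operations that need the whole file at once (sorting, say) would still point toward the relational-database route suggested above.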

  • Large .bpel file size vs performance

    How does a large .bpel file size affect performance? Say I have a process of 0.9 MB with around 10,000 lines; how does this affect instance creation, fetching, and message creation during the process life cycle?
    Edited by: arababah on Mar 8, 2010 7:23 AM

    Johnk93 wrote:
    MacDLS,
    I recently did a little house-cleaning on my startup drive (only 60Gb) and now have about 20Gb free, so I don't think that is the problem.
    It's probably not a very fast drive in the first place...
    I know that 5MB isn't very big, but for some reason it takes a lot longer to open these scanned files in photoshop (from aperture) than the 5MB files from my camera. Any idea why this is?
    Have a look at the file size of one of those externally edited files for a clue - it won't be 5MB. When Aperture sends a file out for editing, it creates either a PSD or an uncompressed TIFF after applying any image adjustments that you've applied in Aperture, and sends that out. Depending on the settings in Aperture's preferences this will be in either 8-bit or 16-bit.
    As a 16-bit uncompressed TIFF, a 44 megapixel image weighs in at a touch over 150MB...
    Ian

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to go to nodes at any depth and access the child elements or text elements of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and hence is space-consuming. Nor does SAX work for me, as every time there is a click on any of the nodes, my SAX parser parses the whole document for the node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to be used for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it, then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.

  • Large folio file size

    We are half way through a book that comprises 100 single page articles. However it is already nearly 500 MB and this isn't sustainable.
    Does the following affect the file size:
    Is the folio file size affected by the number of individual articles? Would it be smaller if we had stacks of, say, 10 articles each with 10 pages rather than 100 single pages?
    Every page has a two-picture (JPG) object state; the first image is an extreme enlargement of the image that is visible for only about a second before the full-frame image appears. Each page has a caption using a pan overlay that can be dragged into the page using a small tab. Does an object state increase the file size over and above the images contained within it?
    We have reduced the JPGs to the minimum acceptable quality and there is no video in the folio.
    Any ideas would be much appreciated.

    800 MB worth of video sounds crazy. Of course, a high number of videos can bring you to that. I have seen bigger DPS apps; I think the Apple limit lies around 4 GB (remember, that is more than 25% of a whole 16 GB iPad).
    The MP4 video codec does a really good job while keeping the quality high, and the human eye is more forgiving of quality when it comes to moving images compared to still imagery.
    I wrote a collection of tips and ideas on how to reduce your file size:
    http://digitalpublishing.tumblr.com/post/11650748389/reducing-folio-filesize
    —Johannes
    (sent from a mobile. fat fingers. beware!)
  • Today, I randomly happened to have less than 1GB of hard drive space left. I found very large "frame" files, what are they?

    I found very large "frame" files, what are they & can I delete them? (See screenshot). I'm a (17 today)-year-old film-maker and can't edit in FCP X anymore because I "don't have enough space". Every time I try to delete one, another identical file creates itself...
    If that can help: I just upgraded to FCP 10.0.4, and every time I launch it, it asks to convert my current projects (I knew it would do it at least once) and I accept, but every time I have to get it done AGAIN. My computer is slower than ever and I have a deadline this Friday.
    I also just upgraded to Mac OS X 10.7.4, and the problem hasn't been here for long, so it may be linked...
    Please help me!
    Alex

    The first thing you should do is to back up your personal data. It is possible that your hard drive is failing. If you are using Time Machine, that part is already done.
    Then, I think it would be easiest to reformat the drive and restore. If you ARE using Time Machine, you can start up from your Leopard installation disc. At the first Installer screen, go up to the menu bar, and from the Utilities menu, first select to run Disk Utility. Completely erase the internal drive using the Erase tab; make sure you have the internal DRIVE (not the volume) selected in the sidebar, and make sure you are NOT erasing your Time Machine drive by mistake. After erasing, quit Disk Utility, and select the command to restore from backup from the same Utilities menu. Using that Time Machine volume restore utility, you can restore it to a time and date immediately before you went on vacation, when things were working.
    If you are not using Time Machine, you can erase and reinstall the OS (after you have backed up your personal data). After restarting from the new installation and installing all the updates using Software Update, you can restore your personal data from the backup you just made.

  • Have a very large text file, and need to read lines in the middle.

    I have very large txt files (around several hundred megabytes), and I want to be able to skip and read specific lines. More specifically, say the file looks like:
    scan 1
    scan 2
    scan 3
    ...
    scan 100,000
    I want to be able to skip ahead and move the file reader immediately to scan 50,000, rather than having to read through scans 1-49,999.
    Thanks for any help.

    If the lines are all different lengths (as in your example) then there is nothing you can do except to read and ignore the lines you want to skip over.
    If you are going to be doing this repeatedly, you should consider reformatting those text files into something that supports random access.
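    The index idea above can be sketched with java.io.RandomAccessFile: one sequential pass records where every line starts, and from then on any line is a single seek away. (The class and method names here are mine, purely illustrative; note also that RandomAccessFile.readLine is unbuffered, so building the index this way is slow for multi-hundred-megabyte files, and a buffered offset scan would be the production variant.)

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

public class LineIndex {
    // One sequential pass records the byte offset at which each line starts;
    // after that, any line is one seek() away instead of a scan from the top.
    public static List<Long> buildIndex(RandomAccessFile raf) throws IOException {
        List<Long> offsets = new ArrayList<>();
        raf.seek(0);
        offsets.add(0L);
        while (raf.readLine() != null) {
            offsets.add(raf.getFilePointer());
        }
        offsets.remove(offsets.size() - 1); // final entry points past the last line
        return offsets;
    }

    public static String readLineAt(RandomAccessFile raf, List<Long> index, int lineNo) throws IOException {
        raf.seek(index.get(lineNo));
        return raf.readLine();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("scans", ".txt");
        f.deleteOnExit();
        try (PrintWriter w = new PrintWriter(f)) {
            for (int i = 1; i <= 100; i++) w.println("scan " + i);
        }
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            List<Long> index = buildIndex(raf);
            System.out.println(readLineAt(raf, index, 49)); // prints "scan 50"
        }
    }
}
```

    The index only has to be rebuilt when the file changes, so the one full read is amortized over all subsequent lookups.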

  • What are the best tools for opening very large XML files and examining the tree and confirming they are valid?

    I am generating some very large XML files (600,000+ lines, 50MB+ characters). I finally have them all being valid XML and valid UTF-8.
    But the files are so large Safari and Chrome will often not open them. FireFox will though.
    Instead of these browsers, I was wondering if there are any other recommended Mac apps for opening and viewing the XML, getting an error message if it is not valid for some reason, and examining the XML tree?
    I opened the file in the default app for XML which is Xcode, but that is just like opening it in a plain text editor. You can't expand/collapse the XML tree like you can with a browser, and it doesn't report errors.
    Thanks,
    Doug

    Hi Tom,
    I had not seen that list. I'll look it over.
    I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
    For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
    By the by, some of those editors are quite pricey!
    doug

  • How can I copy a very large iMovie file (10.08 GB) onto a disc?

    I have an iMovie project of my daughter's wedding and I would like to put it on a disc for her; however, it is so big (10.08 GB) that I can't find a DVD disc big enough. I bought a 16 GB USB drive thinking I could copy it to that, but my iMac says that it is too big to copy. Can anyone help, please?
    Also, is it possible to save an iMovie file as a different type of file? If so, how do I do it?

    If you want your daughter to be able to play the movie in a DVD player, you need DVD authoring software. Get a copy of iDVD (discontinued but you can still find copies of the old iLife09 or iLife11 package that include iDVD) or Roxio Toast.
    Export your iMovie project to a QuickTime movie file, then import the resulting QT movie into iDVD or Toast and create your DVD.  They will encode & compress your movie to fit on a DVD and be playable in a DVD player.
    Note: you can fit up to about 2 hours of video on a single-layer DVD disk and up to about 4 hours on a dual-layer DVD disk.  The iMovie project file size is not the correct measure here, the length in time of the video is.

  • How do I delete a very Large "Other" File

    I erroneously loaded a ton of music files to the "Other" file on my iPod classic 80 GB. What is the easiest way to delete this large file from the iPod?

    This method deleted the huge "Other" file, but of course results in the long-term issue of re-loading the iPod with a large original file. On an external drive, I have my iTunes music folder and other music in MP3/CD formats. I've tried to load these songs to the iPod, but it seems to want to place them in the "Other" file and not music. I have been manually loading these songs from the external drive and sliding them right into the music library on the iTunes page. Is this the right way to do this?
