Very large Keynote file size

One of my Keynote files has grown to 704 MB and now takes forever to save. How can I reduce the size of this file? I suspect some of the photos in the file are larger (in MB) than they need to be.
Thanks

You'd need to try exporting your images from iPhoto as smaller images before using them in Keynote. I'm not sure if there's a simple way to compress the images that are already in Keynote.
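
If you have a lot of photos to shrink, batch-resizing the exported copies before they go into Keynote is one way to do it. Below is a rough sketch in Java using only the built-in ImageIO classes; the file names and the 1920-pixel target width are placeholders, not anything Keynote requires:

import javax.imageio.ImageIO;
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;

public class DownscaleForKeynote {
    public static void main(String[] args) throws Exception {
        // Hypothetical source file and target width; adjust to your own exports.
        File src = new File("photo-export.jpg");
        int targetWidth = 1920;                  // roughly slide resolution

        BufferedImage original = ImageIO.read(src);
        int targetHeight = original.getHeight() * targetWidth / original.getWidth();

        // Scale smoothly and redraw into a new RGB image.
        Image scaled = original.getScaledInstance(targetWidth, targetHeight, Image.SCALE_SMOOTH);
        BufferedImage out = new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(scaled, 0, 0, null);
        g.dispose();

        // Write a smaller JPEG to import into Keynote instead of the original.
        ImageIO.write(out, "jpg", new File("photo-export-small.jpg"));
    }
}

Swapping the downscaled copies in for the originals is what actually shrinks the presentation, since Keynote keeps whatever resolution you hand it.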

Similar Messages

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: If the image is the same and only the text is different why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4 GHz, 1 GB RAM, 256 MB video card), I have taken a postcard PDF (the backside), placed it in a document, then drawn a text box. Then I select a data source and put the fields I wish to print (Name, address, zip, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created with four postcards per page and the mailing data on each card, I go to print. When I print, the file spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time, and that still takes FOREVER. So again my question is: if the image is the same and only the text is different, why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, and still the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID : 919543
    Last Review : June 7, 2006
    Revision : 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because the Graphics Device Interface (GDI) does not compress raster data when it generates and processes EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print processor-based features, such as the following: N-up, Watermark, Booklet printing, Driver collation, and Scale-to-fit.
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.

  • Very large bdump file sizes, how to solve?

    Hi gurus,
    I keep finding that my disk space is not enough. After checking, the culprit is oraclexe/admin/bdump: there is currently 3.2 GB in it, while my database is very small, holding only about 10 MB of data.
    It didn't happen before, only recently.
    I don't know why it happens. I have deleted some old files in that folder, but today I found it is still very large compared to my database.
    I am running an APEX application with XE; the application works well and we haven't seen anything wrong, except that the bdump folder is very big.
    Any tips to solve this? Thanks.
    Here is my alert_xe.log file content:
    Thu Jun 03 16:15:43 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5600.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:15:48 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=5452
    Thu Jun 03 16:15:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:16:16 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:20:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:21:50 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:25:56 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:26:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:30:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:31:19 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:36:00 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5452.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:36:46 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=1312
    Thu Jun 03 16:36:49 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:37:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:41:51 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:42:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:46:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:47:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:51:57 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:52:35 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:56:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1312.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:57:10 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=3428
    Thu Jun 03 16:57:13 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 16:57:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:02:16 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:02:48 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:07:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:08:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:12:18 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:12:41 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:17:21 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3428.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:17:34 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=5912
    Thu Jun 03 17:17:37 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:18:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:22:37 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:23:01 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:27:39 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:28:02 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:32:42 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:33:07 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:37:45 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_5912.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:38:40 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=1660
    Thu Jun 03 17:38:43 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:39:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:42:54 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=31, OS id=6116
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174259', 'KUPC$S_1_20100603174259', 0);
    Thu Jun 03 17:43:38 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=32, OS id=2792
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174338', 'KUPC$S_1_20100603174338', 0);
    Thu Jun 03 17:43:44 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:44:06 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:44:47 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=33, OS id=3492
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM', 'KUPC$C_1_20100603174448', 'KUPC$S_1_20100603174448', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=34, OS id=748
    to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_01', 'SYSTEM');
    Thu Jun 03 17:45:28 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 5684K exceeds notification threshold (2048K)
    KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
    Thu Jun 03 17:45:28 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 5681K exceeds notification threshold (2048K)
    Details in trace file c:\oraclexe\app\oracle\admin\xe\bdump\xe_dw01_748.trc
    KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('TABLE_T', '7')), KU$.OBJ_NUM ,KU$.ANC_OBJ.NAME ,KU$.ANC_OBJ.OWNER_NAME ,KU$.ANC_OBJ.TYPE_NAME ,KU$.BASE_OBJ.NAME ,KU$.BASE_OBJ.OWNER_NAME ,KU$.BASE_OBJ.TYPE_NAME ,KU$.SPARE1 ,KU$.XMLSCHEMACOLS ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'TABLE' ,KU$.PROPERTY ,KU$.SCHEMA_OBJ.OWNER_NAME ,KU$.TS_NAME ,KU$.TRIGFLAG FROM SYS.KU$_FHTABLE_VIEW KU$ WHERE NOT (BITAND (KU$.PROPERTY,8192)=8192) AND NOT BITAND(KU$.SCHEMA_OBJ.FLAGS,128)!=0 AND KU$.OBJ_NU
    Thu Jun 03 17:48:47 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:49:17 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:53:49 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Thu Jun 03 17:54:28 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_1660.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
    Fri Jun 04 07:46:55 2010
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Windows XP Version V5.1 Service Pack 3
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:1653M/2047M, Ph+PgF:4706M/4958M, VA:1944M/2047M
    Fri Jun 04 07:46:55 2010
    Starting ORACLE instance (normal)
    Fri Jun 04 07:47:06 2010
    LICENSE_MAX_SESSION = 100
    LICENSE_SESSIONS_WARNING = 80
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =33
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    processes = 200
    sessions = 300
    license_max_sessions = 100
    license_sessions_warning = 80
    sga_max_size = 838860800
    __shared_pool_size = 260046848
    shared_pool_size = 209715200
    __large_pool_size = 25165824
    __java_pool_size = 4194304
    __streams_pool_size = 8388608
    spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
    sga_target = 734003200
    control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
    __db_cache_size = 432013312
    compatible = 10.2.0.1.0
    db_recovery_file_dest = D:\
    db_recovery_file_dest_size= 5368709120
    undo_management = AUTO
    undo_tablespace = UNDO
    remote_login_passwordfile= EXCLUSIVE
    dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
    shared_servers = 10
    job_queue_processes = 1000
    audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
    background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
    user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
    core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
    db_name = XE
    open_cursors = 300
    os_authent_prefix =
    pga_aggregate_target = 209715200
    PMON started with pid=2, OS id=3044
    MMAN started with pid=4, OS id=3052
    DBW0 started with pid=5, OS id=3196
    LGWR started with pid=6, OS id=3200
    CKPT started with pid=7, OS id=3204
    SMON started with pid=8, OS id=3208
    RECO started with pid=9, OS id=3212
    CJQ0 started with pid=10, OS id=3216
    MMON started with pid=11, OS id=3220
    MMNL started with pid=12, OS id=3224
    Fri Jun 04 07:47:31 2010
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 10 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    PSP0 started with pid=3, OS id=3048
    Fri Jun 04 07:47:41 2010
    alter database mount exclusive
    Fri Jun 04 07:47:54 2010
    Setting recovery target incarnation to 2
    Fri Jun 04 07:47:56 2010
    Successful mount of redo thread 1, with mount id 2601933156
    Fri Jun 04 07:47:56 2010
    Database mounted in Exclusive Mode
    Completed: alter database mount exclusive
    Fri Jun 04 07:47:57 2010
    alter database open
    Fri Jun 04 07:48:00 2010
    Beginning crash recovery of 1 threads
    Fri Jun 04 07:48:01 2010
    Started redo scan
    Fri Jun 04 07:48:03 2010
    Completed redo scan
    16441 redo blocks read, 442 data blocks need recovery
    Fri Jun 04 07:48:04 2010
    Started redo application at
    Thread 1: logseq 1575, block 48102
    Fri Jun 04 07:48:05 2010
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 1575 Reading mem 0
    Mem# 0 errs 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Fri Jun 04 07:48:07 2010
    Completed redo application
    Fri Jun 04 07:48:07 2010
    Completed crash recovery at
    Thread 1: logseq 1575, block 64543, scn 27413940
    442 data blocks read, 442 data blocks written, 16441 redo blocks read
    Fri Jun 04 07:48:09 2010
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=25, OS id=3288
    ARC1 started with pid=26, OS id=3292
    Fri Jun 04 07:48:10 2010
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    Thread 1 advanced to log sequence 1576
    Thread 1 opened at log sequence 1576
    Current log# 3 seq# 1576 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
    Successful open of redo thread 1
    Fri Jun 04 07:48:13 2010
    ARC0: STARTING ARCH PROCESSES
    Fri Jun 04 07:48:13 2010
    ARC1: Becoming the 'no FAL' ARCH
    Fri Jun 04 07:48:13 2010
    ARC1: Becoming the 'no SRL' ARCH
    Fri Jun 04 07:48:13 2010
    ARC2: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    ARC0: Becoming the heartbeat ARCH
    Fri Jun 04 07:48:13 2010
    SMON: enabling cache recovery
    ARC2 started with pid=27, OS id=3580
    Fri Jun 04 07:48:17 2010
    db_recovery_file_dest_size of 5120 MB is 49.00% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Fri Jun 04 07:48:31 2010
    Successfully onlined Undo Tablespace 1.
    Fri Jun 04 07:48:31 2010
    SMON: enabling tx recovery
    Fri Jun 04 07:48:31 2010
    Database Characterset is AL32UTF8
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=28, OS id=2412
    Fri Jun 04 07:48:51 2010
    Completed: alter database open
    Fri Jun 04 07:49:22 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:32 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:52 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:49:57 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:54:10 2010
    Shutting down archive processes
    Fri Jun 04 07:54:15 2010
    ARCH shutting down
    ARC2: Archival stopped
    Fri Jun 04 07:54:53 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:55:08 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:56:25 2010
    Starting control autobackup
    Fri Jun 04 07:56:27 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    Fri Jun 04 07:56:28 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_21
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_20
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_17
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_16
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_14
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_12
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_09
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_07
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_06
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\AUTOBACKUP\2009_04_03
    ORA-27093: unable to delete directory
    Fri Jun 04 07:56:29 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_488.trc:
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_21
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_20
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_17
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_16
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_14
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_12
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_09
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_07
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_06
    ORA-27093: unable to delete directory
    ORA-17624: unable to delete directory D:\XE\BACKUPSET\2009_04_03
    ORA-27093: unable to delete directory
    Control autobackup written to DISK device
    handle 'D:\XE\AUTOBACKUP\2010_06_04\O1_MF_S_720777385_60JJ9BNZ_.BKP'
    Fri Jun 04 07:56:38 2010
    Thread 1 advanced to log sequence 1577
    Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Fri Jun 04 07:56:56 2010
    Thread 1 cannot allocate new log, sequence 1578
    Checkpoint not complete
    Current log# 1 seq# 1577 mem# 0: D:\XE\ONLINELOG\O1_MF_1_4CT6N7TC_.LOG
    Thread 1 advanced to log sequence 1578
    Current log# 3 seq# 1578 mem# 0: D:\XE\ONLINELOG\O1_MF_3_4CT6N1SD_.LOG
    Fri Jun 04 07:57:04 2010
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 2208K exceeds notification threshold (2048K)
    KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
    Fri Jun 04 07:59:54 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Fri Jun 04 07:59:58 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_3220.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []

    Hi Gurus,
    there's an ORA-00600 error in the big .trc files, as shown below. This is only part of the file; the file is more than 45 MB in size:
    xe_mmon_4424.trc
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\xe_mmon_4424.trc
    Fri Jun 04 17:03:22 2010
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Windows XP Version V5.1 Service Pack 3
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:992M/2047M, Ph+PgF:3422M/4958M, VA:1011M/2047M
    Instance name: xe
    Redo thread mounted by this instance: 1
    Oracle process number: 11
    Windows thread id: 4424, image: ORACLE.EXE (MMON)
    *** SERVICE NAME:(SYS$BACKGROUND) 2010-06-04 17:03:22.265
    *** SESSION ID:(284.23) 2010-06-04 17:03:22.265
    *** 2010-06-04 17:03:22.265
    ksedmp: internal or fatal error
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Current SQL statement for this session:
    BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    41982E80 418 package body SYS.DBMS_HA_ALERTS_PRVT
    41982E80 552 package body SYS.DBMS_HA_ALERTS_PRVT
    41982E80 305 package body SYS.DBMS_HA_ALERTS_PRVT
    419501A0 1 anonymous block
    ----- Call Stack Trace -----
    calling call entry argument values in hex
    location type point (? means dubious value)
    ksedst+38           CALLrel  ksedst1+0 0 1
    ksedmp+898          CALLrel  ksedst+0 0
    ksfdmp+14           CALLrel  ksedmp+0 3
    _kgerinv+140         CALLreg  00000000             8EF0A38 3
    kgeasnmierr+19      CALLrel  kgerinv+0 8EF0A38 6610020 3672F70 0
    6538808
    kjhnpost_ha_alert CALLrel _kgeasnmierr+0       8EF0A38 6610020 3672F70 0
    0+2909
    __PGOSF57__kjhn_pos CALLrel kjhnpost_ha_alert 88 B21C4D0 B21C4D8 B21C4E0
    t_ha_alert_plsql+43 0+0 B21C4E8 B21C4F0 B21C4F8
    8 B21C500 B21C50C 0 FFFFFFFF 0
    0 0 6
    _spefcmpa+415        CALLreg  00000000            
    spefmccallstd+147   CALLrel  spefcmpa+0 65395B8 16 B21C5AC 653906C 0
    pextproc+58         CALLrel  spefmccallstd+0 6539874 6539760 6539628
    65395B8 0
    __PGOSF302__peftrus CALLrel _pextproc+0         
    ted+115
    _psdexsp+192         CALLreg  00000000             6539874
    _rpiswu2+426         CALLreg  00000000             6539510
    psdextp+567         CALLrel  rpiswu2+0 41543288 0 65394F0 2 6539528
    0 65394D0 0 2CD9E68 0 6539510
    0
    _pefccal+452         CALLreg  00000000            
    pefcal+174          CALLrel  pefccal+0 6539874
    pevmFCAL+128 CALLrel _pefcal+0           
    pfrinstrFCAL+55 CALLrel pevmFCAL+0 AF74F48 3DFB92B8
    pfrrunno_tool+56 CALL??? 00000000 AF74F48 3DFBB728 AF74F84
    pfrrun+781          CALLrel  pfrrun_no_tool+0 AF74F48 3DFBB28C AF74F84
    plsqlrun+738 CALLrel _pfrrun+0            AF74F48
    peicnt+247          CALLrel  plsql_run+0 AF74F48 1 0
    kkxexe+413          CALLrel  peicnt+0
    opiexe+5529         CALLrel  kkxexe+0 AF7737C
    kpoal8+2165         CALLrel  opiexe+0 49 3 653A4FC
    _opiodr+1099         CALLreg  00000000             5E 0 653CBAC
    kpoodr+483          CALLrel  opiodr+0
    _xupirtrc+1434       CALLreg  00000000             67384BC 5E 653CBAC 0 653CCBC
    upirtrc+61          CALLrel  xupirtrc+0 67384BC 5E 653CBAC 653CCBC
    653D990 60FEF8B8 653E194
    6736CD8 1 0 0
    kpurcsc+100         CALLrel  upirtrc+0 67384BC 5E 653CBAC 653CCBC
    653D990 60FEF8B8 653E194
    6736CD8 1 0 0
    kpuexecv8+2815      CALLrel  kpurcsc+0
    kpuexec+2106        CALLrel  kpuexecv8+0 673AE10 6736C4C 6736CD8 0 0
    653EDE8
    OCIStmtExecute+29   CALLrel  kpuexec+0 673AE10 6736C4C 673AEC4 1 0 0
    0 0 0
    kjhnmmon_action+5 CALLrel _OCIStmtExecute+0    673AE10 6736C4C 673AEC4 1 0 0
    26 0 0
    kjhncheck_ha_reso CALLrel kjhnmmon_action+0 653EFCC 3E
    urces+140
    kebmronce_dispatc CALL??? 00000000
    her+630
    kebmronce_execute CALLrel kebmronce_dispatc
    +12 her+0
    _ksbcti+788          CALLreg  00000000             0 0
    ksbabs+659          CALLrel  ksbcti+0
    kebmmmon_main+386 CALLrel _ksbabs+0            3C5DCB8
    _ksbrdp+747          CALLreg  00000000             3C5DCB8
    opirip+674          CALLrel  ksbrdp+0
    opidrv+857          CALLrel  opirip+0 32 4 653FEBC
    sou2o+45            CALLrel  opidrv+0 32 4 653FEBC
    opimaireal+227 CALLrel _sou2o+0             653FEB0 32 4 653FEBC
    opimai+92           CALLrel  opimai_real+0 3 653FEE8
    BackgroundThreadSt  CALLrel  opimai+0
    art@4+422
    7C80B726 CALLreg 00000000
    --------------------- Binary Stack Dump ---------------------
    ========== FRAME [1] (_ksedst+38 -> _ksedst1+0) ==========
    Dump of memory from 0x065386DC to 0x065386EC
    65386D0 065386EC [..S.]
    65386E0 0040467B 00000000 00000001 [{F@.........]
    ========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
    Dump of memory from 0x065386EC to 0x065387AC
    65386E0 065387AC [..S.]
    65386F0 00403073 00000000 53532E49 20464658 [[email protected] ]
    6538700 54204D41 0000525A 00000000 08EF0EC0 [AM TZR..........]
    6538710 6072D95A 08EF0EC5 03672F70 00000017 [Z.r`....p/g.....]
    6538720 00000000 00000000 00000000 00000000 [................]
    Repeat 1 times
    6538740 00000000 00000000 00000000 00000017 [................]
    6538750 08EF0B3C 08EF0B34 03672F70 08F017F0 [<...4...p/g.....]
    6538760 603AA0D3 065387A8 00000001 00000000 [..:`..S.........]
    6538770 00000000 00000000 00000001 00000000 [................]
    6538780 00000000 08EF0A38 06610020 031E1D20 [....8... .a. ...]
    6538790 00000000 065386F8 08EF0A38 06538D38 [......S.8...8.S.]
    65387A0 0265187C 031C8860 FFFFFFFF [|.e.`.......]
    ========== FRAME [3] (_ksfdmp+14 -> _ksedmp+0) ==========
    and the file keeps increasing. Though I have deleted a lot of it, here is what I recorded:
    time       size
    15:23      795 MB
    16:55      959 MB
    17:01      970 MB
    17:19      990 MB
    Any solution for that?
    Thanks!!
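
    Until the root cause of the ORA-00600 is addressed, the disk-space side can at least be kept in check by automating the manual cleanup described above. A minimal sketch in Java; the bdump path and the 7-day retention window are assumptions to adjust for your install:

    import java.io.File;

    public class PurgeOldTraces {
        public static void main(String[] args) {
            // Assumed bdump location and retention window; adjust for your system.
            File bdump = new File("C:/oraclexe/app/oracle/admin/xe/bdump");
            long cutoff = System.currentTimeMillis() - 7L * 24 * 60 * 60 * 1000;

            File[] traces = bdump.listFiles((dir, name) -> name.endsWith(".trc"));
            if (traces == null) return;              // directory missing or unreadable

            for (File trc : traces) {
                if (trc.lastModified() < cutoff) {
                    // Delete trace files older than the cutoff; the alert log is left alone.
                    System.out.println("Deleting " + trc.getName() + " (" + trc.length() / 1024 / 1024 + " MB)");
                    trc.delete();
                }
            }
        }
    }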

  • I need to sort very large Excel files and perform other operations.  How much faster would this be on a MacPro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business. Money is tight. I have some very large Excel files (~200 MB) that I need to sort and perform logic operations on. I currently use a MacBook Pro (i7, 2.6 GHz, 16 GB 1600 MHz DDR3) and I am thinking about buying a multicore Mac Pro. Some of the operations take half an hour to perform. How much faster should I expect these operations to happen on a new Mac Pro? Is there a significant speed advantage in the 6-core vs the 4-core? Practically speaking, what are the features I should look at, and what is the speed bump I should expect if I go to 32 GB or 64 GB? Related to this, I am using a 32-bit version of Excel. Is there a 64-bit spreadsheet that I can use on a Mac that has no limit on column and row size?

    Grant Bennet-Alder,
    It’s funny you mentioned using Activity Monitor. I use it all the time to watch when a computation cycle is finished so I can avoid a crash. I keep it up in the corner of my screen while I respond to email or work on a grant. Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding, in red) but will almost always complete the cycle if I let it go for 30 minutes or so. As long as I leave Excel alone while it is working, it will not crash. I had not thought of using Activity Monitor as you suggested. Also, I did not realize that using a 32-bit application limited me to 4 GB of memory for each application. That is clearly a problem for this kind of work. Is there any workaround for this? It seems like a 64-bit spreadsheet would help. I would love to use the new 64-bit Numbers, but the current version limits the number of rows and columns. I tried it out on my MacBook Pro but my files don’t fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • Large PDF file sizes when exporting from InDesign

    Hi,
    I was wondering if anyone knew why some PDF file sizes are so large when exporting from ID.
    I create black and white user manuals with ID CS3. We post these online, so I try to get the file size down as much as possible.
    There is only one .psd image in each manual. The content does not have any photographs, just Illustrator .eps diagrams and line drawings. I am trying to figure out why some PDF file sizes are so large.
    Also, why the file sizes are so different.
    For example, I have one ID document that is 3MB.
    Exporting it at the smallest file size, the PDF file comes out at 2MB.
    Then I have another ID document that is 10MB.
    Exporting to PDF is 2MB (the same size as the smaller ID document)... this one has many more .eps's in it and a lot more pages.
    Then I have another one where the ID size is 8MB and the PDF is 6MB. Why is this one so much larger than the one from the 10MB ID document?
    Any ideas on why this is happening and/or how I can reduce the file size?
    I've tried adjusting the export compression and other settings but that didn't work.
    I also tried to reduce them after the fact in Acrobat to see what would happen, but it doesn't reduce it all that much.
    Thanks for any help,
    Cathy

    > Though, the sizes of the .eps's are only about 100K to 200K in size and they are linked, not embedded.
    But they're embedded in the PDF.
    > It's just strange though because our marketing department has an 80-page full-color catalog that, when exported, is only 5MB. Their ID document uses many very large .tif files. So, I am leaning toward it being an .eps/.ai issue??
    Issue implies there's something wrong, but I think this is just the way
    it's supposed to work.
    Line drawings, while usually fairly compact, cannot be lossy compressed.
    The marketing department, though, may compress their very large TIFF
    files as much as they like (with a corresponding loss of quality). It's
    entirely possible to compress bitmaps to a smaller size than the
    drawings those bitmaps were made from. You could test this yourself.
    Just open a few of your EPS drawings in Photoshop, save as TIFF, place
    in ID, and try various downsampling schemes. If you downsample enough,
    you'll get the size of the PDF below a PDF that uses the same graphics
    as line drawing EPS files. But you may have to downsample them beyond
    recognition...
    Kenneth Benson
    Pegasus Type, Inc.
    www.pegtype.com

  • How can NI FBUS Monitor display very large recorded files

    NI FBUS Monitor version 3.0.1 outputs an "Out of memory" error message if I try to load a large recorded file of 272 MB. Is there any combination of operating system (possibly Vista32 or Vista64) and/or physical memory size where NI FBUS Monitor can display such large recordings? Are there any patches, workarounds, or tools to display very large recorded files?

    Hi,
    NI-FBUS Monitor does not impose a limit on the maximum record file size. The amount of physical memory in the system is one of the most important factors that affects the loading of a large record file. The Monitor will try to load the entire file into memory during the file open operation.
    272 MB is a really large file. To open the file, your system must have sufficient physical memory available; otherwise an "Out of memory" error will occur.
    I would recommend that you not use the Monitor to open a file larger than 100 MB. Loading too large a file will consume system memory quickly and degrade performance.
    Message Edited by Vince Shen on 11-30-2009 09:38 PM
    Feilian (Vince) Shen

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and then it has lots of methods to make manipulating and working with the CSV file simpler: operations like copy column, eliminate rows, perform some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading them into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would need to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
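
    If loading everything really does exhaust the heap, a common middle ground is to stream the file and keep only what the current operation needs. Here is a minimal sketch, assuming a plain comma-separated file (the name data.csv and the column index are made up for illustration), that sums one numeric column without holding the rows in memory:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CsvColumnSum {
        public static void main(String[] args) throws IOException {
            double sum = 0;
            int column = 2;                                   // hypothetical column index
            try (BufferedReader in = new BufferedReader(new FileReader("data.csv"))) {
                String line;
                while ((line = in.readLine()) != null) {      // one row in memory at a time
                    String[] fields = line.split(",", -1);    // naive split; no quoted commas
                    sum += Double.parseDouble(fields[column]);
                }
            }
            System.out.println("Column " + column + " total: " + sum);
        }
    }

    A real implementation would need a proper CSV parser for quoted fields, but the pattern of one streaming pass per operation is the point.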

  • Is There a Recommended Maximum Keynote File Size?

    I am prepping my first Keynote presentation and I'm only 20% into it, and I already have a Keynote file size of 1 GB spread across 28 slides. I'm being careful to make sure that no individual slide gets too big (some use video), but I'm worried that I'm going to end up with something so large it'll give me problems! I know I can scale down the file size but I don't think I'll gain that much. My presentation is just two days away!!!!
    btw - I'm building on an iMac 3.06 with 4 GB RAM but will be presenting on a MacBook Pro 2.66 with 4 GB RAM. Both with Core 2 Duo.
    thanks!

    Hi kwieder: Late reply, but you should've been fine. Did the presentation go ok? You could've done a test run on the MBP 2.66 before the presentation. File size shouldn't really impact the presentation, unless you're using much older computer hardware. File size is just cumbersome when you're presenting for a conference or a company and they ask to check your presentation beforehand or request a copy etc.

  • Large .bpel file size vs performance

    How does a large .bpel file size affect performance? Say I have a process of 0.9 MB with around 10,000 lines; how does this affect instance creation, fetching, and message creation during the process life cycle?
    Edited by: arababah on Mar 8, 2010 7:23 AM

    Johnk93 wrote:
    MacDLS,
    I recently did a little house-cleaning on my startup drive (only 60 GB) and now have about 20 GB free, so I don't think that is the problem.
    It's probably not a very fast drive in the first place...
    I know that 5MB isn't very big, but for some reason it takes a lot longer to open these scanned files in photoshop (from aperture) than the 5MB files from my camera. Any idea why this is?
    Have a look at the file size of one of those externally edited files for a clue - it won't be 5MB. When Aperture sends a file out for editing, it creates either a PSD or an uncompressed TIFF after applying any image adjustments that you've applied in Aperture, and sends that out. Depending on the settings in Aperture's preferences this will be in either 8-bit or 16-bit.
    As a 16-bit uncompressed TIFF, a 44 megapixel image weighs in at a touch over 150MB...
    Ian

  • Today, I randomly happened to have less than 1GB of hard drive space left. I found very large "frame" files, what are they?

    I found very large "frame" files, what are they & can I delete them? (See screenshot). I'm a (17 today)-year-old film-maker and can't edit in FCP X anymore because I "don't have enough space". Every time I try to delete one, another identical file creates itself...
    In case it helps: I just upgraded to FCP X 10.0.4, and every time I launch it, it asks to convert my current projects (I know it would do it at least once) and I accept, but every time I have to do it AGAIN. My computer is slower than ever and I have a deadline this Friday.
    I also just upgraded to Mac OS X 10.7.4, and the problem hasn't been here for long, so it may be linked...
    Please help me!
    Alex

    The first thing you should do is to back up your personal data. It is possible that your hard drive is failing. If you are using Time Machine, that part is already done.
    Then, I think it would be easiest to reformat the drive and restore. If you ARE using Time Machine, you can start up from your Leopard installation disc. At the first Installer screen, go up to the menu bar, and from the Utilities menu, first select to run Disk Utility. Completely erase the internal drive using the Erase tab; make sure you have the internal DRIVE (not the volume) selected in the sidebar, and make sure you are NOT erasing your Time Machine drive by mistake. After erasing, quit Disk Utility, and select the command to restore from backup from the same Utilities menu. Using that Time Machine volume restore utility, you can restore it to a time and date immediately before you went on vacation, when things were working.
    If you are not using Time Machine, you can erase and reinstall the OS (after you have backed up your personal data). After restarting from the new installation and installing all the updates using Software Update, you can restore your personal data from the backup you just made.

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to go to nodes at any depth and access the child elements or text elements of those nodes.
    I thought about using a DOM parser with Java, but dropped that idea since the DOM would be stored in memory and hence it's space-consuming. SAX doesn't work for me either, because every time there is a click on any of the nodes, my SAX parser parses the whole document for that node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to be used for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.
    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.
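
    To make the comparison concrete, fetching one clicked node's children from a database is a single indexed query. A rough sketch with plain JDBC; the connection URL and the nodes table with id, parent_id and label columns are illustrative assumptions, not a prescribed schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ChildNodeQuery {
        public static void main(String[] args) throws Exception {
            long parentId = 42;                                // hypothetical clicked node
            // Connection URL, credentials, and table/column names are placeholders.
            try (Connection con = DriverManager.getConnection("jdbc:yourdb://localhost/tree", "user", "pass");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT id, label FROM nodes WHERE parent_id = ?")) {
                ps.setLong(1, parentId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {                        // typically a handful of rows per click
                        System.out.println(rs.getLong("id") + " : " + rs.getString("label"));
                    }
                }
            }
        }
    }

    With an index on parent_id, each click touches only the rows it needs, which is exactly the cost difference the reply above is describing.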

  • Have a very large text file, and need to read lines in the middle.

    I have very large txt files (around several hundred megabytes), and I want to be able to skip and read specific lines. More specifically, say the file looks like:
    scan 1
    scan 2
    scan 3
    scan 100,000
    I want to be able to move the file reader immediately to scan 50,000, rather than having to read through scans 1-49,999.
    Thanks for any help.

    If the lines are all different lengths (as in your example) then there is nothing you can do except to read and ignore the lines you want to skip over.
    If you are going to be doing this repeatedly, you should consider reformatting those text files into something that supports random access.
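
    One way to get that random access without changing the file format is to scan the file once, record the byte offset where each line starts, and then seek straight to the line you want. A minimal sketch with RandomAccessFile; the file name is illustrative, and the one-time indexing pass is still a full (and fairly slow, unbuffered) read:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.ArrayList;
    import java.util.List;

    public class LineIndex {
        public static void main(String[] args) throws IOException {
            try (RandomAccessFile raf = new RandomAccessFile("scans.txt", "r")) {
                // Pass 1: remember the byte offset at which every line starts.
                List<Long> offsets = new ArrayList<>();
                offsets.add(0L);
                while (raf.readLine() != null) {
                    offsets.add(raf.getFilePointer());
                }

                // Later: jump straight to line 50,000 (0-based index 49,999) without rereading.
                int wanted = 49_999;
                raf.seek(offsets.get(wanted));
                System.out.println(raf.readLine());
            }
        }
    }

    If the file is rewritten regularly, the offset index has to be rebuilt, which is why converting the data to a fixed-length or database-backed format is the more durable answer.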

  • Large folio file size

    We are halfway through a book that comprises 100 single-page articles. However, it is already nearly 500 MB and this isn't sustainable.
    Does the following affect the file size:
    Is the folio file size affected by the number of individual articles? Would it be smaller if we had stacks of, say, 10 articles each with 10 pages rather than 100 single pages?
    Every page has a two-picture (JPG) object state; the first image is an extreme enlargement of the image that is visible for only about a second before the full-frame image appears. Each page has a caption using a Pan overlay that can be dragged into the page using a small tab. Does an Object State increase the file size over and above the images contained within it?
    We have reduced the JPGs to the minimum acceptable quality and there is no video in the Folio.
    Any ideas would be much appreciated.

    800 MB worth of video sounds crazy.
    Of course, a high number of videos can bring you to that.
    I have seen bigger DPS apps. I think the Apple limit lies around 4 GB (remember, that is more than 25% of a whole 16 GB iPad).
    The MP4 video codec does a really good job while keeping the quality high. And the human eye is more forgiving of quality when it comes to moving images compared to still imagery.
    I wrote a collection of tips and ideas on how to reduce your file size:
    http://www.google.de/url?sa=t&source=web&cd=1&ved=0CB4QFjAA&url=http%3A%2F%2Fdigitalpublishing.tumblr.com%2Fpost%2F11650748389%2Freducing-folio-filesize&ei=uVbeTv_yD--M4gTY_OWbBw&usg=AFQjCNHroLkcl-neKlpeidULpQdosl08vw
    —Johannes
    (sent from mobile. fat fingers. beware!)

  • What are the best tools for opening very large XML files and examining the tree and confirming they are valid?

    I am generating some very large XML files (600,000+ lines, 50MB+ characters). I finally have them all being valid XML and valid UTF-8.
    But the files are so large Safari and Chrome will often not open them. FireFox will though.
    Instead of these browsers, I was wondering if there are any other recommended apps for the Mac for opening and viewing the XML, getting an error message if it is not valid for some reason, and examining the XML tree?
    I opened the file in the default app for XML which is Xcode, but that is just like opening it in a plain text editor. You can't expand/collapse the XML tree like you can with a browser, and it doesn't report errors.
    Thanks,
    Doug

    Hi Tom,
    I had not seen that list. I'll look it over.
    I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
    For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
    By the by, some of those editors are quite pricey!
    doug
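
    If a GUI tool chokes on the size, a small command-line check can at least confirm the file is well-formed before you try to open it anywhere else. Here is a sketch using the streaming SAX parser that ships with the JDK, so the tree is never built in memory; the default file name is just a placeholder:

    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.SAXParseException;
    import org.xml.sax.helpers.DefaultHandler;
    import java.io.File;

    public class CheckXml {
        public static void main(String[] args) {
            File xml = new File(args.length > 0 ? args[0] : "big.xml");
            try {
                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                // DefaultHandler ignores content; parsing only verifies well-formedness.
                parser.parse(xml, new DefaultHandler());
                System.out.println("Well-formed: " + xml);
            } catch (SAXParseException e) {
                // Line/column numbers point at the offending character.
                System.out.println("Error at line " + e.getLineNumber()
                        + ", column " + e.getColumnNumber() + ": " + e.getMessage());
            } catch (Exception e) {
                System.out.println("Could not parse: " + e);
            }
        }
    }

    This only checks well-formedness, not validity against a schema, but it catches the structural and encoding problems that keep browsers from opening the file.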

  • File corruption with a very large .ai file

    A little background first:
    I am a graphic designer/cartographer with 15+ years of experience. I started making maps in Illustrator with version 6 and have upgraded to every version since.
    My machines:
    2x Mac Pro 8-core 3.0GHz, 16GB RAM, 10.5.7
    Mac Pro quad-core 2.66GHz, 8GB RAM, 10.5.7
    MacBook Pro 2.0GHz, 2GB RAM, 10.5.7
    Illustrator specs:
    All machines have CS4 installed as well as Illustrator 10.
    The 8-core MPs have the MAPublisher Plug-ins installed
    The 4-core and MacBook Pro (MBP) does not have the MAPublisher Plug-ins
    The problem I am having can be replicated on each of the machines. The MBP can't handle the file due to RAM. Since this occurs on machines that have MAPublisher installed and on a machine that does not, I think we can rule out a plug-in issue.
    File specs:
    The original file: version 10, file size (uncompressed, no PDF support, and no font-embedding) is 36.4 MB. There are no raster effects or embedded/placed images. This is strictly a vector file. Artboard Dimensions: 85.288 in x 81.042 in
    The original file, converted with CS4, and then saved as a CS4 file: file size (uncompressed, no PDF support, and no font-embedding) is 97.9 MB.
    Brief Description of the problem:
    I have tried to convert this file into every version of CS and it has failed every time. With each version, it has resulted in an unusable file for different reasons. CS-CS3, the file was completely unusable because of the opening/saving time. It could take as long as 3 hours to save the file. With CS4, this has been rectified and I once again tried to convert it. Upon re-opening of the 'converted' CS4 native file, the file is 'corrupted'.
    The file corruption is not your regular "This file can't be opened because of: X" corruption. The file opens after a save/close just fine. It is just that parts of the file gets destroyed. To save space in this post, I have created a webpage that illustrates the problem that I am having:
    http://newatlas.com/ai_problem/
    I have tried everything possible to make the file smaller and it is as slimmed down as I can make it. (Using symbols, styles, etc.) I have also tried to eliminate this as a font problem by replacing every font with an Adobe supplied font, cleared caches, etc. This does not work, so I think we can rule out a font issue. I have also reduced this file to contain no pattern fills, no gradients, and just used simple fills and strokes. All to no avail. I have also tried piecing the file back together into a new document by copying/pasting a layer at a time. Saving, closing and re-opening after each paste cycle. I can get about 95% of it put back together and then it will manifest the problem. The only thing I haven't done is to convert all of the type to outlines. This would not solve my problem since this is a map that I continually work on year after year. I also can't remove objects or cut the overall area of the map because this file is used to produce an atlas book, a wall map and custom boundary wall maps. You can view the entire file at:
    http://okc.cocpub.com
    If I do not convert the legacy text, the file saves/closes/re-opens just fine. It just takes a very long time. So this leads me to think that the cause of the problem is the number of editable type objects that this file has. Ever since Adobe changed the Type Engine, I haven't been able to use this file in current versions of Illustrator.
    If I could get this file to open, uncorrupted, I could finally get rid of Illustrator 10. Illustrator 10 does not have any problem with this file (and is still faster than CS4 in everything except selecting a lot of objects.)
    I am posting this on the forums for any other opinions/ideas from the 'Illustrator Gurus' as a first step. I want to get in contact with someone at Adobe to see if we can address this problem and possibly get it fixed with CS5. I know that this is a user-to-user forum, but I'm not sure who, where and how to contact Adobe for this issue. Maybe someone on these forums can help with that as well.
    Thank you for your patience for getting this far in my long post and I would really appreciate any response.
    Dave

    Thanks Wade for responding,
    Did you try trashing your Adobe Illustrator CS4 Settings folder in your User's Preferences?
    Yes, I've tried deleting prefs. Basically I've tried to rule out any problems with Illustrator as a whole. This issue has also occured on a clean install of OS X and Illustrator on a new User Account with the opening of this file being the first task that Illustrator has. There is no problem with Illustrator per se, but I think it is more of a limitation in Illustrator based on the number of type objects.
    You could also try saving it out of 10 as a PDF or as PostScript and distilling it, then open that in AI or place it in a blank AI document.
    Did you try to place instead of opening it?
    I haven't tried any of these since the resulting file would be utterly unusable. Basically this would create a 'flat' file with 'broken' strings of text (type on a path especially) and type being uneditable. (Now that I think about it, CS4 does a much better job of opening pdfs without breaking type.) I still think this approach is not really a prudent course of action since, as of now, I can continue to maintain this map in Illy 10.
    In my experimentation, the results are as follows:
    1. Opening the file without updating the legacy type, saving the file as a new document, closing and then re-opening results in a file you would expect. Every object is where it is supposed to be. Downfall of this method: I absolutely need the type to be editable, especially the 'Street Type' since this type is actually used to create map indexes.
    2. Opening the file with updating the legacy type, saving the file as a new document, closing and then re-opening results in a file that exhibits the exact behavior that has been posted in the thread-starter. This method results in the 'Bruce Gray' type being the duplicated item.
    3. Opening the file without updating the legacy type, then splitting the file into layer groups and saving as separate files. Then opening the resulting CS4 files, updating the legacy type, copy & pasting (with layer structure) into a new document results in a usable file up to a point. I can get about 95% of it put back together and then the problem manifests. I have thought that it might be a "bad" object on one of the layers but I have ruled that out by: a.) All of the resulting sub-files (files that are portions of the larger) exhibit no problems at all. Usually our PS printers find issues that Illy does not and there is no problem in RIPing the sub-files. b.) If I change the paste order, meaning copying & pasting from the top-most layers to the bottom-most layers, vice-versa, and a completely random paste order, different objects (other than the 'Bruce Gray' type) will be duplicated. I've had one of my park screens, a zip code type object and a school district boundary be the duplicated object.
    All of these experiments have led me to believe that the Illustrator Type Engine is the main culprit. I just don't think it can handle that many individual point type objects. I know CS4 can handle the number of objects, based on the fact that a legacy type (non-updated) file works.
    I am almost entirely sure that Illustrator is working exactly as it is supposed to and that the vast majority of Illy users will never run into this issue. This file is by far the largest file that I work on. I would just like to be able to use an Intel native version of CS to continue maintaining this map.
    On a side note: About three years ago, I tried working with this file in Freehand MX. Freehand initially would open the Illy file without a problem. I could work on it but when I would save it as a Freehand file, close it and re-open it, I would get your standard File Corruption. It would partially open, give me a corruption dialog, and open the file as a blank document. I always knew there was a reason to use Illustrator over Freehand for making maps.
