Cube Load Times

Hi there,

We have one BSO cube implemented on Essbase 7.1.2, and we load data from a SQL Server source every night. My question is about the time taken to load/calculate the cube.

It started off at around 20 minutes, but in the space of a few months this time has risen a LOT. It's now taking up to 2.5 hours to complete the load/calculation process.

If we restructure the cube, the next day's load takes around 1 hour.

My question is basically: can anyone recommend any tips/tricks to optimise the load process?

One thing we have noticed is that the data compression on the cube is set to RLE. Is this the right one, or should we be using one of the others?

Any help appreciated!

Brian

With the assumptions that (a) you perform a full calc of the database, and (b) the full calc itself isn't an exceptionally long process, you can typically get a much better overall performance gain by doing the following (a minimal MaxL sketch of this sequence follows this reply):

1) Export the input-level data (level 0 if you have aggregate missing turned on).
2) Clear the database.
3) Load data from your Essbase export.
4) Load data from your SQL source.
5) Run your full calc/calc all.

The reason you could get an overall performance gain is two-fold:
1) RLE compression tends to cause a lot of fragmentation within the page files as blocks are written and re-written.
2) The calc engine tends to run faster when it doesn't have to read the upper-level blocks in before calculating them.

Both of the above are simplified explanations, and results vary greatly based on the outline/hierarchies and the calc required. One thing to note, however, is that if you run any type of input scrubbing/modification script (e.g. allocation, elimination, etc.), the optimum sequence can be quite different from the above (I won't go into the hows and whys here).

Now on to the other ways to optimize your load. The Database Administrator's Guide (DBAG) has a good section on laying out a file for optimum loading. Basically, the closer the order of the members in your load file matches the order they appear in the outline, the better. For instance, if the columns in the file are dense and the rows are sorted in the same order as the sparse dimensions, you load MUCH faster than if you do the opposite. This can have a big impact on the fragmentation that occurs during a load, enough so that if you can be pretty sure the sort of your SQL export is optimal for loading, the above approach (Export/Clear/Load/Calc) won't get you any real benefit, at least from the fragmentation aspect.

Of course, the outline itself can be less than optimal, so any fixes should start with the outline, then move to the SQL file layout, then the load process. The caution here is to make sure that changes you make to align the outline with your data load don't adversely affect your calc and retrieval performance too much.

Most of this is covered in the DBAG in greater detail, just not laid out in an order that can be easily followed for your needs. So if you need further details on any of the above, start there.
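A minimal MaxL sketch of the Export/Clear/Load/Calc sequence above. The Sample.Basic database name, the credentials and paths, and the 'nightly' rules file are all placeholders, and the exact grammar should be checked against the MaxL reference for your release (this thread is on 7.1.2):

login 'admin' 'password' on 'essbase_server';

/* optional: if RLE fragmentation is the problem, switch compression (check the Tech Ref grammar for your release) */
alter database Sample.Basic set compression bitmap;

/* 1) export the input-level data */
export database Sample.Basic level0 data to data_file 'lev0.txt';

/* 2) clear the database */
alter database Sample.Basic reset data;

/* 3) reload the export (export format, so no rules file is needed) */
import database Sample.Basic data from data_file 'lev0.txt' on error abort;

/* 4) load the nightly SQL extract through its rules file ('nightly' is a placeholder) */
import database Sample.Basic data
    connect as 'dbuser' identified by 'dbpass'
    using server rules_file 'nightly'
    on error write to 'nightly.err';

/* 5) run the full calc */
execute calculation default on Sample.Basic;

logout;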

Similar Messages

  • ODS to CUBE loading - taking too much time

    Hi Experts,
    I am loading data from R/3 (4.7) to BW (3.5).
    I am loading with the option --> PSA and then Data Target (ODS).
    I have a selection criterion in the InfoPackage while loading from the standard DataSource to the ODS.
    It takes me 20 minutes to load 300K records.
    But from ODS to InfoCube (update method: Data Target Only), it is taking 8 hours.
    The data packet size in the InfoPackage is 20,000 (the same for ODS and InfoCube).
    I also tried changing the data packet size, and tried a full load and a load with initialization.
    I tried scheduling it as a background job too.
    I do not have any selection criteria in the InfoPackage from ODS to Cube.
    Please let me know how I can decrease this loading time from ODS to InfoCube.

    Hi,
    To improve the data load performance:
    1. If they are full loads, see if you can make them delta loads.
    2. Check if there are complex routines/transformations being performed in any layer. If so, see whether you can optimize that code with the help of an ABAPer.
    3. Ensure that you are following the standard procedures in the chain, such as deleting indices/secondary indices before loading.
    4. Check whether the system processes are free when this load is running.
    5. Try making the load as parallel as possible if the load is happening serially. Remove PSA if not needed.
    6. If the load is not getting processed because of a huge volume of data, or too many records per data packet, try the following:
    1) Reduce the IDoc size to 8000 and the number of data packets per IDoc to 10. This can be done in the InfoPackage settings.
    2) Run the load only to PSA.
    3) Once the load is successful, push the data to the targets.
    In this way you can overcome this issue.
    Also check the data packet sizing, the number range buffering, the PSA partition size, and the upload sequence: always load master data first, perform the change run, and then run the transaction data loads.
    Check this doc on BW data load performance optimization:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    Thanks,
    JituK

  • BI 7 real-time cube load behavior changed after transport

    Dears,
    After I transported a real-time cube from the DEV to the QA system, I found that the load behavior of all the real-time cubes changed from 'can be loaded, planning not allowed' to 'can be planned, load not allowed'. What happened? Or am I packaging this type of cube with the wrong steps? Any suggestions are appreciated.
    Best regards,
    Gerald

    Hi,
    I think note 984803 can be useful for you.
    The link is https://websmp207.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=984803&_NLANG=E
    Rgs
    Antonino

  • Real-Time i-Cube load behavior changed after transport

    Dears
    After the transports completed, the real-time InfoCube load behavior changed. All the 'can be loaded, planning not allowed' settings were changed to 'can be planned, load not allowed'. What happened? How should I package this type of real-time InfoCube? Any suggestions are appreciated.
    Best regards,
    Gerald

    Hi Gerald,
    You can switch it back to enable loading of data using the "Switch Transactional InfoCube" option you get when you right-click the InfoCube.

  • Simulate load times

    Hi all,
    Does anyone here know if we can simulate load times in BW?
    For example, I have a DSO that goes to a Cube for reporting.
    I would like to simulate how long 100,000 records will take to get from ECC to the Cube.
    I might later want to test how long 250,000 records will take.
    RSA3 is not useful, as it can only test how long the extractor will take.
    Is such a thing possible?

    Hi Ganesan,
    1) For DSO-to-Cube loading: you have to create a DTP.
       Execute the DTP in debugging mode (simulation).
       Then in the monitor you can find the time taken.
    2) For loading from ECC to BI: you have to create an InfoPackage.
       But an InfoPackage does not give any option for simulation.
    Cheers.

  • Comparing load times w/ and w/o BIA

    We are looking at the pros/cons of BIA for implementation. Does anyone have data comparing load times, loads with compression, and BIA index time?

    I haven't seen numbers comparing load times. Loads to your cubes and compression continue whether you have BIA or not. Rollup time would be eliminated, as you would no longer need aggregates. No aggregates should also reduce Change Run time, perhaps a lot or only a little, depending on whether you have large aggregates with navigational attributes in them. All of that is offset to some degree by the time to update the BIA.
    Make sure you understand all the licensing costs, not just SAP's but also the hardware vendor's per-blade licensing costs. I talked to someone just the other day who was not expecting per-blade licensing; the list price of the license per blade was $75,000.

  • How to tune data loading time in BSO using 14 rules files ?

    Hello there,
    I'm using Hyperion-Essbase-Admin-Services v11.1.1.2 and the BSO Option.
    In a nightly process using MaxL, I load new data into one Essbase cube.
    In this nightly update process, 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times by SQL connection to the same Oracle database and the same table.
    I use this procedure because I cannot load two or more data fields using one rules file.
    It takes a long time to load 14 accounts one after the other.
    Now my question: how can I minimise this data loading time?
    This is what I found on Oracle Homepage:
    What's New
    Oracle Essbase V.11.1.1 Release Highlights
    Parallel SQL Data Loads- Supports up to 8 rules files via temporary load buffers.
    In an older thread, John said:
    As it is version 11, why not use parallel SQL loading? You can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for the ASO option only.
    Can I use it in my MaxL for BSO too? Is there a sample?
    What else can I do to tune the nightly update time?
    Thanks in advance for every tip,
    Zeljko

    Thanks a lot for your support. I’m just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column 1 --- column 2 --- column 3 --- column 4 --- ... --- column n
    dim 1 --- dim 2 --- dim 3 --- data 1 --- data 2 --- data 3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    The following shows 3 of the 14 (load) rules files used to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table design above what GlennS mentioned as the "Data" column concept, which only allows a single numeric data value?
    In this case I can't tag two or more columns as "Data fields". I can only tag one column as "Data field"; the other data fields I have to tag as "ignore fields during data load". Otherwise, when I validate the rules file, an error occurs: "only one field can contain the Data Field attribute".
    Or can I skip this error message and just try to tag all 14 fields as "Data fields" and load the data?
    Please advise.
    Am I right that the other way is to reconstruct the table/view (and the rules files) as follows, to load all of the data in one pass (a MaxL sketch of this one-pass load follows this reply):
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
    And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE).
    I just want to be sure that I understand your suggestions.
    Many thanks for the awesome help,
    Zeljko
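    For reference, once the view is restacked with an Account column as above, the 14 sequential imports collapse into a single SQL import through one rules file. A minimal MaxL sketch, assuming a hypothetical Plan.Main database, placeholder credentials, and a hypothetical 'allacct' rules file built against the restacked view:

    /* one pass instead of 14; 'allacct' (hypothetical) maps Account/Region/ID/Product and the single data column */
    import database Plan.Main data
        connect as 'dbuser' identified by 'dbpass'
        using server rules_file 'allacct'
        on error write to 'allacct.err';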

  • Can we show some user defined message while cube loading in OBIEE

    Hi All,
    We are currently building OBIEE dashboards from an Essbase ASO cube as the data source.
    During the Essbase cube load, Essbase will "Disable all Administrative Commands". In the meantime, when users try to run the dashboards, OBIEE throws a severe error.
    Can we display some user-defined message, something like "Cube is loading. Please wait....", or is there any other alternative?
    The time taken to load the cube is 1 hour. Please suggest.
    Thanks,
    SatyaB

    Yes. You could try replacing the whole OBIEE dashboard with a custom message. Whenever the load happens, trigger or set a flag value to 'Y'. Then, using guided navigation, when the flag value matches 'Y', use a bit of JavaScript to replace the whole dashboard:
    <html>
    <head>
    <script type="text/javascript">
    var strStatus = "@1"; /* @1 is the flag value passed in from the dashboard */
    if (strStatus == 'Y')
        location.replace('saw.dll?PortalPages&_scid=in6lPJnWWNk&PortalPath=/shared/Framework/_portal/Cube%20Status%20&Page=Cube%20Status');
    </script>
    </head>
    </html>
    By the way, you should mark the previous post on the implicit fact. Follow https://forums.oracle.com/forums/ann.jspa?annID=939

  • Last load time in Query

    Dear All,
    Can I somehow get the last load time in a query? I mean, if a query is running on an InfoCube, I want to see when data was last loaded into that InfoCube, and I want to see this info in the query.
    Regards,
    Sohil Shah.

    Hi,
    Do you want to know only the status of the data for that query, or the last load time of all the cubes in general?
    If you want to know the status of the data for that query, check the link below. You can get it from the query constants.
    http://help.sap.com/saphelp_nw70/helpdata/EN/06/ad1578a5a9487da87495c1960f5a2d/content.htm
    If you need to view it for all the cubes, then you may need to install the technical content cubes, which contain data about the status of the cubes.
    Hope this gives you an idea.
    Regards
    akhan

  • Movie Load Time Too Slow

    Hello! 
    I've created three movies for a new website, all photo slide shows with 10-14 photos and some text. The photos have all been optimized in Photoshop and saved for web; most are under 100 KB. However, the movies are taking a long time to load on the web pages. Is there anything I can do through Edge Animate to reduce the load time? Even with the preloader, the load time is way too long. I've inserted the movies into the HTML pages using an iframe. Any suggestions are much appreciated. Thanks!

  • PowerView (SharePoint 2013) Load time too slow

    I am using PowerView in SharePoint 2013, and when I access the SSAS 2012 Tabular data source through a "Report Data Source", the PowerView canvas with the field list takes a long while to load, and the users get restless watching the blue spinning wheel. I tested this on different tabular models and am seeing different results based on the complexity of the model.
    Complex Model - 13 secs.
    Adventure Works Model - 9 secs.
    Very Simple Model (1 fact, 1 dim) - 8 secs.
    I think that if the model is complex, the engine takes longer to render the field list. How can I get around this limitation and bring the blank canvas load time to under 5 seconds? I am curious to know whether anyone has ever seen lower load times for the blank PowerView canvas and field lists in their environment.
    Alternatively, if loading the field list takes time, is it possible to disable it? I have already created dashboards for the end users in PowerView and do not want to load the field list if it is slowing down the entire experience.
    Appreciate any guidance.
    Thanks, Ashish Singh

    Hi Simon,
    I want to reduce the Power View blank canvas / PowerView report load time from an SSAS Tabular source in the SharePoint 2013 portal.
    I have observed that a PowerView report with 1 view loads faster than a PowerView report with multiple (4) views, so I think that your statement "Power View only retrieves the data it needs at any given time for a data visualization" might be incorrect.
    I have read the link you provided and have all the patches applied; besides, I am not using a Power Pivot source.
    My tabular cube is complex and has about 200 measures, and the blank Power View canvas takes about 13 seconds to load from the SharePoint 2013 URL in a web browser. I would appreciate any insights here.
    Thanks, Ashish Singh

  • Load time depends on index files

    Hi All,
    (on BSO)
    I read somewhere that if more index files exist, the load time will increase because of the search for the right combination to load each data value. Is that correct?
    I restructured (level 0) the dense members and ran the calculations, so my index files became 3; after I ran a general load on that database, it took 1 hour.
    Then I did a sparse restructure (all data), noticed the index files became 2, ran a general load on that database, and it took 30 minutes.
    Let me know whether I am correct or not.
    Sorry, I'm not good at explaining things. :)
    Regards,
    Prabhas
    Edited by: Prabhas on Jun 13, 2012 9:21 PM

    You say you first load into a test cube, then export and use that to load into prod. Is the dimension order and dense/sparse configuration the same?
    (Reply: The test cube is a copy of the prod cube, so everything must be the same.)
    If your periods (or time) dimension is dense, then loading a single month will still cause fragmentation, as Essbase has to read the blocks and rewrite them. I'm guessing things speed up after a restructure because you are getting rid of fragmentation. What you think is a sparse restructure is actually a dense restructure. (A MaxL sketch for checking and removing fragmentation follows.)
    (Reply: Please note that we have a "Years" dimension as a sparse, not dense, dimension. I defragmented during maintenance and loaded, but it took much longer than usual. The very next day I added a new sparse member, did an all-data restructure, cleared the particular month's data, and loaded, and that completed faster than the previous day's load.)
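    For reference, the fragmentation suspected above can be checked and removed from MaxL without touching the outline. A minimal sketch; the Sample.Basic name is a placeholder:

    /* an average clustering ratio well below 1 in the data_block statistics indicates fragmentation */
    query database Sample.Basic get dbstats data_block;

    /* rewrite the page and index files in place to remove the fragmentation */
    alter database Sample.Basic force restructure;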

  • Incremental Cube Load Error: job slave process terminated

    Hi,
    For performance reasons, we switched to incremental cube loading, i.e. only those partitions whose data is made available are auto-solved.
    Sometimes the background-submitted job terminates, and the reason given in dba_scheduler_job_run_details is:
    REASON="Job slave process was terminated"
    There is no definite occurrence pattern for this error.
    The job submitted in the background is killed.
    The last entry displayed in xml_load_log is the start of auto-solving a partition.
    After this error occurs, we have to fully aggregate the cube, which of course auto-solves all partitions.
    This error has been a major annoyance: we made a lot of package changes as part of a release to production to include incremental cube loading, and once done, we see that the incremental cube load just terminates while auto-solving a partition.
    Can anyone assist, please? It is urgent.
    thank you,

    Hi,
    There is a Metalink note about this issue; check note 443348.1.
    Thanks
    Brijesh

  • Crash reports and slow load times after RAM upgrade on a mid-2010 21.5-inch iMac; Final Cut says Quartz Extreme is not compatible and there is no VRAM

    I just updated my RAM (replaced the two 2 GB modules with two Corsair 8 GB modules) and I keep getting a crash report. I have a 21.5-inch mid-2010 iMac. I am also experiencing slow load times with Photoshop, and when I try to open Final Cut it says that Quartz Extreme is not compatible and that I have no VRAM, even though I do.
    Here is the crash report:
    Interval Since Last Panic Report:  5426204 sec
    Panics Since Last Report:          2
    Anonymous UUID:                    2DD57DDB-BB42-5614-395A-CA6225BDAFD9
    Wed Mar 20 11:36:53 2013
    panic(cpu 0 caller 0xffffff801aa43d8e): "a freed zone element has been modified in zone: maps"@/SourceCache/xnu/xnu-2050.18.24/osfmk/kern/zalloc.c:219
    Backtrace (CPU 0), Frame : Return Address
    0xffffff81eb0eb950 : 0xffffff801aa1d626
    0xffffff81eb0eb9c0 : 0xffffff801aa43d8e
    0xffffff81eb0eba00 : 0xffffff801aa435d2
    0xffffff81eb0ebae0 : 0xffffff801aa663f7
    0xffffff81eb0ebb20 : 0xffffff801aa67398
    0xffffff81eb0ebc70 : 0xffffff801aa6887c
    0xffffff81eb0ebd20 : 0xffffff801ad5b8fe
    0xffffff81eb0ebf50 : 0xffffff801ade182a
    0xffffff81eb0ebfb0 : 0xffffff801aaced33
    BSD process name corresponding to current thread: launchd
    Mac OS version:
    Not yet set
    Kernel version:
    Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64
    Kernel UUID: 69A5853F-375A-3EF4-9247-478FD0247333
    Kernel slide:     0x000000001a800000
    Kernel text base: 0xffffff801aa00000
    System model name: iMac11,2 (Mac-F2238AC8)
    System uptime in nanoseconds: 1070542822
    last loaded kext at 707348380: com.apple.driver.AppleIRController    320.15 (addr 0xffffff7f9c53e000, size 28672)
    loaded kexts:
    at.obdev.nke.LittleSnitch    3908
    com.apple.driver.AppleIRController    320.15
    com.apple.driver.AppleUSBCardReader    3.1.0
    com.apple.driver.AppleFileSystemDriver    3.0.1
    com.apple.AppleFSCompression.AppleFSCompressionTypeDataless    1.0.0d1
    com.apple.AppleFSCompression.AppleFSCompressionTypeZlib    1.0.0d1
    com.apple.BootCache    34
    com.apple.iokit.SCSITaskUserClient    3.5.1
    com.apple.driver.XsanFilter    404
    com.apple.iokit.IOAHCIBlockStorage    2.2.2
    com.apple.driver.AppleUSBHub    5.2.5
    com.apple.driver.AppleFWOHCI    4.9.6
    com.apple.driver.AirPort.Atheros40    600.70.23
    com.apple.driver.AppleUSBEHCI    5.4.0
    com.apple.driver.AppleAHCIPort    2.4.1
    com.apple.iokit.AppleBCM5701Ethernet    3.2.5b3
    com.apple.driver.AppleUSBUHCI    5.2.5
    com.apple.driver.AppleEFINVRAM    1.6.1
    com.apple.driver.AppleACPIButtons    1.6
    com.apple.driver.AppleRTC    1.5
    com.apple.driver.AppleHPET    1.7
    com.apple.driver.AppleSMBIOS    1.9
    com.apple.driver.AppleACPIEC    1.6
    com.apple.driver.AppleAPIC    1.6
    com.apple.driver.AppleIntelCPUPowerManagementClient    196.0.0
    com.apple.nke.applicationfirewall    4.0.39
    com.apple.security.quarantine    2
    com.apple.driver.AppleIntelCPUPowerManagement    196.0.0
    com.apple.iokit.IOUSBHIDDriver    5.2.5
    com.apple.iokit.IOSCSIBlockCommandsDevice    3.5.1
    com.apple.iokit.IOUSBMassStorageClass    3.5.0
    com.apple.driver.AppleUSBMergeNub    5.2.5
    com.apple.driver.AppleUSBComposite    5.2.5
    com.apple.iokit.IOSCSIMultimediaCommandsDevice    3.5.1
    com.apple.iokit.IOBDStorageFamily    1.7
    com.apple.iokit.IODVDStorageFamily    1.7.1
    com.apple.iokit.IOCDStorageFamily    1.7.1
    com.apple.iokit.IOAHCISerialATAPI    2.5.0
    com.apple.iokit.IOSCSIArchitectureModelFamily    3.5.1
    com.apple.iokit.IOUSBUserClient    5.2.5
    com.apple.iokit.IOFireWireFamily    4.5.5
    com.apple.iokit.IO80211Family    500.15
    com.apple.iokit.IOAHCIFamily    2.2.1
    com.apple.iokit.IOEthernetAVBController    1.0.2b1
    com.apple.iokit.IONetworkingFamily    3.0
    com.apple.iokit.IOUSBFamily    5.4.0
    com.apple.driver.AppleEFIRuntime    1.6.1
    com.apple.iokit.IOHIDFamily    1.8.0
    com.apple.iokit.IOSMBusFamily    1.1
    com.apple.security.sandbox    220
    com.apple.kext.AppleMatch    1.0.0d1
    com.apple.security.TMSafetyNet    7
    com.apple.driver.DiskImages    344
    com.apple.iokit.IOStorageFamily    1.8
    com.apple.driver.AppleKeyStore    28.21
    com.apple.driver.AppleACPIPlatform    1.6
    com.apple.iokit.IOPCIFamily    2.7.2
    com.apple.iokit.IOACPIFamily    1.4
    com.apple.kec.corecrypto    1.0
    Model: iMac11,2, BootROM IM112.0057.B00, 2 processors, Intel Core i3, 3.2 GHz, 16 GB, SMC 1.64f5
    Graphics: ATI Radeon HD 5670, ATI Radeon HD 5670, PCIe, 512 MB
    Memory Module: BANK 0/DIMM1, 8 GB, DDR3, 1333 MHz, 0x029E, 0x434D5341384758334D314131333333433920
    Memory Module: BANK 1/DIMM1, 8 GB, DDR3, 1333 MHz, 0x029E, 0x434D5341384758334D314131333333433920
    AirPort: spairport_wireless_card_type_airport_extreme (0x168C, 0x8F), Atheros 9280: 4.0.70.23-P2P
    Bluetooth: Version 4.0.9f33 10885, 2 service, 18 devices, 0 incoming serial ports
    Network Service: AirPort, AirPort, en1
    Serial ATA Device: ST31000528AS, 1 TB
    Serial ATA Device: HL-DT-STDVDRW  GA32N
    USB Device: hub_device, 0x0424  (SMSC), 0x2514, 0xfd100000 / 2
    USB Device: IR Receiver, apple_vendor_id, 0x8242, 0xfd120000 / 4
    USB Device: Built-in iSight, apple_vendor_id, 0x8502, 0xfd110000 / 3
    USB Device: hub_device, 0x0424  (SMSC), 0x2514, 0xfa100000 / 2
    USB Device: BRCM2046 Hub, 0x0a5c  (Broadcom Corp.), 0x4500, 0xfa110000 / 4
    USB Device: Bluetooth USB Host Controller, apple_vendor_id, 0x8215, 0xfa111000 / 6
    USB Device: Internal Memory Card Reader, apple_vendor_id, 0x8403, 0xfa120000 / 3

    There have been a few reports on here where Corsair RAM seems to have caused users a lot of grief with crashes.
    The recommendation on here, mostly, is to buy RAM only from macsales.com or crucial.com, as they guarantee their modules will work and offer a no-quibble lifetime guarantee.
    I'd put the original RAM back in, return the Corsair chips for a refund and re-order from one of those two companies.
    http://eshop.macsales.com/shop/apple/memory/iMac
    http://www.crucial.com/

  • Can a BIG form be served up one page at a time to avoid long load time?

    Tricks I have read for optimizing the load time of large forms are not helping. Linearization causes the first page to render quickly, but you can't interact with the fields until the whole form finishes loading -- no help there. Is there a way to break the form into pages (without creating entirely separate forms) so the user can fill out a page, hit a Next Page button, fill out that page, etc.? Understood that this is an old school idea, but until Reader can download a 1+ MB form in less time than it takes an average user to get ticked off, old school might do the trick.
    Alternatively, is there a way to construct a form so you can start interacting with it without having to wait for it all to load? This question comes from the (uninformed) assumption that maybe there are forward references that can't be satisfied until all the bits have come over the wire. If that's right, can a multipage form be architected so as to avoid this problem?

    No, that technology does not exist yet. There are form-level events that need the entire document present before they can fire. Also, you would have to keep track of where you are, which would mean some sort of session information for each user.
