Large data update problem

I have a temporary table (A) with 5 million rows
which needs to be appended to a 90-million-row table (B).
60% of the 5 million rows already exist in the big table,
so I need to update/merge the table A data into table B.
The Oracle version is 8.1.7.
Please advise which method is the fastest.
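
Note: 8.1.7 predates the MERGE statement (it arrived in 9i), so the usual pattern is an UPDATE of the matching rows followed by an INSERT of the rest. A minimal sketch, assuming a hypothetical key column ID and payload columns COL1/COL2 on both tables:

    -- Update the ~60% of A's rows that already exist in B
    UPDATE b
       SET (col1, col2) = (SELECT a.col1, a.col2 FROM a WHERE a.id = b.id)
     WHERE EXISTS (SELECT 1 FROM a WHERE a.id = b.id);

    -- Insert the remainder; the APPEND hint requests a direct-path load
    INSERT /*+ APPEND */ INTO b (id, col1, col2)
    SELECT a.id, a.col1, a.col2
      FROM a
     WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.id = a.id);

    COMMIT;

Indexes on the join key of both tables matter more than anything else here; without them each statement degenerates into repeated scans of the 90-million-row table.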

Hi Raghu,
This is the Portal Content Management forum. Please post your database-related question in the following forum:
General Database Discussions
That is the appropriate place to post database-related questions.
Thanks,
Christian

Similar Messages

  • Large Data file problem in Oracle 8.1.7 and RedHat 6.2EE

    I've installed RedHat 6.2EE (Enterprise
    Edition Optimized for Oracle8i) and Oracle
    EE 8.1.7. I am able to create very large files
    (> 2 GB) using standard commands such as
    'cat', 'dd', etc. However, when I create a
    large data file in Oracle, I get the
    following error messages:
    create tablespace ts datafile '/data/u1/db1/data1.dbf' size 10000M autoextend off
    extent management local autoallocate;
    create tablespace ts datafile '/data/u1/db1/data1.dbf' size 10000M autoextend off
    ERROR at line 1:
    ORA-19502: write error on file "/data/u1/db1/data1.dbf", blockno 231425
    (blocksize=8192)
    ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
    Additional information: 231425
    Additional information: 64
    Additional information: 231425
    Does anyone know what's wrong?
    Thanks
    david
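
    For what it's worth, the failing block works out to 231425 x 8192 bytes, roughly 1.8 GB, which points at a per-file size limit in the database's I/O path rather than in the filesystem itself (the shell tools clearly aren't subject to it). A common workaround (a sketch only, reusing the path from the post) is to build the tablespace from several datafiles that each stay safely under 2 GB:

        CREATE TABLESPACE ts
          DATAFILE '/data/u1/db1/data1.dbf' SIZE 1800M AUTOEXTEND OFF,
                   '/data/u1/db1/data2.dbf' SIZE 1800M AUTOEXTEND OFF,
                   '/data/u1/db1/data3.dbf' SIZE 1800M AUTOEXTEND OFF,
                   '/data/u1/db1/data4.dbf' SIZE 1800M AUTOEXTEND OFF,
                   '/data/u1/db1/data5.dbf' SIZE 1800M AUTOEXTEND OFF,
                   '/data/u1/db1/data6.dbf' SIZE 1800M AUTOEXTEND OFF
          EXTENT MANAGEMENT LOCAL AUTOALLOCATE;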

    I've finally solved it!
    I downloaded the following JRE from Blackdown:
    jre118_v3-glibc-2.1.3-DYNMOTIF.tar.bz2
    It's the only one that seems to work (and god, have I tried them all!)
    I've no idea what the DYNMOTIF means (apart from being something to do with Motif - but you don't have to be a Linux guru to work that out ;)) - but, hell, it works.
    And after sitting in front of this machine for 3 days trying to deal with Oracle's frankly PATHETIC install, which is so full of holes and bugs, that's all I care about.
    The JRE bundled with Oracle 8.1.7 doesn't work with RedHat Linux 6.2EE.
    Doesn't Oracle test their software?
    Anyway, I'm happy now, and I'm leaving this here in case anybody else has the same problem.
    Thanks for everyone's help.

  • Data updation problem?

    Hi All,
    I have two buttons, Submit and Save. In Submit I am calling an FM through a service call to get data, and the data comes back correctly.
    In Save I am trying to save any changed data, but the changed data is not being saved. In the debugger the internal table shows the changed data, but the database is not updated.
    This is the code I am using in the SAVE button:
    DATA lo_nd_it_vttk TYPE REF TO if_wd_context_node.
    DATA lo_el_it_vttk TYPE REF TO if_wd_context_element.
    DATA ls_it_vttk TYPE wd_this->element_it_vttk.
    * DATA lt_it_vttk TYPE wd_this->elements_it_vttk.
    * DATA lt_it_vttk TYPE STANDARD TABLE OF if_main=>element_it_vttk.
    DATA lt_it_vttk TYPE STANDARD TABLE OF vttk.
    DATA wa_vttk TYPE vttk.
    * navigate from <CONTEXT> to <IT_VTTK> via lead selection
    lo_nd_it_vttk = wd_context->path_get_node( path = `ZSHIPMENT_CHANGE.CHANGING.IT_VTTK` ).
    * get element via lead selection
    lo_el_it_vttk = lo_nd_it_vttk->get_element( ).
    * get all declared attributes
    lo_el_it_vttk->get_static_attributes(
      IMPORTING
        static_attributes = ls_it_vttk ).
    * ****** Changing DATA ************
    wa_vttk-exti1 = ls_it_vttk-exti1.
    wa_vttk-dtdis = ls_it_vttk-dtdis.
    wa_vttk-uzdis = ls_it_vttk-uzdis.
    wa_vttk-dpreg = ls_it_vttk-dpreg.
    wa_vttk-upreg = ls_it_vttk-upreg.
    APPEND wa_vttk TO lt_it_vttk.
    * MODIFIED DATA IS COMING INTO LT_IT_VTTK, BUT THE DB IS NOT UPDATED
    * lock the table
    CALL FUNCTION 'ENQUEUE_E_TABLE'
      EXPORTING
        mode_rstable   = 'E'
        tabname        = 'VTTK'
      EXCEPTIONS
        foreign_lock   = 1
        system_failure = 2
        OTHERS         = 3.
    MODIFY vttk FROM TABLE lt_it_vttk.
    IF sy-subrc IS INITIAL.
      COMMIT WORK.
    ELSE.
      ROLLBACK WORK.
    ENDIF.
    CALL FUNCTION 'DEQUEUE_E_TABLE'
      EXPORTING
        mode_rstable = 'E'
        tabname      = 'VTTK'.
    Am I missing anything here? Please help.
    Thanks,
    Madan.

    I guess the problem is here:
    * ****** Changing DATA ************
    WA_VTTK-EXTI1 = LS_IT_VTTK-EXTI1.
    WA_VTTK-DTDIS = LS_IT_VTTK-DTDIS.
    WA_VTTK-UZDIS = LS_IT_VTTK-UZDIS.
    WA_VTTK-DPREG = LS_IT_VTTK-DPREG.
    WA_VTTK-UPREG = LS_IT_VTTK-UPREG.
    You are not passing any key field values here.
    I guess you need to pass the value of the
    WA_VTTK-TKNUM field as well.
    So add:
    WA_VTTK-TKNUM = LS_IT_VTTK-TKNUM.
    Thanks
    Sarbjeet Singh
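
    Putting that fix in context, the copy block from the original post would presumably become (a sketch built only from the code in this thread):

        * ****** Changing DATA ************
        wa_vttk-tknum = ls_it_vttk-tknum.  "key field - without TKNUM, MODIFY cannot match a DB row
        wa_vttk-exti1 = ls_it_vttk-exti1.
        wa_vttk-dtdis = ls_it_vttk-dtdis.
        wa_vttk-uzdis = ls_it_vttk-uzdis.
        wa_vttk-dpreg = ls_it_vttk-dpreg.
        wa_vttk-upreg = ls_it_vttk-upreg.
        APPEND wa_vttk TO lt_it_vttk.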

  • IDOC segment Data updation problem

    I have done the IDoc extension for ORDERS05. I have written code to insert data for my Z segments in IDOC_INPUT_ORDERS. But when I run the program it does not update data in the Z segments, nor in the other segments.
    What may be the problem?
    READ TABLE dedidd INTO wa_edidd WITH KEY segnam   = c-e1edka1
                                             sdata(3) = 'AG'.
    IF sy-subrc = 0.
      CLEAR w-index.
      w-index = sy-tabix.
      wa_e1edka1 = wa_edidd-sdata.
      CLEAR: wa_e1edka1-parvw,
             wa_e1edka1-partn.
      wa_e1edka1-parvw = w-parvw.
      wa_e1edka1-partn = w-inpnr_ag.
      wa_edidd-sdata = wa_e1edka1.
      MODIFY dedidd FROM wa_edidd INDEX w-index TRANSPORTING sdata.
    ENDIF.
    Thanks
    Umesh

    Hi,
    Use the COMPARING fields addition in the MODIFY statement:
    MODIFY ........ COMPARING <key fields>
    Hope this helps.
    Regards
    Narin Nandivada

  • IChart Date updation problem in version 12.1

    Hi,
    I have created a line iChart for a TagQuery in which I am passing the tags, start date, end date, and server (simulator) through the following script:
    var chart = document.iChart;
    var qryObj = chart.getQueryObject();
    qryObj.setServer("simulator");
    qryObj.setStartDate("01/14/2011 07:00:00");
    qryObj.setEndDate("01/14/2011 07:01:00");
    qryObj.setDuration(60);
    chart.refresh();
    I am getting the line chart, but it always displays the StartDate as the system's current date and the EndDate as StartDate + 60 min (from setDuration). I need the line chart for the dates I am passing through the script.
    Thank you for the answers in advance..

    Use updateChart(true) instead of refresh().
    Your query is time-based, so refresh() will 'refresh' the chart to the most recent duration.
    http://help.sap.com/saphelp_mii122/helpdata/EN/44/b305d64c7914e6e10000000a114e5d/frameset.htm
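
    Applied to the script from the question, that would presumably look like this (a sketch; dropping setDuration(60) is an assumption, on the reasoning that the rolling duration is what keeps snapping the chart back to the current time):

        var chart = document.iChart;
        var qryObj = chart.getQueryObject();
        qryObj.setServer("simulator");
        qryObj.setStartDate("01/14/2011 07:00:00");
        qryObj.setEndDate("01/14/2011 07:01:00");
        chart.updateChart(true);  // redraw using the explicit start/end dates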

  • Problem with large data report

    I tried to run a template I got from Release 12 using data from the release we are using (11i). The XML file is about 13,500 KB when I run it from my desktop.
    I get the following error (mostly no output is generated; sometimes it is generated after a long time):
    Font Dir: C:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\fonts
    Run XDO Start
    RTFProcessor setLocale: en-us
    FOProcessor setData: C:\Documents and Settings\skiran\Desktop\working\2648119.xml
    FOProcessor setLocale: en-us
    I assumed there may be compatibility issues between R12 and 11i, hence I tried to write my own template, and I ran into the same issue
    when I added the third nested loop.
    I also noticed javaws.exe runs in the background hogging a lot of memory. I am using BI Publisher version 5.6.3.
    I tried to run the template through Template Viewer. The process never completes.
    The log file is
    [010109_121009828][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setData(InputStream) is called.
    [010109_121014796][][STATEMENT] Logger.init(): *** DEBUG MODE IS OFF. ***
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setTemplate(InputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutput(OutputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutputFormat(byte)is called with ID=1.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setLocale is called with 'en-US'.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.process() is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.generate() called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] createFO(Object, Object) is called.
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] oracle.xdo Developers Kit 10.1.0.5.0 - Production
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] Scalable Feature Disabled
    End of Process.
    Time: 436.906 sec.
    FO Formatting failed.
    I can't seem to figure out whether this is a looping issue, a large-data issue, or a BI version issue. Please advise.
    Thank you

    The report will probably fail in a production environment if you don't have enough heap. 13 MB is a big XML file for the parsers to handle; it will probably crush the OPP. The whole document has to be loaded into memory, and preserving the relationships in the document is probably what's killing your performance. The OPP/FOProcessor does not use the SAX parser like the bursting engine does. I would suggest setting a maximum on the number of documents that can be created and submitting in a set of batches. That will reduce your XML file size, and performance will increase.
    An alternative to the previous approach would be to write a concurrent program that merges the PDFs using the document merger API. This would allow you to burst the document into a temp directory and then reassemble it into one document. One disadvantage of this approach is that the PDF is going to be freakin' huge. Also, if you have to send that piggy to the printer you're gonna have some problems too. When you convert the PDF to PS the files are going to be massive because of the loss of compression; it gets even worse if the PDF has images... Then you'll have more problems with disk on the server and/or running out of memory on PS printers.
    All of the things I have discussed I have done in some fashion. Speaking from experience, a 13 MB XML file is just a really bad idea. I would go with option one.
    Ike Wiggins
    http://bipublisher.blogspot.com

  • Problem in data update to BW

    Hi All,
    We are facing a very peculiar problem with data updates in BW. Data for GL and AR gets updated twice every day through a process chain. But sometimes, even after a normal load, the data does not get updated properly, and the changed data comes through only after 2-3 loads, though it should have come before. Can you please suggest possible reasons for this behaviour?
    Thanks in advance,
    Sananda

    Hi Sananda,
    Please go through the below article:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a00ae8f2-03ad-2d10-71b7-962915661a93?quicklink=index&overridelayout=true
    Also, can you tell me the type of delta update you are using?
    This is likely because of your delta update mode.
    Hope this helps
    Regards,
    Venkatesh

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have the LabVIEW 2009 64-bit version running on a Win7 64-bit OS with an Intel Xeon dual quad-core processor and 16 GB RAM. With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2-3 GB range in RAM, since we now have access to all of the available RAM. But I am having major problems - sluggish operation of the program (and outright stoppages), inability to perform certain operations, etc.
    Here is how I store the 3-D data, which consists of a series of images: I store each of my 2-D images in a cluster, and then have the entire image series as an array of these clusters. I then store this entire array of clusters in a queue, which I regularly access using 'Preview Queue', operate on the image set, subsets of the images, or single images, and then enqueue again.
    I remember talking to LabVIEW R&D years ago and hearing that this was a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters) (R&D - this is what I remember; please correct me if wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded - and, I think, disk access as well to obtain memory beyond 16 GB - I am wondering if I need to use a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
    I have other CT imaging programs that run very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application. I would like to work with LabVIEW R&D to solve this issue. I am wondering if I should be thinking about establishing, say, 10 queues instead of 1 to address this. It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600-700 MB range with 64-bit LabVIEW.
    With 32-bit LabVIEW, 100-200 MB sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM. We could have used other means, such as LV2-style globals. But the idea of clustering the 2-D array (image) and then having a series of those clustered arrays in an array (to get the final structure I showed in my diagram), versus using a 3-D array, is I believe what even allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 GB. I probably need to have someone examine this code while I am explaining things to them live. This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem. In some of my applications, I use the in-place structure for indexing data out of arrays to minimize data copies. I expect I might have to consider this strategy here as well. Just a thought.
    What I can do is send someone (in the US) via large file transfer a 1.3-2.7 GB set of image data - and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and how not to make data copies. The operations that I apply to the images are irrelevant. It is the storage, movement, and extraction that are causing the problems. I can also show screenshot(s) of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications? Would using them eliminate copies? I currently have to wait for the 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don

  • Large data transfers in early morning hours putting me over my data limit

    I am getting large data transfers labeled "Intranet Media Net" from two different iPhones in my home. These transfers typically occur in the early morning hours when we are sleeping and when we should be connected to Wi-Fi. AT&T suggested that I turn off the option to send updates to Apple, which I did. However, the charges are still occurring. Last night a 35 MB transfer nearly put my daughter's phone over her 200 MB limit. AT&T suggested that I turn off cellular data to eliminate these from occurring, but that seems silly. Plus, who is to say that they won't occur when I turn cellular data back on in order to use data outside of a Wi-Fi area? Any help would be much appreciated. I don't know if this is an Apple issue or an AT&T issue.

    How do you get logs out of a TC anyway?
    From the latest one you cannot.. no logs at all, so that is the direction Apple is moving.. the black box.. well, white box, that you feed this into and get that out of.. but all the works inside are a complete mystery.. that is the end point.. not quite there yet.
    Logs work via the v5 utility or SNMP.. both still working on any of the earlier TCs.
    You can install v5 on Mountain Lion.
    How to load 5.6 into ML.
    1. Download 5.6 for Lion.
    http://support.apple.com/kb/DL1482
    Click to open the dmg but do not attempt to install the pkg.. it won't work anyway.
    Leave the package open on the desktop so you can see the file: AirPortUtility56.pkg
    2. Download and install unpkg.
    http://www.timdoug.com/unpkg/
    Run unpkg on the desktop.. If your Mac refuses to run the software because it wasn't downloaded from the App Store, go to Security in System Preferences and allow software from other sources to run.. this is a limitation of trade, methinks. You can set it back later if you like.
    Now drag the AirPortUtility56.pkg file over to unpkg.. and it will create a new directory of the same name on the desktop. In Finder, open the new directory and drill down.. Applications, Utilities.. there, lo and behold, is AirPort Utility 5.6.. drag it to your main Utilities directory or just run it from its current location.
    You cannot uninstall version 6 (now 6.3 if you updated), so don't try.. and you cannot, or should not, run them both at the same time.. although I have had no problems when doing so.
    Give the 7.6.3 firmware the heave-ho.. use 7.6.1, and if the TC is more than 18 months old or so.. even a Gen 4.. you can go back to 7.5.2, which seems more solid again.
    As stated above.. by the time you get to an early Gen 3.. they can have board faults, and earlier ones are especially unreliable, with power supply faults.

  • [Bug?] X-Control Memory Leak with Large Data Array

    [LV2009]
    [Cross-posted to LAVA]
    I have found that if I pass a large data array (~4MB in this example) into an X-Control, it causes massive memory allocations (1 GB+).
    Is this a known issue?
    The X-Control in the video was created, then the Data.ctl was changed to 2D Array - it has not been edited in any other way.
    I also compare the allocations to that of a native 2D Array (which is only ~4MB).
    Note: I jiggled the Windows Task Manager about so that JING would update correctly; it's a bit slow, but it essentially just keeps rolling up and doesn't stop.
    Demo code attached.
    Cheers
    -JG
    Certified LabVIEW Architect * LabVIEW Champion
    Attachments:
    X Control Bug [LV2009].zip - 42 KB

    Hi Jon (cool name)
    Thank you very much for your reply. We came to this conclusion in the cross-post, and it is good to have it confirmed by LabVIEW R&D. Your response is also similar to the one from my AE, which I got this morning as well - see below:
    Note: Your reference number is included in the Subject field of this
    message. It is very important that you do not remove or modify this
    reference number, or your message may be returned to you.
    Hi Jon,
    You probably found some information on the forum. The US engineer has gotten back, and he said that, unfortunately, after conducting some tests, this is expected behaviour. This is what he replied:
    "XControls in the background use event structures. In particular, the Data Change event is called when the value of the XControl changes (writing to the terminal, a local variable, or the value change property). What is happening in this case is that the XControl is getting called so fast, with such a large set of data, that the event structure queues the events and data and a memory leak is produced. It is, unfortunately, expected behavior. The main workaround for the customer in this case is to not call the XControl as often. Another possibility is to use the Synchronous Display property to defer updates to the XControl; this might slow down the leak."
    He would also like to know if you can provide more details on how you are using the XControl; perhaps there is a better way. Please refer to the link below for synchronous display. Thank you.
    http://zone.ni.com/reference/en-XX/help/371361G-01/lvprop/control_synchronous_display/
    In my application I updated the X-Control at 1 Hz and it allocated MBs per second, up to 1+ GB before it crashed, all within a few hours. That is why I called it a leak. I am really worried that if this CAR gets killed, there will still be a lingering issue that makes using X-Controls a major problem under the above conditions. I have had to pull two sets of libraries from my code because of this - when they were replaced with native LabVIEW controls the leak went away (but I lost reuse and encapsulation, etc...).
    Anyway, I really do want to use X-Controls (now and in the future) as I like all other aspects of them. If you do not consider this a leak, can a different CAR be raised that may modify the existing behavior? I offered the suggestion (in the cross-post) that the data be ignored rather than queued - similar to Christian's idea, but for X-Controls. Maybe as an option?
    I look forward to discussing this with you further.
    Regards
    -Jon
    Certified LabVIEW Architect * LabVIEW Champion

  • INPUT: KT4/V vs CRC error in large data transfer/CD burning HERE!

    This issue can be solved with a BIOS update. KT4V and KT4 Ultra users who are having this problem can request the TEST BIOS to test on their systems. You may either PM/email me or Bas, or get it at ftp://ftp.heppen.be/MSI/
    Please report back whether the test BIOS really fixes the problem, causes any new problems, or brings any performance hit.
    ** this sounds like a Christmas Gift to KT4V users AND a New Year Gift to KT4 Ultra users!!!  :D  **
    To all KT4 Ultra and KT4V users: whether or not you have data corruption or CRC errors in large data transfers and CD burning, your inputs are needed.
    Please list your system specs in as much detail as possible. Below is a guideline; you may copy it (Ctrl-C) and fill in your specs in your post.
    1. System specs:
    CPU:
    Motherboard:
    RAM Slot-1: (exact brand and model)
    RAM Slot-2:
    RAM Slot-3:
    display card:  [no overclock]
    IDE-1M:  (exact HDD brand and model pls)
    IDE-1S:
    IDE-2M:
    IDE-2S:
    IDE-3:
    SER-1:
    SER-2:
    PCI-1:
    PCI-2:
    PCI-3:
    PCI-4:
    PCI-5:
    PCI-6:
    PSU: (brand, model, total power, (estimated) combined power)
    BIOS revision:
    Operating System:
    VIA 4-in-1 drivers : (if you installed it, tell us the version)
    other drivers, services or applications might affect the data transfer such as : PCI Latency patch, WPCREDIT modifications, VCool, CoolerXP...
    2. CRC ERROR?
    PASS or FAIL
    If PASS, let us know your BIOS settings.
    If FAIL, proceed as below:
    3. Please use these BIOS settings:
    1. Load BIOS Setup Defaults
    2. NO OVERCLOCK ON FSB! Set it according to your CPU
    3. Set RAM to
    a) SPD; if that fails, try
    b) user-defined with the slowest RAM timings, i.e. 266, 2.5, 3, 6, 3, disable interleave, 4, disable 1T, normal
    If PASS, go for the more extreme BIOS settings you usually use:
    1. High Performance Defaults
    2. set RAM to the extreme timings
    3. DO NOT overclock yet, not until both 1. and 2. PASS
    4. Try these suggestions:
    1. Microsoft IDE drivers (uninstall the VIA 4-in-1s)
    2. a different VIA 4-in-1 version's IDE filter driver
    3. the VIA IDE Miniport driver
    4. use the IDE-3 RAID channel for one HDD's data transfers
    5. same-HDD transfers, i.e. C:\dir1\*.* -> C:\dir2\*.*
    6. burn CDs at 1x speed
    7. set the HDD and/or CD to PIO mode, or a slower UDMA mode
    8. if and only if you know how to update a BIOS correctly and are willing to take some risks, try the BETA BIOS KT4 (1.25) or KT4V (1.64) too
    Please report back your tests and experimentation with these suggestions.
    If you have any workaround for this issue other than setting the FSB back to 100 MHz, please tell us too.
    Thanks for your inputs!

    My system, which I just got 2 days ago:
    CPU: Athlon XP 2000+
    Motherboard: MSI KT4V (MS-6712)
    RAM Slot-1:
    RAM Slot-2: 512 Mb Kingston DDR 333 CAS 2.5
    RAM Slot-3:
    display card: Abit Siluro GF3 Ti200
    IDE-1M: Western Digital WD800JB (8 Mb Cache) - 80 GB
    IDE-1S: Seagate U-Series ST360020A - 60 GB
    IDE-2M: Sony DVD-ROM 16x (DDU1621)
    IDE-2S: Creative CDRW121032
    IDE-3:
    SER-1:
    SER-2:
    PCI-1:
    PCI-2:
    PCI-3: Accton 1207F 10/100 Fast Ethernet Card
    PCI-4: SBLIVE 5.1 Platinum with Live!Drive II
    PCI-5:
    PCI-6:
    PSU: 400Watts (Generic)
    BIOS revision: 1.6
    Operating System: Windows 2000 with SP3
    VIA 4-in-1 drivers : Hyperion 4.45, only AGP and INF installed. IDE drivers are standard Win2k/SP3 ones.
    2. CRC ERROR?
    FAIL.
    When I got my system, I tried installing Windows ME as I wanted to dual-boot together with Win2K. When I tried to install the NVIDIA Detonator drivers (ver 30.82), it proceeded normally and asked for a reboot, which I did; then it just hung before the start of Windows ME. I rebooted, selected "Normal" after Windows ME detected a failed startup, and was later able to enter Windows ME, but it reported that the NVIDIA Detonator drivers were invalid and of the wrong type for my display card.
    Later I tried to move my files from C:\ to D:\ and it reported that the destination file was invalid.
    When I changed my OS to Win2K/SP3 (no more dual-boot) and installed the same NVIDIA Detonator driver version, it worked. When I started to copy files again, it later BSODed and said PAGE_FAULT_ERROR (something like that). When installing from CDs, it would report that my .CAB files were corrupt or that there was insufficient swap file space (I set mine manually to 1.2 GB). Then there were times during reboots into Win2K when I found my keyboard and mouse (both PS/2) not working while Windows loaded as usual.
    Later I changed my PCI latency setting from 32 to 96, and I managed to install from CD without much further trouble.
    Until reading the posts here, I didn't realize that the MSI KT4 series and the KT400 chipset had so many issues! I have read countless sites like ExtremeTech and AnandTech, and none reported the particular errors I encountered during my first 2 days with this setup. (This is my first Athlon setup; I was previously an Intel type of person.)
    So far, my conclusions for my settings are:
    - 32-bit settings in the BIOS for CD/DVD/CD-RW must be disabled; I concur with Shumway's recommendations.
    - DMA settings in Windows 2000 for CD/DVD/CD-RW must be set to PIO mode, otherwise copying from CD to HDD gives read errors.
    - Installing the PCI latency fix really does wonders for my setup (PCI Latency fix ver 1.9). Now I can copy files between all my drives without worrying so much about CRC errors. Thank you, George E. Breese.
    - I really want to know why I can't install the NVIDIA Detonator drivers in WinME, while in Win2K I can.
    I'll post again once I have done some more tests on my system, especially CD-R writes.
    Angel17

  • Query Error Information: Result set is too large; data retrieval ......

    Hi Experts,
    I have a problem with my query information. When I'm executing my report and drilling down in the navigation panel, instead of a table with values the message "Result set is too large; data retrieval restricted by configuration" appears. I already applied Note 1127156 - "Safety belt: Result set is too large". I imported Support Package 13 for SAP NetWeaver 7.0 BI Java (BIIBC13_0.SCA / BIBASES13_0.SCA / BIWEBAPP13_0.SCA) and executed the program SAP_RSADMIN_MAINTAIN (in transaction SE38) with the object and the value that Note 1127156 specifies... but the problem still appears.
    What could I be missing? How can I fix this issue?
    Thank you very much for helping me out... (any help will be rewarded)
    David Corté

    You may ask your Basis guy to increase the ESM buffer (rsdb/esm/buffersize_kb). Did you check the system's memory?
    Did you try checking the error dump using ST22 - runtime error analysis?
    Edited by: ashok saha on Feb 27, 2008 10:27 PM

  • Result set is too large; data retrieval restricted by configuration

    Hi,
    While executing a query for a given period, the message 'Result set is too large; data retrieval restricted by configuration' is displayed. I searched SDN and referred to the following link:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/d047e1a1-ad5d-2c10-5cb1-f4ff99fc63c4&overridelayout=true
    Steps followed:
    1) Transaction Code SE38
    2) In the program field, entered the report name SAP_RSADMIN_MAINTAIN and Executed.
    3) For OBJECT, entered the following parameter: BICS_DA_RESULT_SET_LIMIT_MAX
    4) For VALUE, entered the value for the size of the result set, and then executed the program.
    After these steps, the below message is displayed:
    OLD SETTING:
    OBJECT =                                VALUE =
    UPDATE failed because there is no record
    OBJECT = BICS_DA_RESULT_SET_LIMIT_MAX
    A similar message is displayed for the object BICS_DA_RESULT_SET_LIMIT_DEF.
    Please let me know how to proceed on this.
    Thanks in advance.

    Thanks for the reply!
    The objects are not available in the RSADMIN table.

  • When the Apple review team reviewed our app, they pointed out that our app uses a background mode but does not include functionality that requires that mode to run persistently. But in fact, when the app is in the background, the app needs data updates to make the

    When the Apple review team reviewed our app, they pointed out that our app uses a background mode but does not include functionality that requires that mode to run persistently. But in fact, when the app is in the background, the app needs data updates to make the trajectory-replay feature work. We have added functionality for when the app is in background mode, and we have pointed this out to them by email, but they still have questions about the background mode. We are confused. Can anyone help me? I still don't know why the review team can't find the data updates when the app is in the background, how I should modify the app, or what the real problem they referred to is. Do I misunderstand them?
    Below is the content of the review team's email:
    We found that your app uses a background mode but does not include functionality that requires that mode to run persistently. This behavior is not in compliance with the App Store Review Guidelines.
    We noticed your app declares support for location in the UIBackgroundModes key in your Info.plist but does not include features that require persistent location.
    It would be appropriate to add features that require persistent use of real-time location updates while the app is in the background or remove the "location" setting from the UIBackgroundModes key. If your application does not require persistent, real-time location updates, we recommend using the significant-change location service or the region monitoring location service.
    For more information on these options, please see the "Starting the Significant-Change Location Service" and "Monitoring Shape-Based Regions" sections in the Location Awareness Programming Guide.
    If you choose to add features that use the Location Background Mode, please include the following battery use disclaimer in your Application Description:
    "Continued use of GPS running in the background can dramatically decrease battery life."
    Additionally, at your earliest opportunity, please review the following question/s and provide as detailed information as you can in response. The more information you can provide upfront, the sooner we can complete your review.
    We are unable to access the app in use in "http://www.wayding.com/waydingweb/article/12/139". Please provide us a valid demo video to show your app in use.
    For discrete code-level questions, you may wish to consult with Apple Developer Technical Support. When the DTS engineer follows up with you, please be ready to provide:
    - complete details of your rejection issue(s)
    - screenshots
    - steps to reproduce the issue(s)
    - symbolicated crash logs - if your issue results in a crash log
    If you have difficulty reproducing a reported issue, please try testing the workflow as described in <https://developer.apple.com/library/ios/qa/qa1764/>Technical Q&A QA1764: How to reproduce a crash or bug that only App Review or users are seeing.
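
    For reference, the significant-change service the reviewers recommend boils down to one CLLocationManager call. A minimal Swift sketch (not from the original thread; the class and handler names are hypothetical):

        import CoreLocation

        final class TrackingManager: NSObject, CLLocationManagerDelegate {
            private let manager = CLLocationManager()

            func start() {
                manager.delegate = self
                manager.requestAlwaysAuthorization()  // background location wake-ups require "Always"
                // Wakes the app on coarse (roughly cell-tower-level) moves instead of
                // keeping GPS running persistently in the background:
                manager.startMonitoringSignificantLocationChanges()
            }

            func locationManager(_ manager: CLLocationManager,
                                 didUpdateLocations locations: [CLLocation]) {
                // Record the new points for trajectory replay (hypothetical handler).
            }
        }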

    Unfortunately, these forums are all user-to-user; you might try the developer forums or get in touch with the team that you are working with.

  • Date preferences problem - Photoshop CS2 and OS X 10.4.6

    I am running Photoshop CS2 9.0.1 on two Mac G5s (Mac OS X 10.4.6) and Photoshop CS 8.0 on a Mac G4 (Mac OS X 10.4.6) but now have a problem with the correct display of dates in File Info on the pair of G5s running CS2 9.0.1.
    However I configure the date display preferences in International in System Preferences (UK prefs - DD/MM/YYYY), File Info will now only display the date as 'short', e.g. 1/6/06. Naturally this presents a major problem with all my pre-2000 dates. By contrast, with exactly identical System Preferences on the G4 with Photoshop CS, the dates display correctly, i.e. 01/06/2006 or 06/06/1944. One of the G5s also has Photoshop CS 8.0 and exhibits the same problem.
    Is this an OS X 10.4.6 update problem, as I believe, or a combination of flaws between Apple and Adobe? Is there a cure? With a vast number of images in a picture library it is essential to have accurate, pre-2000, dates embedded in the File Info metadata. Can anyone offer an answer, a solution or a workaround?

    After much trial and error I eventually discovered a fix. When customizing the Date Format Preferences (UK) (in System Preferences > International), select 'Short' and then drag the day of the week from the list below to be the first item in the displayed date, i.e. Thursday 08/06/2006.
    Then modify the DD, MM and YYYY by highlighting the numbers and selecting the full 2- or 4-digit format from the drop-down menus. This is something that seemed to be fine in 10.4.5 but has changed in 10.4.6!
