Performance blocker on cFP-2020: file-I/O! How to improve?

Hi all,
after resolving my serial-communication problems I still have a performance problem. The code is way too slow, which causes, for example, the FTP server on the FieldPoint to stop responding to requests from my PC - I always get timeouts. I also frequently get loop-finished-late events within my two state machines. I have now used the timing and performance monitor to see which VI is taking so much time. The result: a file-I/O VI that writes data and log entries into three different log files. With a former version I simply used one VI that appends a string to an existing file. However, since this function disappeared with LV 8.2, I had to rewrite the code to use the following sequence of LabVIEW functions:
- file open
- set pointer to the end
- write string
- file close
My VI which calls this sequence is horribly slow - execution time per run is about 200 ms, and it is at the top of the list in the performance monitor. Are there any suggestions on how to improve this code? I simply want to append a string to the end of the log file...
The VI is attached. There are two features in the code which are not self-explanatory: First, the initial sub-VI generates a new file if the current one has been in use for longer than a preset time (15 minutes in my case). The create time of the file is stored in the filename, and whenever the current time exceeds the create time plus 15 minutes, a new file name is generated. For simplicity, the name is stored only for the first of the three log files; the other two are derived by string operations from the first filename. Second, whenever a file is "created", that is, it does not exist yet, a data header is written to the file before data is appended.
Can you see simple improvements here that will accelerate this code? Maybe open the file only once, then append data subsequently, and only close it when a new file is created? But I do not need all three files all the time; there may be situations where only one file is needed and the others need not be created at all.
Thanks,
Olaf
Attachments:
makedatalogfiles.vi (42 KB)

Ravens Fan wrote:
I think moving the open file, move to end of file, and close file out of the loop would certainly help.  These functions could be associated with or built into your "determine new file" VI.  Since the file paths get passed into the loops, you could pass them through with shift registers so that you can close them after the loops end.
One other thing to look at is your Initialize Array and Insert Into Array functions.  I believe Insert Into Array is one of the costlier functions.  Build Array would be better, and initializing a much larger array and using Replace Array Subset is better yet.  But if you wind up with more elements than you had originally initialized for, you will have to use Build Array to enlarge it.  I would recommend searching the LabVIEW forum for Insert Into Array, Build Array, and Replace Array Subset for threads that do a better job explaining the differences and advantages of each.
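The same open-once / rotate pattern, written out as a rough sketch of the idea in Python rather than LabVIEW (the 15-minute rollover, the create time in the file name, and the header-on-creation mirror the description above; everything else, including the names, is made up for illustration):

    import os
    import time

    ROTATE_SECONDS = 15 * 60  # start a new file every 15 minutes, as in the original scheme

    class RotatingLog:
        """Keep one file handle open and append to it; close and reopen
        only when the rotation interval has elapsed."""

        def __init__(self, directory, prefix):
            self.directory = directory
            self.prefix = prefix
            self.handle = None
            self.created = 0.0

        def _open_new(self):
            self.created = time.time()
            # Create time encoded in the file name, like the original VI does.
            path = os.path.join(self.directory, "%s_%d.log" % (self.prefix, int(self.created)))
            is_new = not os.path.exists(path)
            self.handle = open(path, "a")  # append mode: no explicit seek to end needed
            if is_new:
                self.handle.write("# data header\n")  # header only for freshly created files

        def write(self, line):
            # Open lazily, so a file is only created when it is actually needed.
            if self.handle is None or time.time() - self.created > ROTATE_SECONDS:
                if self.handle is not None:
                    self.handle.close()
                self._open_new()
            self.handle.write(line + "\n")

        def close(self):
            if self.handle is not None:
                self.handle.close()
                self.handle = None

The expensive open/seek/close is then paid once per rollover instead of once per write - the textual equivalent of opening the file outside the loop, carrying the reference in a shift register, and closing it after the loop ends or when a new file name is generated.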
Thanks, that improved the performance of this VI by about two orders of magnitude. The application is now much more stable.
However, I cannot connect to the cFP-2020 anymore by FTP. I even switched the FieldPoint to boot without VIs.
To be specific, I can access the cFP and cd into all directories except for the directory with the data. I assume that there is a large number of files in it now, but it used to work before even with lots of files. The only thing that might not be so nice is that there is a space in the folder name, but that has been like that for years now and used to work.
Is there any reason (a corrupted file or something like that) that can cause FTP to fail on this specific directory?
Thanks,
Olaf
...at least I am very close now to a satisfying and running system.... :-)

Similar Messages

  • SQL Performance - Looking for a hint or suggestion on how to improve performance.

    I've linked several tables for the Sales Order, Delivery, and Invoicing. In essence, a query to show shipments and invoicing to a sales order.
    Throughput is poor: 60+ seconds, so I am looking for a solution... perhaps /*+ hint */ techniques to improve the performance of this code.
    Here is a functional version of the code....
    /* Functionally tested join between OM, WSH, AR Tables */
    Select oeh.order_number
         , trx_number as invc_no
         , rctl.line_number as invc_line_no
         , rctl.inventory_item_id rctl_inventory_item_id
         , rctl.sales_order_line as SO_Line_No
         , oel.line_id
         , rctl.line_type
         , oel.ship_from_org_id as oel_ship_from_org_id
         , rctl.warehouse_id
         , oel.ordered_quantity
         , oel.shipped_quantity
         , oel.invoiced_quantity
         , rctl.UNIT_SELLING_PRICE
         , rctl.extended_amount
         , rctl.revenue_amount
         , wdd.delivery_detail_id
         , wnd.delivery_id
         , rctl.interface_line_attribute1  -- Sales Order Number
         , rctl.interface_line_attribute3  -- delivery_id (wsh)
         , rctl.interface_line_attribute6  -- Sales Order Line Id
      From apps.oe_order_headers_all oeh
         , apps.oe_order_lines_all oel
         , apps.wsh_delivery_details wdd
         , apps.wsh_new_deliveries wnd
         , apps.wsh_delivery_assignments wda
         , apps.ra_customer_trx_all rct
         , apps.ra_customer_trx_lines_all rctl
    Where oeh.header_id = oel.header_id
       and wdd.source_header_id = oeh.header_id
       and wdd.source_header_id = oel.header_id
       and wdd.source_line_id = oel.line_id
       and wdd.SOURCE_CODE = 'OE'
       and wdd.delivery_detail_id = wda.delivery_detail_id
       and wda.delivery_id = wnd.delivery_id
       and rctl.interface_line_attribute1 = to_char(oeh.order_number)
       and rctl.interface_line_attribute6 = to_char(oel.line_id) --this is where explain plan cost is high!
       and rctl.org_id = oel.org_id
       and rctl.interface_line_attribute3 = to_char(wnd.delivery_id)
       and rctl.customer_trx_id = rct.customer_trx_id
       and rct.interface_header_context = 'ORDER ENTRY'
       and oeh.order_number = '99999' --enter desired sales order here....
    Order by 1,2,3,4;

    Can you provide your explain plan?
    Also, can you do a "set autotrace traceonly" and run it again and post the results?  The results of that would help people here provide more reasonable suggestions.
    (Mine so far is only: avoid hints - that's a crutch, not a solution to the real problem.)
    Are your statistics up to date?
    select table_name, last_analyzed from dba_tables
    where table_name in (
           'OE_ORDER_HEADERS_ALL','OE_ORDER_LINES_ALL','WSH_DELIVERY_DETAILS',
           'WSH_NEW_DELIVERIES','WSH_DELIVERY_ASSIGNMENTS','RA_CUSTOMER_TRX_ALL',
           'RA_CUSTOMER_TRX_LINES_ALL' );

  • How to improve the OpenGL performance for AE

    I upgraded my display card from Nvidia 8600GT to GTX260+ hoping to have a better and smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the 2 cards with the Cinebench software and the results are almost the same for the 2 cards.
    I wonder why the GTX260+ costs about 3 times as much as the 8600GT when its OpenGL performance is almost the same.
    Any idea how to improve the OpenGL performance, please?
    Regards

    juskocf wrote:
    But to scrub the timeline smoothly, I think OpenGL plays an important role.
    No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect, or exhaustion of the available resources, can negate that.
    juskocf wrote:
    Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects.  Just wonder why the OpenGL Performance of such an expensive card should be so weak.
    It's not the card, it's what the card does. See my above comment. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will allow you to manipulate bazillions of polygons - as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when it needs to use textures and load those textures into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL - it didn't exist at the time - but rather the other way around: OpenGL was bolted on at some point, and now there are a number of situations where one gets in the way of the other...
    Mylenium

  • Index file increase with no corresponding increase in block numbers or Pag file size

    Hi All,
    Just wondering if anyone else has experienced this issue and/or can help explain why it is happening....
    I have a BSO cube fronted by a Hyperion Planning app, in version 11.1.2.1.000
    The cube is in its infancy, but already contains 24M blocks, with a PAG file size of 12GB.  We expect this to grow fairly rapidly over the next 12 months or so.
    After performing a simple Agg (aggregating the sparse dimensions), the Index file sits at 1.6GB.
    When I then perform a dense restructure, the index file reduces to 0.6GB.  The PAG file remains around 12GB (a minor reduction of 0.4GB occurs).  The number of blocks remains exactly the same.
    If I then run the Agg script again, the number of blocks again remains exactly the same, the PAG file increases by about 0.4GB, but the index file size leaps back to 1.6GB.
    If I then immediately re-run the Agg script, the # blocks still remains the same, the PAG file increases marginally (less than 0.1GB) and the Index remains exactly the same at 1.6GB.
    Subsequent passes of the Agg script have the same effect - a slight increase in the PAG file only.
    Performing another dense restructure reverts the Index file to 0.6GB (exactly the same number of bytes as before).
    I have tried running the Aggs using parallel calcs, and also in series (ie single thread), and get exactly the same results.
    I figured there must be some kind of fragmentation happening on the Index, but can't think of a way to prove it.  At all stages of the above test, the Average Clustering Ratio remains at 1.00, but I believe this just relates to the data, rather than the Index.
    After a bit of research, it seems older versions of Essbase used to suffer from this Index 'leakage', but that it was fixed way before 11.1.2.1. 
    I also found the following thread which indicates that the Index tags may be duplicated during a calc to allow a read of the data during the calc;
    http://www.network54.com/Forum/58296/thread/1038502076/1038565646/index+file+size+grows+with+same+data+-
    However, even if all the Index tags are duplicated, I would expect the maximum growth of the Index file to be 100%, right?  But I am getting more than 160% growth (1.6GB / 0.6GB).
    And what I haven't mentioned is that I am only aggregating a subset of the database, as my Agg script fixes on only certain members of my non-aggregating sparse dimensions (ie only 1 Scenario & Version)
    The Index file growth in itself is not a problem.  But the knock-on effect is that calc times increase - if I run back-to-back Aggs as above, the 2nd Agg calc takes 20% longer than the 1st.  And with the expected growth of the model, this will likely get much worse.
    Anyone have any explanation as to what is occurring, and how to prevent it...?
    Happy to add any other details that might help with troubleshooting, but thought I'd see if I get any bites first.
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.

    alan.d wrote:
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.
    I haven't tried Direct I/O for quite a while, but I never got it to work properly. It's not exactly the same issue that you have, but it would spawn tons of .pag files in the past. You might try duplicating your cube, changing it to buffered I/O, running the same processes, and seeing if it does the same thing.
    Sabrina

  • Oracle 11g: Block Corruption in SYSAUX File

    Hello All,
    I am facing a data corruption issue in the SYSAUX file.
    We are using Oracle 11g (32-bit, on Windows) and our system is running in noarchivelog mode.
    Following are the errors in the alert log.
    e:\sc\sc15.2\databases\oracleconfig\diag\rdbms\enmscsdb\nm45\trace\nm45_p000_5944.trc
    Corrupt block relative dba: 0x0088a9f8 (file 2, block 567800)
    Fractured block found during buffer read
    Data in bad block:
    type: 6 format: 2 rdba: 0x0088a9f8
    last change scn: 0x0000.0b3bb7c7 seq: 0x1 flg: 0x04
    spare1: 0x0 spare2: 0x0 spare3: 0x0
    consistency value in tail: 0xc7000601
    check value in block header: 0xee6b
    computed block checksum: 0x72c6
    Reread of rdba: 0x0088a9f8 (file 2, block 567800) found same corrupted data
    Thu Jan 22 16:46:44 2009
    SMON: Restarting fast_start parallel rollback
    SMON: ignoring slave err,downgrading to serial rollback
    ORACLE Instance nm45 (pid = 12) - Error 1578 encountered while recovering transaction (9, 11) on object 458.
    Errors in file e:\sc\sc15.2\databases\oracleconfig\diag\rdbms\enmscsdb\nm45\trace\nm45_smon_6492.trc:
    ORA-01578: ORACLE data block corrupted (file # 2, block # 567800)
    ORA-01110: data file 2: 'E:\SC\SC15.2\DATABASES\ORACLECONFIG\SYSAUXNM45.ORA'
    Thu Jan 22 16:46:45 2009
    Trace dumping is performing id=[cdmp_20090122164645]
    Corrupt Block Found
    TSN = 1, TSNAME = SYSAUX
    RFN = 2, BLK = 567800, RDBA = 8956408
    OBJN = 458, OBJD = 458, OBJECT = I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST, SUBOBJECT =
    SEGMENT OWNER = SYS, SEGMENT TYPE = Index Segment
    The following query indicates that the corruption is in an index.
    SQL> SELECT tablespace_name, segment_type, owner, segment_name FROM dba_extents
    WHERE file_id = 2 and 567800 between block_id AND block_id + blocks - 1;
    TABLESPACE_NAME  SEGMENT_TYPE  OWNER  SEGMENT_NAME
    SYSAUX           INDEX         SYS    I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST
    ==============
    DBverify output:
    ==============
    E:\SC\SC15.2\Databases\OracleConfig>dbv file=SYSAUXNM45.ORA blocksize=8192
    DBVERIFY: Release 11.1.0.7.0 - Production on Thu Jan 22 16:59:11 2009
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    DBVERIFY - Verification starting : FILE = E:\SC\SC15.2\Databases\OracleConfig/SY
    SAUXNM45.ORA
    DBV-00200: Block, DBA 8956312, already marked corrupt
    Page 567800 is influx - most likely media corrupt
    Corrupt block relative dba: 0x0088a9f8 (file 2, block 567800)
    Fractured block found during dbv:
    Data in bad block:
    type: 6 format: 2 rdba: 0x0088a9f8
    last change scn: 0x0000.0b3bb7c7 seq: 0x1 flg: 0x04
    spare1: 0x0 spare2: 0x0 spare3: 0x0
    consistency value in tail: 0xc7000601
    check value in block header: 0xee6b
    computed block checksum: 0x72c6
    DBVERIFY - Verification complete
    Total Pages Examined : 1623864
    Total Pages Processed (Data) : 540984
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 964944
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 17849
    Total Pages Processed (Seg) : 0
    Total Pages Failing (Seg) : 0
    Total Pages Empty : 100086
    Total Pages Marked Corrupt : 2
    Total Pages Influx : 1
    Total Pages Encrypted : 0
    Highest block SCN : 190789648 (0.190789648)
    SQL> select * from v$database_block_corruption;
    FILE#  BLOCK#  BLOCKS  CORRUPTION_CHANGE#  CORRUPTION_TYPE
    2      567800  1       0                   FRACTURED
    2      567704  1       0                   FRACTURED
    How can I resolve this issue?
    Thanks
    With Regards
    Hemant Joshi.

    Dropping and re-creating the index would be better than rebuilding it.
    Check the My Oracle Support (Metalink) notes:
    Note 836658.1 - Identify the corruption extension using RMAN/DBV/ANALYZE etc.
    Note 28814.1 - Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g
    Note 472231.1 - How to identify all the Corrupted Objects in the Database reported by RMAN
    Note 830997.1 - ORA-1578 Main Reference Index for Solutions

  • cFP-2020 / cFP-DI-330

    Hi everybody!
    I'm trying to perform a digital input acquisition using a cFP-DI-330 on the cFP-2020, but I don't understand how to make the physical connections.
    In the diagrams I see two inputs, INa and INb, but I don't know which one I have to use since I'll be working with TTL signals.
    I have worked with DAQ devices, but this is a little different; I think I don't need the external supply for doing this... Thanks in advance!
    Solved!
    Go to Solution.

    The TTL signal is applied between INa and INb; refer to the manual, specifically page 9.
    http://www.ni.com/pdf/manuals/323301c.pdf
    ~~~~~~~~~~~~~~~~~~~~~~~~~~
    "It’s the questions that drive us.”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

  • Corrupt block detected in control file

    Hi All,
    I have a scenario where I have set up Active/Standby RACs and successfully have archive redo logs being applied to Standby - everything was ok
    Versions - Oracle 11g R2, on RHEL 5
    Scenario 1:
    Redo log application on the Standby works perfectly when I do not create our software application tables using SQL scripts on the Primary until AFTER the Data Guard/RAC setup is completed successfully.
    Scenario 2:
    Redo log application does not work when I run our SQL scripts BEFORE taking the RMAN backup of the Primary that is duplicated to the Standby.
    Everything comes up on the Standby after the RMAN duplicate, and archive logs get transferred, but now they do not get applied.
    I see ORA-00227: corrupt block detected in control file: (block 1, # blocks 1) in the alert log when I put the standby in Recovery Mode.
    My theory is that somehow our SQL scripts are breaking my RMAN backups when I run them before creating an RMAN backup of the Primary to load on the Standby. I just need someone to advise whether this is a possibility from their experience; if so, I will contact Oracle support to investigate further. This is my first time working on RAC, DG, etc.
    Thanks

    Hi All,
    I've tried to upgrade Oracle to 11.2.0.2 to fix this issue - which I can no longer remember!
    Managed to complete the upgrade on the standby node (after having to reinstall due to a hostname change).
    Now trying the Active node, I see the following error during the grid upgrade when I execute rootupgrade.sh:
    Now product-specific root actions will be performed.
    Using configuration parameter file: /opt/app/11.2.0/grid2/crs/install/crsconfig_params
    Creating trace directory
    Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
    The fixes for bug 9413827 are not present in the 11.2.0.1 crs home
    Apply the patches for these bugs in the 11.2.0.1 crs home and then run rootupgrade.sh
    /opt/app/11.2.0/grid2/perl/bin/perl -I/opt/app/11.2.0/grid2/perl/lib -I/opt/app/11.2.0/grid2/crs/install /opt/app/11.2.0/grid2/crs/install/rootcrs.pl execution failed
    I have to download the patch for bug 9413827 from MOS, somehow apply it to the old 11.2.0.1 grid home, and then run rootupgrade.sh.

  • Corrupted software installation in cFP-2020....

    My cFP-2020's status LED flashes three times; I know that means a corrupt
    software installation.
    I think someone in my department tried to do something and after doing it
    wrong she decided to install the whole operating system from scratch.
    Now it seems that I cannot recover the installation: the system is
    unavailable, and if I try to install the firmware following the instructions
    in the manual I end up getting an unrecoverable error. BTW, the installation
    process is very slow and it ends at around 75% of the entire procedure.
    Any help as to how I can reinstall the software would be greatly
    appreciated.
    Thanks to everyone!
    Nicola

    First, try FTP'ing to the cFP-2020 and delete all files that you possibly can. This can be done by opening a web browser (Internet Explorer) and typing in ftp://xxx.xxx.xxx.xxx where the x's are the ip address. You probably will not be able to delete all files, but delete as many as you can.
    Next, set the Reset dip switch into the ON position and cycle the power on the cFP-2020. You should notice that the status LED will blink 1 time continuously. At this point, set the Reset dip switch to the OFF position and cycle the power again. Now you should be able to configure the module as well as install the software for the module.
    I hope that this information is helpful. Have a good day.

  • I receive the error: Unable to load block component C:\Program Files\Hyperception\VABINF\KeyRcv.dll

    I have two major issues with our Infinity Project computers.
    We have one entire lab that is getting this error... "Unable to load
    block component C:\Program Files\Hyperception\VABINF\KeyRcv.dll. 
    Please verify that the component exists and any dependencies it may
    have are satisfied."  It is also producing this error with
    NumEntry.dll.  Both files exist.  I am not sure what
    dependencies these files may have.  Ideas?
    We have several systems that are giving the following error on one of
    our labs... "You have exceeded the number of block functions allowed by
    VAB for Infinity; contact Hyperception regarding upgrades."  From
    what the professor here at UTA told me when I first took on this
    project, this was a problem that we had on the previous boards and that
    someone at Hyperception was able to help us resolve this problem.
    Can you help out with these issues?  For the first one, I need an answer today if possible.
    Thanks,

    We have seen this type of problem on some computers (we do not know why as of yet). The problem with missing blocks is very likely caused by file permissions, and can be seen when non-administrators are using the VAB software. To resolve this:
    1. Log in as ADMINISTRATOR.
    2. Run the following command from a DOS prompt:
       cacls "C:\Program Files\Hyperception\VABINF" /T /E /G Everyone:F
    It will scroll through the name of every file in the provided directory and all subdirectories. This should reset the permissions on every file and allow the USER to access all files in the VAB software. This should correct the problem.
    Regards, Steve

  • How do I add multiple text block records from text file?

    The data manager documentation (page 151) for MDM 5.5 SP3 indicates that one or more new text blocks can be added to the Text Blocks object table from files. It is noted that the files must be plain text files.
    I use notepad and create a text file with two lines as follows:
    Test 1
    Test 2
    When I try to add the text blocks following the documentation mentioned above, it only adds one record for the Data Group I have chosen, and the record contains the entry "Test 1" from the first line in the text file.
    How can I add multiple records to the data group from a file?

    From my testing it appears that you need to have one text file per text block record in Data Manager.
    I wrote a VBA macro so that I could input my text blocks into an Excel spreadsheet, and then the macro takes the contents of each cell in a highlighted column and creates one text file per cell.
    Then using Data manager, I can select all of the text files at once and it will import them, creating one record per text file.
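    If Excel and VBA are not at hand, the same one-file-per-block split can be scripted. Here is a rough sketch (Python; it assumes the source blocks sit one per line in a plain text file, and the file names are made up):

        # Split a plain text file into one file per line, so each resulting
        # file can be imported as a separate text block record in Data Manager.
        with open("text_blocks.txt", encoding="utf-8") as src:
            for i, line in enumerate(src, start=1):
                block = line.rstrip("\n")
                if not block:
                    continue  # skip empty lines
                with open("text_block_%03d.txt" % i, "w", encoding="utf-8") as out:
                    out.write(block)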

  • Is there a way to import Raw files into LR catalog ..including..any edits performed in an external raw file editor such as Capture One?

    Is there a way to import Raw files into LR catalog ..including..any edits performed in an external raw file editor such as Capture One?
    I can import the Raw file successfully but cannot see any edits that were applied in Capture One. I'm assuming no, but just checking.

    Your assumption is correct: Capture One, Lightroom and other third-party raw processors have their own proprietary processes and profiles for rendering the raw data.
    They do not make permanent changes to the raw data; instead they store the changes in an .xmp sidecar file or in a catalog file, as Lightroom does.

  • Performance problem when creating XML-file

    Hi,
    I want to create an XML file with customer data from an internal table. This internal table has approximately 62,000 entries. It takes hours to create the elements of the file.
    This is the coding I have used:
      loop at it_debtor.
        at first.
          " Create an iXML factory
          l_ixml = cl_ixml=>create( ).
          " Create the DOM object model
          l_document = l_ixml->create_document( ).
          " Fill the root node
          l_element_argdeb = l_document->create_simple_element( name   = 'argDebtors'
                                                                parent = l_document ).
        endat.
        " Create element 'debtor' as child of 'argDebtors'
        l_element_debtor = l_document->create_simple_element( name   = 'debtor'
                                                              parent = l_element_argdeb ).
        " Create elements as children of 'debtor'
        l_value = it_debtor-admid.
        perform create_element using 'financialAdministrationId'.
        l_value = it_debtor-kunnr.
        perform create_element using 'financialDebtorId'.
        l_value = it_debtor-name1.
        " ... 21 child elements in total are created
      endloop.

      " Create a stream factory
      l_streamfactory = l_ixml->create_stream_factory( ).
      " Connect the internal XML table to the stream factory
      l_ostream = l_streamfactory->create_ostream_itable( table = l_xml_table ).
      " Render the document
      l_renderer = l_ixml->create_renderer( ostream  = l_ostream
                                            document = l_document ).
      l_rc = l_renderer->render( ).

      open dataset p_path for output in binary mode.
      if sy-subrc eq 0.
        loop at l_xml_table into rec.
          transfer rec to p_path.
        endloop.
        close dataset p_path.
        if sy-subrc eq 0.
          write:/ wa_lines, 'records have been processed'.
        endif.
      else.
        write:/ p_path, 'can not be opened.'.
      endif.

    form create_element using p_text.
      check not l_value is initial.
      l_element_dummy = l_document->create_simple_element( name   = p_text
                                                           value  = l_value
                                                           parent = l_element_debtor ).
    endform.                    " create_element
    Please can anyone tell me how to improve the performance.
    The method create_simple_element takes a long time.
    SAP release : 46C
    Regards,
    Christine

    Hi Christine!
    There might be several reasons - but you have to look at the living patient, not only the dead coding.
    You did not show any selects, loop at ... where or read table statements -> the usual slow parts are not included (or visible...).
    Of course the method calls can contain slow statements, but you can also have problems because of size (when internal memory gets swapped to disk).
    Make a runtime analysis (SE30), have a look in SM50 (details!) for the size, and maybe add an info message for every 1000 customers (-> it will be logged in the job log), so you will see whether all 1000-record packs execute in the same runtime.
    With these tools you have to search for the slow parts:
    - general out-of-memory problems, maybe because of too much buffering (visible by slower execution at the end)
    - slow methods / selects / other components shown by SE30 -> have a closer look
    Maybe a simple solution: make three portions with 20000 entries each and just copy the files into one, if necessary.
    Regards,
    Christian

  • The comment block in the XML file, can it be removed?

    Hello together,
    I've created a form to be submitted via email and it all works great.
    In the XML file that is created, there is a huge comment block (from Adobe), and I was wondering how I can go about removing it? Or better, how I can edit it?
    Any input on this is greatly appreciated.
    All the best,
    Kevin

    Anybody?

  • Error in reading (block 3, # blocks 8) of control file

    Hi All,
    My database experiencing this error :
    ORA-00204: error in reading (block 3, # blocks 8) of control file
    ORA-00202: control file: 'C:\ORACLEXE\ORADATA\XE\CONTROL.DBF'
    ORA-27091: unable to queue I/O
    ORA-27070: async read/write failed
    OSD-04006: ReadFile() failure, unable to read from file
    O/S-Error: (OS 23) Data error (cyclic redundancy check).
    The worst part is that we do not have a recent backup, which makes things worse.
    My control file seems to be corrupted.

    Error: ORA 204
    Text: error in reading control file <name> block <num>, # blocks <num>
    Cause: A disk read-failure occurred while attempting to read the specified
    control file.
    The block location of the failure is given.
    Action: Check that the disk is online.
    If it is not, bring it online and shut down and restart Oracle.
    If the disk is online, then look for operating system reasons for
    Oracle's inability to read the disk or control file.
    Use the multiplexed control file, if you have one available, to start your instance...
    SQL> show parameter control_files;
    Use the above command to find the control file locations.

  • Has anyone seen this before? Some import operations were not performed, could not copy the file to requested location

    Some import operations were not performed: could not copy the file to the requested location. Has anyone seen this, or does anyone have a fix?

    cormacc57361247 wrote:
    has anyone seen this or have a fix?
    Oh, I bet that if you'd done a site search, you'd have got answers to both of those questions...
