Regarding SD_ORDER_CREATE performance issue.

Hi All,
I have a requirement wherein I need to simulate sales orders based on customer and material combinations and get the pricing-related data to send across to the customers.
I am using the 'SD_ORDER_CREATE' or 'PRICING' function module to get the simulated prices.
However, in my case the program might be run for 17,000 customers, and each customer needs to be simulated against 60 materials. This is causing performance issues.
Basically, we cannot read the database tables for the prices, because only simulation gives us the exact pricing information. Hence, use of the function modules is a must.
Can anyone suggest a better way to improve the performance?
Regards,
Sanjay

Please provide an updated status if a solution was found. I encountered the same concern while running ORDER_CREATE_WWW. I tried to use SALEORDER_SIMULATE, but it did not have all the fields required for my pricing determination.
Why would the simulation BAPI work better than the order-create BAPIs?

Similar Messages

  • Oracle Performance Issue

    Hardware Configuration:
    Regarding Oracle Performance Issue.
    Configuration 1
    ================
    Sun Fire V880
    32 GB RAM
    14 x 36 GB hard disks
    8 CPUs
    CPU speed: 750 MHz
    Software Configuration:
    Oracle 8i
    OS version - Solaris 8
    Customized our own application - Namex
    Configuration 2
    ================
    Intel PIII - 750 MHz
    2 GB RAM
    2 CPUS
    Software configuration
    Oracle 8i
    OS version - Linux 6.2
    Customized our own application - Namex (multi-threaded application)
    We installed the Oracle application across all hard disks. All tables
    are split onto separate hard disks.
    The OS is installed on 1 hard disk.
    The Namex application is installed on 1 hard disk.
    Oracle is installed on 1 hard disk.
    All tables are split across the other hard disks.
    We are trying to insert user data into an Oracle table. We
    achieved up to 150 records/second on the Sun server, but on the lower
    configuration (configuration 2) our application inserts up to
    100 records/second.
    We want to improve our insert rate (records per second)
    on the Sun server.
    How do we tune our Oracle parameter values in the init.ora
    file? Our application needs to insert up to 500 records per second,
    but I am not able to achieve this value.
    init.ora file
    =============
    db_name = "namex"
    instance_name = namex64
    service_names = namex64
    control_files = ("/disk1/oracle64/OraHome1/oradata/Namex64/control01.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control02.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control03.ctl")
    open_cursors = 300
    max_enabled_roles = 145
    #db_block_buffers = 20480
    db_block_buffers = 604800
    #shared_pool_size = 419430400
    shared_pool_size = 8000000000
    #log_buffer = 163840000
    log_buffer = 2147467264
    #large_pool_size = 614400
    java_pool_size = 0
    log_checkpoint_interval = 10000
    log_checkpoint_timeout = 1800
    processes = 1014
    # audit_trail = false # if you want auditing
    # timed_statistics = false # if you want timed statistics
    timed_statistics = true # if you want timed statistics
    # max_dump_file_size = 10000 # limit trace file size to 5M each
    # Uncommenting the lines below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest_1 = "location=/disk1/oracle64/OraHome1/admin/namex64/arch"
    # log_archive_format = arch_%t_%s.arc
    #DBCA uses the default database value (30) for max_rollback_segments
    #100 rollback segments (or more) may be required in the future
    #Uncomment the following entry when additional rollback segments are created and made online
    #max_rollback_segments = 500
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    #rollback_segments = ( RBS0, RBS1, RBS2, RBS3, RBS4, RBS5, RBS6, RBS7, RBS8, RBS9, RBS10, RBS11, RBS12, RBS13, RBS14, RBS15, RBS16, RBS17, RBS18, RBS19, RBS20, RBS21, RBS22, RBS23, RBS24, RBS25, RBS26, RBS27, RBS28 )
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    # global_names = false
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = true
    # define directories to store trace and alert files
    background_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/bdump
    core_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/cdump
    #Uncomment this parameter to enable resource management for your database.
    #The SYSTEM_PLAN is provided by default with the database.
    #Change the plan name if you have created your own resource plan.
    # resource_manager_plan = system_plan
    user_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/udump
    db_block_size = 16384
    remote_login_passwordfile = exclusive
    os_authent_prefix = ""
    compatible = "8.0.5"
    #sort_area_size = 65536
    sort_area_size = 1024000000
    sort_area_retained_size = 65536
    DB_WRITER_PROCESSES=4
    How can I improve performance on the Oracle server?
    Please guide me regarding this issue.
    If anyone needs more info, please let me know.
    Best regards,
    Senthilkumar

    Are you sure that it is not an application constraint? I.e. the application can't handle that much data per second (application locks, threads)?
    Have you tried to write a simple test program which inserts predefined data (the same data your application inserts), only changing the keys?
    Then compare the values from the 1st and the 2nd configuration.
    Did you check the way your application communicates with Oracle? If it is TCP/IP (even on the local machine), then this is your main problem.
    And one more thing: do you know whether your application is able to run the load (inserts) on different threads (i.e. in parallel)? If it is not, you won't be able to push the speed higher, because your constraint is the speed of a single CPU. Consider running several processes which load the data.
    We had the same problem on AIX machines with 4 CPUs. Monitoring the machine, we found that only 25% (1 CPU) was in use. We had to run 4 processes to push the speed up. Check your system's overall load while running the 'load' (inserts).
    log_checkpoint_interval = 10000
    Check if this value is appropriate. Maybe you should set it to 0 (infinite). This disables checkpoints on a 'number of redo blocks' basis; checkpoints will occur only on log switch.
    How many redo log files per redo group do you have? What is their size? Are they on different disks? How much redo data is generated by a single inserted record?
    Hope I helped at least a little.
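    To make the test-program suggestion above concrete, here is a minimal sketch in PL/SQL (the table NAMEX_TEST and the row counts are hypothetical; substitute the real application table). If the bulk variant is dramatically faster than the row-by-row loop, the limit is per-row overhead (round trips, commit frequency) rather than the disks or the init.ora settings.
    -- Hypothetical test table; replace with the real application table.
    CREATE TABLE namex_test (id NUMBER, payload VARCHAR2(100));
    DECLARE
      TYPE t_ids IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
      l_ids   t_ids;
      l_start PLS_INTEGER;
    BEGIN
      -- 1) Row-by-row inserts, the way most applications load data.
      l_start := DBMS_UTILITY.GET_TIME;
      FOR i IN 1 .. 10000 LOOP
        INSERT INTO namex_test VALUES (i, 'predefined payload');
      END LOOP;
      COMMIT;
      DBMS_OUTPUT.PUT_LINE('row-by-row: ' || (DBMS_UTILITY.GET_TIME - l_start) / 100 || ' s');
      -- 2) The same data loaded with a single bulk bind.
      FOR i IN 1 .. 10000 LOOP
        l_ids(i) := i + 10000;
      END LOOP;
      l_start := DBMS_UTILITY.GET_TIME;
      FORALL i IN 1 .. l_ids.COUNT
        INSERT INTO namex_test VALUES (l_ids(i), 'predefined payload');
      COMMIT;
      DBMS_OUTPUT.PUT_LINE('bulk FORALL: ' || (DBMS_UTILITY.GET_TIME - l_start) / 100 || ' s');
    END;
    /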

  • Performance issue with Jdeveloper

    Hi Guys,
    I am experiencing a strange performance issue with JDeveloper 10.1.3.3.0.4157. There are many other threads regarding performance issues in this forum, but the problem I have is a little bit different.
    I have two computers: one is an Athlon 3200+ with Vista and the other is a P4 dual core 6400 with XP (Service Pack 2). Both of them have 2 GB memory.
    I am running the same simple project on both computers, but only the one with Vista has the problem. The problem is very similar to the one mentioned in the thread:
    Re: IDE has become extremely slow?
    But it's much worse. It only happens on JSF pages. Basically, any operation on a JSF page is very slow. Loading the page, changing the attributes of a button in the source editor, or even clicking items in the design view takes forever.
    The first weird thing is that it may use 100% CPU but never recovers: either the 100% CPU usage never stops, or when it stops, JDeveloper stops responding.
    The second weird thing is that the project is not big. Actually, it's very small. The problem started happening last week, and there were no big changes during that period. The only thing I can say is that we created two more JSF pages.
    The third weird thing is that the same problem never happened on the P4+XP box. When I open the project on the P4+XP box, it's always fast and there is no CPU spike.
    Any advice is welcome!
    Thanks,
    Steven

    Hi Guys,
    I re-made a simple test project for this problem, and now I can always reproduce the problem in JDeveloper on both systems (XP & Vista). Every time I open this jspx file in the source editor and try to scroll up/down the source file, or manually delete an attribute, JDeveloper hangs and the CPU usage is 0%.
    Here is the content of the test file:
    <?xml version='1.0' encoding='windows-1252'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.0"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:af="http://xmlns.oracle.com/adf/faces"
    xmlns:afh="http://xmlns.oracle.com/adf/faces/html">
    <jsp:output omit-xml-declaration="true" doctype-root-element="HTML"
    doctype-system="http://www.w3.org/TR/html4/loose.dtd"
    doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"/>
    <jsp:directive.page contentType="text/html;charset=windows-1252"/>
    <f:view>
    <afh:html binding="#{backing_streettypedetail.html1}" id="html1">
    <afh:head title="streettypedetail"
    binding="#{backing_streettypedetail.head1}" id="head1">
    <meta http-equiv="Content-Type"
    content="text/html; charset=windows-1252"/>
    </afh:head>
    <afh:body binding="#{backing_streettypedetail.body1}" id="body1">
    <af:messages binding="#{backing_streettypedetail.messages1}"
    id="messages1"/>
    <h:form binding="#{backing_streettypedetail.form1}" id="form1">
    <af:panelForm binding="#{backing_streettypedetail.panelForm1}"
    id="panelForm1">
    <af:inputText value="#{bindings.streetTypeID.inputValue}"
    label="#{bindings.streetTypeID.label}"
    required="#{bindings.streetTypeID.mandatory}"
    columns="#{bindings.streetTypeID.displayWidth}"
    binding="#{backing_streettypedetail.inputText1}"
    id="inputText1">
    <af:validator binding="#{bindings.streetTypeID.validator}"/>
    </af:inputText>
    <af:inputText value="#{bindings.description.inputValue}"
    label="#{bindings.description.label}"
    required="#{bindings.description.mandatory}"
    columns="#{bindings.description.displayWidth}"
    binding="#{backing_streettypedetail.inputText2}"
    id="inputText2">
    <af:validator binding="#{bindings.description.validator}"/>
    </af:inputText>
    <af:inputText value="#{bindings.abbr.inputValue}"
    label="#{bindings.abbr.label}"
    required="#{bindings.abbr.mandatory}"
    columns="#{bindings.abbr.displayWidth}"
    binding="#{backing_streettypedetail.inputText3}"
    id="inputText3">
    <af:validator binding="#{bindings.abbr.validator}"/>
    </af:inputText>
    <f:facet name="footer">
    <h:panelGroup binding="#{backing_streettypedetail.panelGroup1}"
    id="panelGroup1">
    <af:commandButton text="Save"
    binding="#{backing_streettypedetail.saveButton}"
    id="saveButton"
    actionListener="#{bindings.mergeEntity.execute}"
    action="#{userState.retrieveReturnNavigationRule}"
    disabled="#{!bindings.mergeEntity.enabled}"
    partialSubmit="false">
    <af:setActionListener from="#{true}"
    to="#{userState.refresh}"/>
    </af:commandButton>
    <af:commandButton text="Cancel"
    binding="#{backing_streettypedetail.cancelButton}"
    action="#{userState.retrieveReturnNavigationRule}"
    id="cancelButton">
    <af:setActionListener from="#{false}"
    to="#{userState.refresh}"/>
    </af:commandButton>
    </h:panelGroup>
    </f:facet>
    </af:panelForm>
    </h:form>
    </afh:body>
    </afh:html>
    </f:view>
    <!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_streettypedetail-->
    </jsp:root>
    Can anybody take a look at the file and let me know what's wrong with it?
    Thanks in advance.
    Steven

  • Performance issue with insert query !

    Hi ,
    I am using dbxml-2.4.16; my node-storage container is loaded with a large document (a 54 MB XML file).
    My document basically contains around 65k records under the same table (65k child nodes for one parent node). I need to insert more records into my DB, and my insert XQuery is consuming a lot of time (~23 sec) to insert one entry through the command line and around 50 sec through code.
    My container is indexed with "node-attribute-equality-string". The insert query I used:
    insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    If I modify my query with
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
    instead of
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    The time taken reduces by only 8 secs.
    I have also tried to use insert "after", "before", "as first", "as last", but there is no difference in performance.
    Is anything wrong with my query? What should be the expected time to insert one record into a DB of 65k records?
    Does anybody have any idea regarding this performance issue?
    Kindly help me out.
    Thanks,
    Kapil.

    Hi George,
    Thanks for your reply.
    Here is the info you requested,
    dbxml> listIndexes
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: node-attribute-equality-string for node {}:mySSIAddress
    2 indexes found.
    dbxml> info
    Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
    Berkeley DB 4.6.21: (September 27, 2007)
    Default container name: n_b_i_f_c_a_z.dbxml
    Type of default container: NodeContainer
    Index Nodes: on
    Shell and XmlManager state:
    Not transactional
    Verbose: on
    Query context state: LiveValues,Eager
    The insert query with the update takes ~32 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 32.5002
    and the query without the update part takes ~14 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 13.7289
    The query :
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
    Time in seconds for command 'query': 0.005375
    is very fast.
    The update of the document seems to consume most of the time.
    Regards,
    Kapil.

  • Performance Issue Tracking In Database Level.

    Hi All,
    I am sorry, actually I don't know whether this is the right question to ask in this forum. Below is my question.
    We are working on Oracle 10g and are supposed to move to 11g. My question is: which textbook would be the best one for getting broad knowledge of database performance issues and of monitoring and resolving them? Please suggest.
    Edited by: 930254 on Aug 22, 2012 7:56 AM

    Troubleshooting Oracle Performance (Apress) is one book I found very useful, along with the Oracle documentation (http://www.oracle.com/pls/db112/to_toc?pathname=server.112/e10822/toc.htm).
    BTW, please mark the thread as 'answered' if you feel you got your question answered. This will save the time of others who search for open questions to answer.
    regards,
    CSM
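    As a small example of the kind of monitoring those resources walk through, the sketch below lists the top wait events on a 10g/11g instance; it only uses the standard v$system_event view, and the limit of 10 rows is arbitrary.
    -- Top non-idle wait events since instance startup (minimal monitoring sketch).
    SELECT *
      FROM (SELECT event,
                   total_waits,
                   ROUND(time_waited_micro / 1e6) AS seconds_waited
              FROM v$system_event
             WHERE wait_class <> 'Idle'
             ORDER BY time_waited_micro DESC)
     WHERE ROWNUM <= 10;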

  • Performance issue sdxc card adapter

    Hi,
    I have a MacBook (2012). I am using VirtualBox for running Windows XP and Windows 8. Originally I started off saving the virtual hard drives on my internal SSD, but after a while they just consumed too much space on my HD. So I moved them to an external hard drive that is connected to my Apple device via FireWire. Still, everything worked fine and the performance was okay.
    Later I needed a more mobile solution. I thought placing my virtual hard drives on an SD card might be a very mobile idea. So I went for the PhotoFast Memory Expansion Combo Kit (like the Nifty drive, etc.). That is actually just a tiny adapter for micro SD cards that fits perfectly in the SDXC slot of my Mac.
    Anyhow, I moved the virtual disks to a SanDisk 64 GB SDXC Class 10 U1 card, AND since then they perform terribly slowly! Why? Do you have any hints regarding this performance issue?
    What I have tried:
    - Reformatting the SD card (exFAT, HFS+)
    - Ticking "SSD" in the properties of VirtualBox
    - Using a different SDXC adapter in the USB slot
    Nothing helped ;-( Do you have any idea how to improve the performance of my internal SDXC reader?
    Thanks in advance.
    Ciao tom

    xl wrote:
    I would just get the readers...
    http://www.bestbuy.com/site/Tripp+Lite+-+USB+3.0+Super+Speed+SDXC+Card+Reader/1304482615.p;?id=mp130...
    http://www.bestbuy.com/site/SYBA+Multimedia+-+USB+3.0+Memory+Card+Reader+Reads+2TB+SDXC%2C+Pocket+Si...
    Would I get the SDXC speed? My USB port is 2.0.

  • BW BCS cube(0bcs_vc10 ) Report huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0bcs_vc10 virtual cube.
    Some of the queries are taking 15 to 20 minutes to execute the report.
    This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. Has anyone faced a similar problem? Please advise how you tackled it and give the detailed analysis approach you used to resolve it.
    Current service pack we are using is
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping the performance much. Reports are taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
    Regards,
    Chris

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jminbrio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what it is doing.
    It is doing OPEN CURSOR, but it is not clear exactly what statement the cursor
    is trying to open.
    When we run the query directly against Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly against Sybase many times, the response is always
    consistently fast.
    It is only when the query runs from Forte to Sybase that it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit every statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.minsun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on, then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
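    For reference, a minimal sketch of the trace-with-wait-events approach mentioned above (standard extended SQL trace, event 10046 at level 8; the trace identifier and tkprof options are only examples):
    -- Tag the trace file so it is easy to find in user_dump_dest.
    ALTER SESSION SET tracefile_identifier = 'perf_test';
    -- Enable extended SQL trace including wait events.
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- ... run the slow statement here, in this same session ...
    -- Switch tracing off again.
    ALTER SESSION SET events '10046 trace name context off';
    -- Then format the raw trace file with tkprof, e.g.:
    --   tkprof <tracefile>.trc report.txt sys=no sort=exeela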

  • Performance issues with class loader on Windows server

    We are observing some performance issues in our application. We are using WebLogic 11g with Java 6 on a Windows 2003 server.
    The thread dumps indicate many threads are waiting in queue for the native file methods:
    "[ACTIVE] ExecuteThread: '106' for queue: 'weblogic.kernel.Default (self-tuning)'" RUNNABLE
         java.io.WinNTFileSystem.getBooleanAttributes(Native Method)
         java.io.File.exists(Unknown Source)
         weblogic.utils.classloaders.ClasspathClassFinder.getFileSource(ClasspathClassFinder.java:398)
         weblogic.utils.classloaders.ClasspathClassFinder.getSourcesInternal(ClasspathClassFinder.java:347)
         weblogic.utils.classloaders.ClasspathClassFinder.getSource(ClasspathClassFinder.java:316)
         weblogic.application.io.ManifestFinder.getSource(ManifestFinder.java:75)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.application.utils.CompositeWebAppFinder.getSource(CompositeWebAppFinder.java:71)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.CodeGenClassFinder.getSource(CodeGenClassFinder.java:33)
         weblogic.utils.classloaders.GenericClassLoader.findResource(GenericClassLoader.java:210)
         weblogic.utils.classloaders.GenericClassLoader.getResourceInternal(GenericClassLoader.java:160)
         weblogic.utils.classloaders.GenericClassLoader.getResource(GenericClassLoader.java:182)
         java.lang.ClassLoader.getResourceAsStream(Unknown Source)
         javax.xml.parsers.SecuritySupport$4.run(Unknown Source)
         java.security.AccessController.doPrivileged(Native Method)
         javax.xml.parsers.SecuritySupport.getResourceAsStream(Unknown Source)
         javax.xml.parsers.FactoryFinder.findJarServiceProvider(Unknown Source)
         javax.xml.parsers.FactoryFinder.find(Unknown Source)
         javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)
         org.ajax4jsf.context.ResponseWriterContentHandler.<init>(ResponseWriterContentHandler.java:48)
         org.ajax4jsf.context.ViewResources$HeadResponseWriter.<init>(ViewResources.java:259)
         org.ajax4jsf.context.ViewResources.processHeadResources(ViewResources.java:445)
         org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:193)
         org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:41)
         org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:140)
    From googling, this seems to be an issue with Java file handling on Windows servers, but I couldn't find a solution yet. Any recommendation or pointer is appreciated.

    Hi shubhu,
    I just analyzed your partial thread dump data. The problem is that the ajax4jsf framework's ResponseWriterContentHandler internally creates a new instance of DocumentBuilderFactory every time, triggering heavy IO contention because of class loader / JAR file search operations.
    Too many of these IO operations under heavy load will create excessive contention and severe performance degradation, regardless of the OS you are running your JVM on.
    Please review the link below and see if this is related to your problem. This is a known issue in the JBoss JIRA when using RichFaces / Ajax4jsf.
    https://issues.jboss.org/browse/JBPAPP-6166
    Regards,
    P-H
    http://javaeesupportpatterns.blogspot.com/

  • Performance issues with dynamic action (PL/SQL)

    Hi!
    I'm having performance issues with a dynamic action that is triggered on a button click.
    I have 5 drop-down lists to select the columns the users want to filter on, 5 drop-down lists to select an operation, and 5 boxes to input values.
    After that, there is a filter button that just submits the page based on the selected filters.
    This part works fine, the data is filtered almost instantaneously.
    After this, I have 3 column selectors and 3 boxes where users put the values they wish to update the filtered rows to.
    There is an update button that calls the dynamic action (the procedure written below).
    It should be straightforward; the only performance concern could be the decode section, because I need to cover the cases where the user wants to set a value to null ('@') and where they want to update fewer than 3 columns (leaving '').
    Hence P99_X_UC1 || ' = decode('  || P99_X_UV1 ||','''','|| P99_X_UC1  ||',''@'',null,'|| P99_X_UV1  ||')
    However, when I finally click the update button, my browser freezes and nothing happens to the table.
    Can anyone help me solve this and improve the speed of the update?
    Regards,
    Ivan
    P.S. The code for the procedure is below:
    create or replace
    PROCEDURE DWP.PROC_UPD
    (P99_X_UC1 in VARCHAR2,
    P99_X_UV1 in VARCHAR2,
    P99_X_UC2 in VARCHAR2,
    P99_X_UV2 in VARCHAR2,
    P99_X_UC3 in VARCHAR2,
    P99_X_UV3 in VARCHAR2,
    P99_X_COL in VARCHAR2,
    P99_X_O in VARCHAR2,
    P99_X_V in VARCHAR2,
    P99_X_COL2 in VARCHAR2,
    P99_X_O2 in VARCHAR2,
    P99_X_V2 in VARCHAR2,
    P99_X_COL3 in VARCHAR2,
    P99_X_O3 in VARCHAR2,
    P99_X_V3 in VARCHAR2,
    P99_X_COL4 in VARCHAR2,
    P99_X_O4 in VARCHAR2,
    P99_X_V4 in VARCHAR2,
    P99_X_COL5 in VARCHAR2,
    P99_X_O5 in VARCHAR2,
    P99_X_V5 in VARCHAR2,
    P99_X_CD in VARCHAR2,
    P99_X_VD in VARCHAR2
    ) IS
    l_sql_stmt varchar2(32600);
    p_table_name varchar2(30) := 'DWP.IZV_SLOG_DET'; 
    BEGIN
    l_sql_stmt := 'update ' || p_table_name || ' set '
    || P99_X_UC1 || ' = decode('  || P99_X_UV1 ||','''','|| P99_X_UC1  ||',''@'',null,'|| P99_X_UV1  ||'),'
    || P99_X_UC2 || ' = decode('  || P99_X_UV2 ||','''','|| P99_X_UC2  ||',''@'',null,'|| P99_X_UV2  ||'),'
    || P99_X_UC3 || ' = decode('  || P99_X_UV3 ||','''','|| P99_X_UC3  ||',''@'',null,'|| P99_X_UV3  ||') where '||
    P99_X_COL  ||' '|| P99_X_O  ||' ' || P99_X_V  || ' and ' ||
    P99_X_COL2 ||' '|| P99_X_O2 ||' ' || P99_X_V2 || ' and ' ||
    P99_X_COL3 ||' '|| P99_X_O3 ||' ' || P99_X_V3 || ' and ' ||
    P99_X_COL4 ||' '|| P99_X_O4 ||' ' || P99_X_V4 || ' and ' ||
    P99_X_COL5 ||' '|| P99_X_O5 ||' ' || P99_X_V5 || ' and ' ||
    P99_X_CD   ||       ' = '         || P99_X_VD ;
    --dbms_output.put_line(l_sql_stmt); 
    EXECUTE IMMEDIATE l_sql_stmt;
    END;

    Hi Ivan,
    I do not think that the decode is performance-relevant. Maybe the update hangs because some other transaction has uncommitted changes to one of the affected rows, or because the where clause is not selective enough and has to update a huge number of records.
    Besides that - and I might be wrong, because I only know part of your app - this code looks like it has a huge SQL injection vulnerability. Maybe you should consider re-writing your logic in static SQL. If that is not possible, you should make sure that the user input only contains allowed values, e.g. by white-listing P99_X_On (i.e. make sure they only contain known values like '=', '<', ...), and by using dbms_assert.enquote_name/enquote_literal on the other P99_X_nnn parameters.
    Regards,
    Christian
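    A minimal sketch of the white-listing / dbms_assert idea described above, shown for a single column/operator/value triple (the sample values are hypothetical; the same pattern would be applied to each P99_X_* parameter before it is concatenated into l_sql_stmt):
    DECLARE
      p_col   VARCHAR2(30)  := 'STATUS';   -- would come from P99_X_COL
      p_op    VARCHAR2(2)   := '=';        -- would come from P99_X_O
      p_val   VARCHAR2(100) := 'OPEN';     -- would come from P99_X_V
      l_where VARCHAR2(400);
    BEGIN
      -- 1) White-list the operator instead of concatenating whatever was submitted.
      IF p_op NOT IN ('=', '<', '>', '<=', '>=', '<>') THEN
        RAISE_APPLICATION_ERROR(-20001, 'Operator not allowed: ' || p_op);
      END IF;
      -- 2) Validate the column name as a plain SQL identifier and quote the literal.
      l_where := DBMS_ASSERT.SIMPLE_SQL_NAME(p_col) || ' ' || p_op || ' ' ||
                 DBMS_ASSERT.ENQUOTE_LITERAL(p_val);
      DBMS_OUTPUT.PUT_LINE('where fragment: ' || l_where);
      -- The validated fragment can then be appended to the dynamic UPDATE statement.
    END;
    /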

  • Performance Issues with Acrobat Reader 11.0.0.2 when secure mode is enabled

    Hello All,
    We are experiencing sporadic issues with Acrobat 11.0.0.2 across our domain; users are reporting performance issues when opening PDF documents, whether locally or from a network share.
    We have found that turning off Secure Mode helps reduce this delay, and in the cases where it doesn't, we are repairing the installation and/or reinstalling the application.
    Due to the security implications we need to leave this turned on. I am wondering if anyone has encountered this issue and what steps were taken towards resolving it?
    I also wonder whether the white list function in the new release 11.0.0.3 would be a solution to this issue?
    Kind Regards,
    Ryan McCarty

    No problem, so....
    We had no problems with Adobe Reader 9 and 10; we encountered the issues when upgrading to 11.0.0.2.
    Initially we found that turning off Protected Mode helped but did not resolve the issue.
    We tried:
    1. Turning off Protected Mode - issue still present.
    2. Clearing the recent files list using the registry path below and deleting the keys underneath it.
    HKEY_CURRENT_USER\Software\Adobe\Acrobat Reader\11.0\AVGeneral\cRecentFiles (this does not turn recent files off permanently) - works but needs clearing regularly.
    3. Turning off the welcome screen by creating HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\11.0\FeatureLockDown\cWelcomeScreen - works to improve app open speed.
    4. Uninstall/reinstall of 11.0.0.2 - works, most likely due to the recent files being cleared.
    5. Upgrade to 11.0.0.3 - issue still present.
    Following reboots the issue is still present.
    When Adobe Reader is the only application open, the issue is still present.
    As mentioned, I have no systems available on which I could test this with 11.0.0.1, as we have fixed them, albeit temporarily, using the reinstall method.
    I am conscious that this issue is going to reoccur once the recent files cache builds back up, because the fix above (#2) clears the recent files cache, it does NOT disable it.

  • Performance issues with Bapi BAPI_MATERIAL_AVAILABILITY...

    Hello,
    I have a Z program to check ATP which works with the BAPI BAPI_MATERIAL_AVAILABILITY.
    As we are on a retail system, we have performance issues with this BAPI due to the huge number of articles in the system for which we need to calculate ATP.
    Is there any way to improve the performance?
    Thanks and best regards
    L

    The BAPI appears to execute for only one plant/material, etc., at a time, so I would have to concentrate on making data retrieval and post-bapi processing as efficient as possible.  In your trace output, how much of your overall time is consumed by the BAPI?  How much by the other code?  You might find improvements there...

  • Performance issues with SAP BPC 7.0/7.5 (SP06, 07, 08) NW

    Hi Experts
    There are some performance issues with SAP BPC 7.5/7.0 NW: users are saying they are not getting data, or there are issues while getting data from the R/3 system or ECC 6.0. What do I need to check, e.g. which DataSources or cubes? How can I solve this issue?
    What do I need to consider for an SAP NW BI 7.0 - SAP BPC 7.5 NW (SP06, 07, 08) implementation?
    Your help is greatly appreciated.
    Regards,
    Qadeer

    Hi,
    A new SP was released in February, and by now most of the new bugs should have been caught. This has a Central Note; for SP06 it's Note 1527325 - Planning and Consolidation 7.5 SP06 NetWeaver Central Note. Most of the improvements in SP06 were related to performance, especially when logging on from the BPC clients. There you should be able to find a big list of fixes/improvements and Notes that describe them. Some of the Notes even have a test description of how to reproduce the issue in the old version.
    Hope this helps you.
    Regards
    Rv

  • Performance issues with BAPI_PROJECT_MAINTAIN

    Hi,
    We are facing performance issues with BAPI_PROJECT_MAINTAIN. We would like to know if there are any suggestions for improving performance.
    I would also like to know statistics on how much time is required to create a project with 3000 activities, 100 WBS elements, etc.
    Regards

    Yes, I have. But I didn't find any that were relevant. It would help if you could share the time required for a particular number of objects or suggest any solutions.
    Regards

  • Cisco ASA 5505 performance issues on downloads - data into the ASA from the Internet

    I am having serious issues with performance on my ASA 5505s that I am testing with 9.2.3 code.
    I stripped the config and removed as much as I could - no VPN etc. - and I am ONLY getting about 30-40 Mbps downloads from sites but 95 Mbps uploads. Anyone else seeing these problems? If I remove the firewall, my PC can hit 300/300 Mbps to the same sites using the same switch and cable.
    I installed 1 GB of memory in the ASA 5505 but it made no difference. The ASA has a UL IP Security license, but I am only using an inside and an outside address for these tests; no other ports are configured.
    Is anyone else seeing this performance problem with the 9.2.3 code? I went to it from 8.2.5 to try to resolve QoS failure bugs that I found in the 8.2.5 code. I did not expect a performance hit, though, and it is only on downloads TO the ASA from the Internet, from all the speed test sites that I try. Upload speeds seem fine. No access-lists on my interfaces either... barebones config.
    My FIOS and switch interfaces are fine... no errors on any interfaces, and the same switch interface hits 300/300 Mbps when my laptop is directly attached.
    Does anyone have a barebones config on their ASA 5505 that flies? I will try it on mine and see if some (hidden) command somewhere is causing the issue. I even cleared the config and started with a clean slate, just in case I was missing some command from the older configs that may have impacted performance.

    After replacing the switch with a high-end switch, my performance increased, but I am still not happy with the throughput out of my ASA. I have about 50+ ASA 5505s and a dozen 5510s. Most remote sites have 5505s. All my sites right now have 8.2.5-51, and I wanted to put 9.2.3 out there to solve the QoS issues I have uncovered in the 8.2.5 code.
    I get much better results using the Cisco 3750X attached to the FIOS (right around 300/300 with my laptop directly attached to the 3750X, bypassing the ASA - my FIOS circuit rating is also 300/300). Going through the ASA to the same test site I get download speeds of 35 to 75. This changes randomly, which really bothers me. My upload speeds are ALWAYS faster than my download speeds. Example: the best download I would ever get is 75 Mb, and my upload would usually hit 95 Mb during the same test period.
    I may have to live with it, but the inconsistency is what really bothers me.
    Here is the config I am currently using. Nothing is going on during testing, since only a single PC is attached. The VPN tunnel to the main site can be up or down... it doesn't seem to make any difference. The PC goes to the site directly from the outside interface of the ASA... split tunneling. Even when I removed the tunnels and tested with just the ASA as a firewall to the Internet, I was still seeing the same inconsistencies.
    Is anything obviously missing - a new command or anything? Are xlates causing issues?
