Possible Performance Issue - CMIS Connector

I don't know whether the Adobe developers read this, but when a folder is renamed or deleted using Drive, the request "select * from cmis:document where in_tree(<folder id>)" is sent. Are all the properties really required? For example, the request "select cmis:objectId from cmis:document where in_tree(<folder id>)" might suffice and would be quicker to execute, especially on repositories like ours with a lot of properties.
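
For illustration, here is a minimal Apache Chemistry OpenCMIS sketch (assuming an existing session; this is illustrative, not Adobe's connector code) of the narrower query, which transfers only object ids instead of every property:

import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.QueryResult;
import org.apache.chemistry.opencmis.client.api.Session;

public final class TreeIds {
    // Collect the ids of all documents under a folder with one narrow query.
    // folderId is assumed to be a trusted repository object id.
    static void printTreeIds(Session session, String folderId) {
        String stmt = "SELECT cmis:objectId FROM cmis:document"
                    + " WHERE IN_TREE('" + folderId + "')";
        ItemIterable<QueryResult> results = session.query(stmt, false);
        for (QueryResult row : results) {
            System.out.println((String) row.getPropertyValueById("cmis:objectId"));
        }
    }
}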


Similar Messages

  • Possible performance issue about DB table kmc_dbrm_contract

    Hello,
    We've just completed load tests for a large portal.
    EP 6.40 SP20
    During these tests, DB people have identified some contention on this particular table, which contains only three records.
    The "offending" query seems rather fast and is just incrementing a counter. The reason for the contention is that we had a very large number os such queries, resulting on multiple locks, that could last up to a maximum of 1,5 seconds.
    This is not related to our project developements and therefore I assume this is a portal standard.
    Can you help me finding what this table is all about and if there is anything standard we can do to explain and/or prevent this slight delay?
    I'm not a technology expert (just a PM), so I would appreciate a rather detailed response.
    Thank you,
    Luis C Leme

    Hello,
    This is known behavior (see http://help.sap.com/saphelp_nw70ehp3/helpdata/en/62/468698a8e611d5993600508b6b8b11/frameset.htm) when FSDB Repository is used and its option "Enable FSDB Content Tracking" is ticked.
    The relevant part of the official documentation says:
    The database synchronization of content access might have a negative impact on performance. Every read or write content request to an FSDB resource waits to obtain a write lock on the lock record in the database. Therefore, the accumulated waiting time for obtaining the write lock in the database might increase and the waiting threads might consume a considerable amount of the available threads in the thread pool.
    Best Regards,
    Georgi

  • Using Blobs - possible performance issues

    Hello,
    We are considering using BLOBs in our Oracle 9i database (due to be migrated to 10g soon).
    Here is the scenario:
    ·     20,000 BLOBs/year, kept for 5 years
    ·     100 inserts per day
    ·     200 reads per day
    ·     Maximum BLOB size is 150 KB
    What will the performance impact of using BLOBs on INSERT/SELECT be?
    Would it be better to put the BLOBs in a separate tablespace?
    Thanks in advance
    Alexander

    Hello,
    > What will the performance impact of using BLOBs on INSERT/SELECT be?
    Are these stored in the tables or outside the tables using locators?
    > Would it be better to put the BLOBs in a separate tablespace?
    Yes. It is better to store them in a separate tablespace.
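
    For the INSERT path, here is a minimal JDBC sketch (the connection details and the table DOCS(ID NUMBER, DATA BLOB) are hypothetical) that streams a roughly 150 KB file into a BLOB column. Note that whether the LOB segment lives in its own tablespace is decided by the LOB storage clause of the CREATE TABLE statement, not by this code:

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BlobInsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO docs (id, data) VALUES (?, ?)");
                 InputStream in = new FileInputStream("report.pdf")) {
                ps.setLong(1, 42L);
                // Stream the content so the payload is never held as one byte[]
                ps.setBinaryStream(2, in);
                ps.executeUpdate();
            }
        }
    }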

  • 10.2 Oracle sapdata layout - performance issues?

    Can someone provide the pros and cons of the /oracle layout for an ERP 6.0 production system?
    Is it better to create several additional sapdata filesystems?
    Initially SAP loads data into sapdata1 through sapdata4.
    Has anyone seen any possible performance issues only using sapdata1-4, especially with a 3 TB DB running 10.2.0.4?
    Any advice is appreciated...
    Thanks
    Mikie B

    Hello Mikie,
    > Initially SAP loads data into sapdata1 through sapdata4.
    What do you mean with this?
    > Is it better to build several additional sapdatas.
    This depends on your storage. Normally, if you have a high-end SAN storage it shouldn't matter. I have seen special cases where some disk ranks were overloaded .. but normally you should not think about that. If you don't have a SAN and your load needs to be spread .. that is something different.
    > Has anyone seen any possible performance issues only using sapdata1-4, especially with a 3 TB DB running 10.2.0.4?
    Our main logistics system is roughly 3.6 TB and stored in eleven sapdata directories .. but this partitioning has nothing to do with performance issues ... the reason for it is a limitation at the OS level.
    Regards
    Stefan
    P.S.:
    > Can someone provide the pros and cons of the /oracle layout for an ERP 6.0 production system?
    Every time I extend or create a new tablespace I wonder why SAP creates a subdirectory for every data file by default .. absolutely freaky.

  • Custom CMIS Connector MoveHandler Cache issue

    Hi everybody.
    At the moment I'm working on a custom CMIS connector that should work only with Alfresco. The connector is almost done, but I'm having a strange issue with the MoveHandler. When I move a file into another directory and try to move it back, Windows Explorer displays a message saying the file already exists and asks whether I want to replace it, rename it, or cancel the move. The same thing happens when moving folders. After the initial move the asset really is moved, I checked in Alfresco, and Adobe Drive also displays the asset in the right directory. So there are probably some cache leftovers after the first move. I followed this procedure: rename the asset, then delete it if it exists in the destination directory, then move it, and then UpdateRecipe for the destination object. I even tried to create a delete recipe for the original file, but it didn't help. Can anybody help me please?
    Thanks in advance


  • Analysis could not be performed in time. There is a possible serious performance issue

    Can someone please advise what I need to do to correct this critical error?
    My computer is VERY slow and when I ran the event viewer, this is listed as CRITICAL.  Any information would be appreciated.
    Log Name:      Microsoft-Windows-Diagnostics-Performance/Operational
    Source:        Microsoft-Windows-Diagnostics-Performance
    Date:          6/24/2010 10:18:46 AM
    Event ID:      400
    Task Category: System Performance Monitoring
    Level:         Critical
    Keywords:      Event Log
    User:          LOCAL SERVICE
    Computer:      user-PC
    Description:
    Information about the system performance monitoring event:
         Scenario  : System Responsiveness
         Analysis result  : Analysis could not be performed in time. There is a possible serious performance issue
         Incident Time (UTC) : 6/24/2010 5:17:07 PM
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Diagnostics-Performance" Guid="{cfc18ec0-96b1-4eba-961b-622caee05b0a}" />
        <EventID>400</EventID>
        <Version>1</Version>
        <Level>1</Level>
        <Task>4005</Task>
        <Opcode>37</Opcode>
        <Keywords>0x8000000000010000</Keywords>
        <TimeCreated SystemTime="2010-06-24T17:18:46.941Z" />
        <EventRecordID>5491</EventRecordID>
        <Correlation ActivityID="{00000000-E6C8-0000-F4BB-058D9113CB01}" />
        <Execution ProcessID="1884" ThreadID="5956" />
        <Channel>Microsoft-Windows-Diagnostics-Performance/Operational</Channel>
        <Computer>user-PC</Computer>
        <Security UserID="S-1-5-19" />
      </System>
      <EventData>
        <Data Name="ShellScenarioStartTime">2010-06-24T17:17:07.442Z</Data>
        <Data Name="ShellScenarioEndTime">2010-06-24T17:17:12.442Z</Data>
        <Data Name="ShellSubScenario">1</Data>
        <Data Name="ShellScenarioDuration">5000</Data>
        <Data Name="ShellRootCauseBits">0</Data>
        <Data Name="ShellAnalysisResult">2</Data>
        <Data Name="ShellDegradationType">1</Data>
        <Data Name="ShellTsVersion">1</Data>
        <Data Name="ShellMachineUpTimeHours">0</Data>
        <Data Name="ShellMachineSleepPattern">0</Data>
      </EventData>
    </Event>

    I get the same problem. I believe it started after I switched from HDD to SSD some months ago. My machine is very fast now, so I do not have performance problems (only a very slow power-on boot).
    The description above is exactly the same as mine.
    Does somebody else have the same problem?

  • CMIS Connector Issues

    Dear Sir/Madam,
    We have implemented a CMIS repository for our database and have noticed a couple of issues with the Drive CMIS connector.
    1. Paths
    In our repository the paths are not made up of the names. For example, consider a top-level folder with the name Top Level Folder and a sub-folder called Sub Folder, i.e.
    Top Level Folder
    |
    +-- Sub Folder
    The CMIS connector assumes that the path to the folder is /Top Level Folder/Sub Folder whereas, in our repository, the path has the format /FOLDER00002/FOLDER00010. For folders, the GetObject call returns the cmis:path property, which should be used instead of assuming that the path is made up of the names. For documents, the GetObjectParents and GetChildren calls allow the path segment to be returned.
    2. Multiple Parents
    Our repository supports multi-filing and therefore documents can have multiple parents. We have found that this causes a problem when saving new assets. In our repository assets can be automatically added to a second folder when created. The connector requires the first parent returned by the GetObjectParents call to be the folder that the asset has just been created in. We have worked around this by ordering the parents we return, but it would be better if the connector checked all the parents returned.
    Kind regards,
    Ian

    Hi mandarpkulkarni,
    For files, could you check whether your repository provides the 'cmis:versionSeriesId' property on document objects? The CMIS connector uses it as the asset id, and it must not be null.
    As CMIS 1.0 Spec Section 2.1.9.5 'Versioning Properties on Document Objects' points out, all Document objects will have the following read-only property values pertaining to versioning:
    cmis:versionSeriesId             ID
         ID of the Version Series for this Object.
    Thanks,
    Hui
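
    For reference, a minimal OpenCMIS client-side sketch (assuming an existing session; it only reads properties the CMIS spec already mandates, and is not Adobe's connector code) showing where the two values discussed in this thread come from:

    import org.apache.chemistry.opencmis.client.api.Document;
    import org.apache.chemistry.opencmis.client.api.Folder;
    import org.apache.chemistry.opencmis.client.api.Session;

    public final class CmisPropertyCheck {
        static void check(Session session, String folderId, String docId) {
            // cmis:path is returned with the folder object; use it instead of
            // rebuilding the path from folder names.
            Folder folder = (Folder) session.getObject(folderId);
            System.out.println("cmis:path = " + folder.getPath());

            // cmis:versionSeriesId is mandatory per CMIS 1.0 section 2.1.9.5;
            // the connector uses it as the asset id, so it must not be null.
            Document doc = (Document) session.getObject(docId);
            System.out.println("cmis:versionSeriesId = " + doc.getVersionSeriesId());
        }
    }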

  • Adobe Premiere 4 Elements 4 performance issue

    Hi
    I am a new user of Premiere 4 Elements and I completed my first project to get myself through the learning curve. However, I am experiencing a serious performance issue on my computer with this product. Sorry about the long post; I tried to give as complete information as possible. Thanks in advance for any advice on solving the issue.
    1. When I open my project, a progress bar stays for several minutes at the near-completed state before the application window opens. Then the window is still frozen for several more minutes (the Windows hourglass displays for about 4 to 5 minutes) before I can actually start using the program.
    2. When I task-switch (e.g. to edit a photo in another application, or to check in Windows Explorer where a file is that I want to open) task-switching is slow, and when I switch back to Premiere 4 Elements, the screen freezes again for several minutes (4 to 6 minutes) before I can continue working. The same happens after I use the option to add media from the hard drive to the project.
    3. When I press the Play button, the sound sometimes plays while the video preview gets stuck on a single frame. When I wait a while, it seems like Premiere is busy updating the icons on the timeline, and after that, I can play the video again. Sometimes the screen freezes so I cannot press the pause button, and the audio just continues playing for a minute or two.
    4. Last night I eventually got the project finished, and started rendering the project to a PAL DVD-quality MPEG. It was still rendering this morning, with the progress bar showing less than a third of progress. At last check it displayed an estimated 16 hours of predicted rendering time, and the estimate was still increasing....
    5. In one week of use, I experienced one to five crashes per evening while using this product.
    My input files for the project are existing MPEG files that I loaded from the camera (with its software) long before I purchased Adobe Premiere (so I did not use Premiere's feature to import the video). This camera generates a new MPEG file each time you press the start/stop button. I can see hundreds of clips on the camera when I mount it as an external USB hard drive on Windows. I suspect the number of files has to do with my issues, so I need assistance on an improved workflow, given the source format I work with. In my test project the original footage consists of about 140 MPEG clips; about 15 photos sized to PAL DVD size (720 x 5xx), one imported DVD VOB file, and then a menu, a title or two, transition effects, and so on. The final project should render 30 minutes of video, but the original footage is about three times that duration. Can someone please suggest a workflow considering the possibility that I will have to deal with these hundreds of small clips?
    The software's box specifies a Pentium 4 and 512 MB RAM, DirectX compatibility and 4 GB HDD space.
    My computer specs are:
    Intel Core 2, 2.13 GHz,
    2 GB DDR RAM,
    200 GB Serial ATA HDD, with:
    Premiere on partition C: with 25 GB free,
    Project on partition D: with 40 GB free.
    Very entry-level graphics card (purchased the PC with no thought of games or video).
    XP Home is installed.
    In Windows Task Manager I notice about one million page faults being generated for Adobe Premiere in the periods I mentioned above when Premiere gets stuck. Item 4 above was at 3.5 million page faults with about 25% progress indicated for the rendering process. Is this indicative that I need more RAM?
    I have read other posts on this forum that people advise users to have Premiere on one HDD, the swap file on another physical disc and the project on yet another. I can understand the logic, but I checked that my PC was well above the software specifications on the box before I purchased, and I am not so serious about making videos that I would want to upgrade to such a monster PC. Besides if the requirements are such, Adobe should have specified that on the box.
    Thanks, Willem

    Willem,
    Looking at your system specs, I see one major bottleneck - your I/O system. You have a partitioned HDD.
    This means that the heads on your HDD are being asked to be in several places at the same time. Partitioning is an element left over from a much earlier time in computing and should not be used nowadays, with very few exceptions. I will explain. You have a fixed number of platters and heads on a single HDD. Those heads are what access the data on the platters. In the case of PE, you are asking the heads to be:
    1.) reading the OS
    2.) reading Premiere (your program)
    3.) reading & writing the Windows Page File (Windows' Virtual Memory)
    4.) reading & writing your media Assets (depending on what you're doing)
    5.) reading & writing your Scratch Disk data
    6.) reading & writing data for any other Process that requires it
    The first step that I'd take would be to add another HDD, and do away with the partition on your Drive0. With two physical (not logical, as you have) HDDs, the ideal setup would be:
    1.) OS and all programs on C:\
    2.) Project, Media and Scratch Disks on D:\
    Things would be even better still, though not by as large a margin as the first added disk, if you added two. The ideal setup for 3 HDDs would be:
    1.) OS and all programs on C:\
    2.) Media and additional Assets on D:\
    3.) Project and Scratch Disks on E:\
    I've also found speed increases by having my Windows' Page File on a separate HDD, but that is overkill, really. Now, I've got 5 internals with 1.75TB of space, so it's easy for me to do this sort of thing.
    You have a 200GB SATA HDD (Drive0), and I'd keep that for your C:\ (no partition) and add another SATA in the 500GB to 1TB range. If you have SATA II, make sure to go with a new HDD with the exact same connector (SATA, or SATA II). Most on-board controllers can handle 4x SATA channels, so it should not be a big deal.
    Most work will come from having to do a complete system backup of your Drive0 (your C:\ & D:\), as you'll need to re-partition it. Another way around this would be to leave the C:\ & D:\ partitions, and just move all of your Media, Assets, Project and Scratch Disks to your new physical drive. Keep your D:\ partition, but use it only for storing maybe Audio Assets that you use infrequently, and the like. Do not partition your new HDD (Drive1).
    Note on load time: that will speed up somewhat if you get another HDD, but PE does need time to read its XML files, which tell it the location of all Assets, then go find them and verify that they are correct. Only a bit of a speed-up there. Almost everything else that you mention, regarding speed, will improve greatly.
    Others will have some more ideas, on ways to speed things up, and I'd consider them all, plus the addition of another fast, large physical HDD.
    BTW: great post. It is probably the most complete that I've seen in any of the Adobe forums. It's easy to read (good use of paragraphs) and contains all the data that I could want. I'd hold this up as a model of how to ask a question on these boards.
    Good luck,
    Hunt
    [EDIT] To let you know, I actually have my Drive0 (500GB SATA II) partitioned (for another reason) and use it just like my second suggestion, re: re-partitioning. My "Movie Music," SFX and minor Assets are stored on my D:\, plus I have my Windows' Page File split over three drives. This after much experimentation. I'd not recommend this, unless you have a lot of extra time on your hands. Just keep it on C:\

  • Regression? Adobe CMIS Connector 4.2 and CC ten times slower than 4.0.

    Hello dear forum members!
    We developed an Adobe Drive CMIS connector for our DAM system and noticed a major navigation performance degradation
    when updating CMIS connector on windows client systems from Version 4.0 to Version 4.2 or CC.
    When visiting a folder:
    Version 4.0 does a getChildren request ("children" url) and is finished: 1 call
    Version 4.2 and CC do a getChildren request and IN ADDITION call:
       - getObjectParents ("parents" url) for each object in folder
       - getObjectParents for folder of path of each object
    So let's say you have some folderX with the path
    /top1/folder1/folderX
    containing 500 images from some photo session that needs sorting; you get:
    1x getChildren
    500x getObjectParents (one for each image)
    2x500x getObjectParents (2 for each folder of path of each image)
    in sum: 1501 calls until that same folder gets displayed.
    Any chance that this gets fixed on Adobe Drive CMIS connector?
    Can we do something on server side to prevent that behavior?
    Does Adobe Drive support paging in big folders?
    Anything else we can do to help?
    Regards,
        Carsten

    Hello everyone,
    I noticed the same issue on Adobe Drive 5.
    After accessing a folder with 35 files in it with the Adobe Drive Mac CMIS connector, I saw over 200 HTTP requests on the network (AtomPub).
    With other CMIS clients it was only 14 requests.
    With the Windows Adobe Drive CMIS connector I did not record the requests, but on Windows it was much, much faster anyway.
    I assume the Java layer is the same on Windows and Mac, so I think that maybe the underlying AdobeDrive4 filesystem implementation on Mac (at least "mount" shows an "AdobeDrive4" FS) polls very often, and this results in excessive HTTP requests. That's why I also thought of writing our own CMIS connector that uses clever caching.
    Maybe we can bundle our ideas and implement a fast open source CMIS 1.1 Connector for Adobe Drive?
    @md_Zool:
    What are your experiences with your own connector?
    Could you speed things up on the Mac?
    Maybe Adobe can also shed some light on this issue?
    tia
    Cheers
    Sascha
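
    A minimal sketch of the kind of client-side parent-lookup cache alluded to above, written against OpenCMIS (the class and its methods are illustrative, not part of any Adobe API). It memoizes getObjectParents results per object id, so listing a large folder does not trigger one server round trip per object:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.apache.chemistry.opencmis.client.api.FileableCmisObject;
    import org.apache.chemistry.opencmis.client.api.Folder;
    import org.apache.chemistry.opencmis.client.api.Session;

    public final class ParentCache {
        // objectId -> parent folders, filled on first lookup
        private final Map<String, List<Folder>> parents = new ConcurrentHashMap<>();
        private final Session session;

        public ParentCache(Session session) {
            this.session = session;
        }

        public List<Folder> getParents(String objectId) {
            // Only the first request per object goes to the server.
            return parents.computeIfAbsent(objectId, id ->
                ((FileableCmisObject) session.getObject(id)).getParents());
        }

        public void invalidate(String objectId) {
            // Call after a move/delete so stale parents are not served.
            parents.remove(objectId);
        }
    }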

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call? I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jminbrio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage on
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what it is doing.
    It is doing OPEN CURSOR, but the exact statement of the cursor it is
    trying to open is not clear.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit every statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.minsun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html
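
    James's prepare-and-cursor point can be illustrated with a rough JDBC analogue (this is not the CTLIB call sequence Forte actually makes, and the URL, table and query are hypothetical): the statement is prepared once, and the fetch size bounds how many rows each cursor round trip returns.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class LikeQuery {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:sybase:Tds:dbhost:5000/pubs2", "sa", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, name FROM customer WHERE name LIKE ?")) {
                ps.setFetchSize(100); // rows fetched per cursor round trip
                ps.setString(1, "SMI%");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong(1) + " " + rs.getString(2));
                    }
                }
            }
        }
    }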

    Earthlink wrote:
    > Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Performance issues with dynamic action (PL/SQL)

    Hi!
    I'm having performance issues with a dynamic action that is triggered on a button click.
    I have 5 drop down lists to select columns which the users want to filter, 5 drop down lists to select an operation and 5 boxes to input values.
    After that, there is a filter button that just submits the page based on the selected filters.
    This part works fine, the data is filtered almost instantaneously.
    After this, I have 3 column selectors and 3 boxes where users put the values they wish to update the filtered rows to.
    There is an update button that calls the dynamic action (the procedure is written below).
    It should be straightforward; the only performance issue could be the decode section, because I need to cover the cases when the user wants to set a value to null (@) and when he doesn't want to update all 3 columns but fewer (he leaves '').
    Hence P99_X_UC1 || ' = decode('  || P99_X_UV1 ||','''','|| P99_X_UC1  ||',''@'',null,'|| P99_X_UV1  ||')
    However when I finally click the update button, my browser freezes and nothing happens on the table.
    Can anyone help me solve this and improve the speed of the update?
    Regards,
    Ivan
    P.S. The code for the procedure is below:
    create or replace
    PROCEDURE DWP.PROC_UPD
    (P99_X_UC1 in VARCHAR2,
    P99_X_UV1 in VARCHAR2,
    P99_X_UC2 in VARCHAR2,
    P99_X_UV2 in VARCHAR2,
    P99_X_UC3 in VARCHAR2,
    P99_X_UV3 in VARCHAR2,
    P99_X_COL in VARCHAR2,
    P99_X_O in VARCHAR2,
    P99_X_V in VARCHAR2,
    P99_X_COL2 in VARCHAR2,
    P99_X_O2 in VARCHAR2,
    P99_X_V2 in VARCHAR2,
    P99_X_COL3 in VARCHAR2,
    P99_X_O3 in VARCHAR2,
    P99_X_V3 in VARCHAR2,
    P99_X_COL4 in VARCHAR2,
    P99_X_O4 in VARCHAR2,
    P99_X_V4 in VARCHAR2,
    P99_X_COL5 in VARCHAR2,
    P99_X_O5 in VARCHAR2,
    P99_X_V5 in VARCHAR2,
    P99_X_CD in VARCHAR2,
    P99_X_VD in VARCHAR2
    ) IS
    l_sql_stmt varchar2(32600);
    p_table_name varchar2(30) := 'DWP.IZV_SLOG_DET'; 
    BEGIN
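    -- decode(new_value, '', old_column, '@', NULL, new_value) keeps the existing
    -- column value when the input is empty and writes NULL when the input is '@'.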
    l_sql_stmt := 'update ' || p_table_name || ' set '
    || P99_X_UC1 || ' = decode('  || P99_X_UV1 ||','''','|| P99_X_UC1  ||',''@'',null,'|| P99_X_UV1  ||'),'
    || P99_X_UC2 || ' = decode('  || P99_X_UV2 ||','''','|| P99_X_UC2  ||',''@'',null,'|| P99_X_UV2  ||'),'
    || P99_X_UC3 || ' = decode('  || P99_X_UV3 ||','''','|| P99_X_UC3  ||',''@'',null,'|| P99_X_UV3  ||') where '||
    P99_X_COL  ||' '|| P99_X_O  ||' ' || P99_X_V  || ' and ' ||
    P99_X_COL2 ||' '|| P99_X_O2 ||' ' || P99_X_V2 || ' and ' ||
    P99_X_COL3 ||' '|| P99_X_O3 ||' ' || P99_X_V3 || ' and ' ||
    P99_X_COL4 ||' '|| P99_X_O4 ||' ' || P99_X_V4 || ' and ' ||
    P99_X_COL5 ||' '|| P99_X_O5 ||' ' || P99_X_V5 || ' and ' ||
    P99_X_CD   ||       ' = '         || P99_X_VD ;
    --dbms_output.put_line(l_sql_stmt); 
    EXECUTE IMMEDIATE l_sql_stmt;
    END;

    Hi Ivan,
    I do not think that the decode is performance-relevant. Maybe the update hangs because some other transaction has uncommitted changes to one of the affected rows, or the where clause is not selective enough and the statement has to update a huge number of records.
    Besides that - and I might be wrong, because I only know part of your app - the code here looks like it has a huge SQL injection vulnerability. Maybe you should consider re-writing your logic in static SQL. If that is not possible, you should make sure that the user input only contains allowed values, e.g. by white-listing P99_X_On (i.e. make sure they only contain known values like '=', '<', ...), and by using dbms_assert.enquote_name/enquote_literal on the other P99_X_nnn parameters.
    Regards,
    Christian
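
    The same idea in JDBC terms, as a sketch only (one filter column shown; the table and column names are hypothetical, and in APEX/PL/SQL the equivalents are static SQL with bind variables plus dbms_assert, as described above). The operator is checked against a whitelist and the values travel as bind parameters, never as concatenated SQL text:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.Set;

    public final class SafeFilterUpdate {
        private static final Set<String> ALLOWED_OPS =
            Set.of("=", "<", ">", "<=", ">=", "<>");

        static int update(Connection conn, String op,
                          String filterValue, String newValue) throws Exception {
            // Only the whitelisted operator is interpolated; values are bound.
            if (!ALLOWED_OPS.contains(op)) {
                throw new IllegalArgumentException("operator not allowed: " + op);
            }
            String sql = "UPDATE izv_slog_det SET col1 = ? WHERE col2 " + op + " ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, newValue);
                ps.setString(2, filterValue);
                return ps.executeUpdate();
            }
        }
    }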

  • Performance issues with Homesharing?

    I have a Time Capsule as the base station for my wireless network, two AirPort Expresses set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing is taking additional bandwidth, so here is the list of issues:
    1) With nothing else running, when I try playing a movie via Home Sharing on an iPad 2 (the movie is located on my iMac), it stops and I have to keep pressing the play button over and over again. I typically see the iPad try to download part of the movie first and then start playing so that it copes with the bandwidth, but in many cases it doesn't.
    2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV I can see my computer's library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
    3) When watching a Netflix movie on my iPad, I send the sound via AirPlay to some speakers through an AirPort Express. At times I lose the connection to the speakers.
    I've complained about Wi-Fi's instability, but here I tried to keep everything Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood was much more stable.
    Has anyone some suggestions?

    Hi,
    you should analyze the db after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache (alter sequence s cache 10000).
    Drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct load?
    Dim

  • Performance issues with Bapi BAPI_MATERIAL_AVAILABILITY...

    Hello,
    I have a Z program to check ATP which works with the BAPI BAPI_MATERIAL_AVAILABILITY.
    As we are on a retail system, we have performance issues with this BAPI due to the huge number of articles for which we need to calculate ATP.
    Is there any way to improve the performance?
    Thanks and best regards
    L

    The BAPI appears to execute for only one plant/material, etc., at a time, so I would concentrate on making data retrieval and post-BAPI processing as efficient as possible. In your trace output, how much of your overall time is consumed by the BAPI? How much by the other code? You might find improvements there...

  • Performance issues with FDK in large XML documents

    In my current project with FrameMaker 8 I'm experiencing severe performance issues with some FDK API calls.
    The documents are about 3-8 MB in size. Formatted, they cover 150-250 pages.
    When importing such an XML document I do some extensive "post-processing" using the FDK. This processing happens in Sr_EventHandler() during the SR_EVT_END_READER event. I noticed that some FDK function calls which modify the document's structure, like F_ApiSetAttribute() or F_ApiNewElementInHierarchy(), take several seconds, for the larger documents even minutes, to complete one single function call. I tried to move some of these calls to earlier events, mostly to SR_EVT_END_ELEM. There the calls work without a delay. Unfortunately I can't rewrite the FDK client to move all of the lagging calls to earlier events.
    Does anybody have a clue why such delays happen, and can anyone make a suggestion on how to solve this issue? Thank you in advance.
    PS: I already thought of splitting such a document into smaller pieces by using the FrameMaker book function. But I don't think the structure of the documents will permit such an automatic split, and it definitely isn't an option to change the document structure (the project is about migrating documents from Interleaf to XML with the constraint of keeping the document layout identical).

    FP_ApplyFormatRules sounds really good--I'll give it a try on Monday. I wonder how I could have missed it, as I already tried FP_Reformatting and FP_Displaying to no avail?! By the way, what is actually meant by FP_Reformatting (when I used it I assumed it would do exactly what FP_ApplyFormatRules sounds like it does), or is that another of Lynne's well-kept secrets?
    Thanks for all the helpful suggestions, guys. On Friday I already had my first improvements in a test version of my client: I did some (not all necessary) structural changes using XSLT pre-processing, and processing went down from 8 hours(!) to 1 hour--Yeappie! I was also playing with the idea of writing a wrapper for F_ApiNewElementInHierarchy() which actually pastes an appropriate element, created in a small flow on the reference pages, at the intended insertion location. But now, with FP_ApplyFormatRules on the horizon, I'm quite confident I can get even the complicated stuff under control which cannot be handled by the XSLT pre-processing, as it depends on the actual formatting of the document at run-time and cannot be anticipated in pre-processing.
    --Franz

  • Performance issues with Exits

    Hi,
    Has anyone had any issues with performance when updating the planning buffer using the exit FM? What should we prepare for regarding performance issues in the production system? Please advise.
    Thanks
    RT

    Hi Rob
    In the case of performance problems you should check the statistics in BPS_STAT0 for possibly long database selections of data.
    On this topic you can try implementing SAP note 729362.
    Ciao
    Andr
