Performance issue with lots of LOVs

Hi,
we have htmldb 1.5 installed with several workspaces. One of the workspaces hosts about 20 applications, and one of those applications has 40-50 LOVs. If I log into the workspace as the ADMIN user, navigate to the Builder, choose the application with all the LOVs, and click the LOVs tab, it hangs for ages and finally errors out with "page not found". This warning appears in the Apache error_log:
mod_plsql: Long running URL [pls/grp/f] timed out
Can you please advise on any steps we could take to improve performance on the LOV admin page? Overall performance with other applications and htmldb in general is fine; we only have an issue with this one application, which has a large number of LOVs. Can you also advise whether it is possible to remove LOVs directly from the htmldb tables, as we are not able to remove them through the htmldb web interface?
thanks!
Nina

Hi,
thanks for looking into this thread!
This issue was caused by a buffer cache that was too small in the database.
db_cache_size was increased and a few other parameters were adjusted.
This fixed the issue with the LOVs screen.
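For anyone hitting the same symptom, the change was along these lines (illustrative size only, not our exact value):
ALTER SYSTEM SET db_cache_size = 512M SCOPE=BOTH;
-- verify the new value
SHOW PARAMETER db_cache_size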
thanks,
Nina

Similar Messages

  • Performance issues with LOV bindings in 3-tier BC4J architecture

    We are running BC4J and JClient (JDeveloper 9.0.3.4/9iAS 9.0.2) in a 3-tier architecture, and have performance problems.
    One of our problems is comboboxes with LOV bindings. The view objects that provide data for the LOV bindings contain simple queries on tables with only 4-10 rows, and there are no view links or entity objects on these views.
    Creating the LOV binding and setting the model for the combobox takes about 1 second per combobox.
    We have tried most of the tips in http://otn.oracle.com/products/jdev/tips/muench/jclientperf/index.html, but they do not seem to help with our problem.
    The performance is OK (if not great) when the same code runs as 2-tier.
    Does anyone have any good suggestions?

    I can recommend that you look at the following two bugs on Metalink: Bug 2640945 and Bug 3621502.
    They relate to the disabling of TCP socket-level acknowledgement, which slows down remote communications for EJB components using ORMI (the protocol used by Oracle OC4J) to communicate between remote EJB client and server.
    A BC4J Application Module deployed as an EJB suffers this same network latency penalty due to the TCP acknowledgement.
    A customer sent me information (that you'll see there as a part of Bug# 3621502) like this on a related issue:
    We found our application runs very slowly in 3-tier mode (JClient, BC4J deployed
    as an EJB Session Bean on 9iAS server 9.0.2 enterprise edition). We spent a lot
    of time tuning our code, but that helped very little. Eventually we found
    the problem seemed to happen at the TCP level: there is a 200 ms delay in TCP.
    After we read some documents about the Nagle algorithm, we disabled a
    registry key (TcpDelAckTicks) in Windows 2000 on both client and server. This
    made our program a lot faster.
    Anyway, we think we should provide our clients a better solution than
    changing the Windows registry for them; for example, there may be a way to disable
    Nagle's algorithm through java.net.Socket.setTcpNoDelay(true), in BC4J,
    or somewhere in our code. We have not figured this out yet.
    Bug 2640945 was fixed in Oracle Application Server 10g (v9.0.4), which now disables this TCP acknowledgement on the server side. In the bug database, I see backport patches available for the earlier 9.0.3 and 9.0.2 releases of iAS as well.
    Bug 3621502 requests that the same disabling also be performed on the client side by the ORMI code. I have received a test patch from development to try out, but haven't had the chance yet.
    The customer's workaround in the interim was to disable this TCP Acknowledgement at the OS level by modifying a Windows registry setting as noted above.
    See Also http://support.microsoft.com/default.aspx?kbid=328890
    "New registry entry for controlling the TCP Acknowledgment (ACK) behavior in Windows XP and in Windows Server 2003" which documents that the registry entry to change disable this acknowledgement has a different name in Windows XP and Windows 2003.
    Hope this info helps. It would be useful to hear back from you on whether this helps your performance issue.

  • Performance issue in LOV (Oracle Forms)

    I have a requirement to populate an LOV in a form, which is taking a lot of time (performance issue).
    The record group query is as follows:
    select segment1 INVENTORY_ITEM ,
    inventory_item_id,
    description,
    primary_uom_code,
    decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
    service_duration_period_code,
    shippable_item_flag,
    Decode(bom_item_type ,
    1,'MDL',2,'OPT',3,'PLN',4,
    Decode( service_item_flag,'Y','SRV',
    Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
    from mtl_system_items_b --table name
    where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
    AND (bom_item_type = 1 or bom_item_type = 4)
    AND vendor_warranty_flag = 'N'
    AND primary_uom_code <> 'ENR'
    AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
    (:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
    AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM || '%'
    Whenever I enter :QOTLNDET_LINES.INVENTORY_ITEM from the front end, this LOV needs to be displayed.
    It is taking more than 3 minutes, depending on the item given.
    Please suggest how I can reduce this time.
    Thanks,
    Durga Srinivas
    Edited by: DurgaSrinivas_886836 on May 31, 2012 5:14 PM

    I had an idea:
    record_group1=
    select segment1 INVENTORY_ITEM ,
    inventory_item_id,
    description,
    primary_uom_code,
    decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
    service_duration_period_code,
    shippable_item_flag,
    Decode(bom_item_type ,
    1,'MDL',2,'OPT',3,'PLN',4,
    Decode( service_item_flag,'Y','SRV',
    Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
    from mtl_system_items_b --table name
    where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
    AND (bom_item_type = 1 or bom_item_type = 4)
    AND vendor_warranty_flag = 'N'
    AND primary_uom_code <> 'ENR'
    AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
    (:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
    AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM
    Record_group2 =
    select segment1 INVENTORY_ITEM ,
    inventory_item_id,
    description,
    primary_uom_code,
    decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
    service_duration_period_code,
    shippable_item_flag,
    Decode(bom_item_type ,
    1,'MDL',2,'OPT',3,'PLN',4,
    Decode( service_item_flag,'Y','SRV',
    Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
    from mtl_system_items_b --table name
    where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
    AND (bom_item_type = 1 or bom_item_type = 4)
    AND vendor_warranty_flag = 'N'
    AND primary_uom_code <> 'ENR'
    AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
    (:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
    AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM || '%'
    If the user gives the full item name, I will dynamically assign Record_group1; otherwise I will assign Record_group2, using Set_LOV_Property(), so that when the full item name is given the LOV is populated quickly.
    Please suggest which triggers I should use.
    Edited by: DurgaSrinivas_886836 on May 31, 2012 6:49 PM
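    One way this is commonly wired up (a sketch only; the LOV name and the is_full_item_name helper are placeholders, and RECORD_GROUP1/RECORD_GROUP2 are the two groups above) is a KEY-LISTVAL trigger on the item that picks the record group before showing the LOV:
    -- KEY-LISTVAL trigger on QOTLNDET_LINES.INVENTORY_ITEM (sketch)
    DECLARE
       lv       LOV := FIND_LOV ('ITEMS_LOV');               -- placeholder LOV name
       l_item   VARCHAR2 (240) := :QOTLNDET_LINES.INVENTORY_ITEM;
    BEGIN
       -- is_full_item_name is a placeholder: replace it with whatever check
       -- identifies a complete segment1 value in your data
       IF is_full_item_name (l_item) THEN
          SET_LOV_PROPERTY (lv, GROUP_NAME, 'RECORD_GROUP1');   -- exact-match query
       ELSE
          SET_LOV_PROPERTY (lv, GROUP_NAME, 'RECORD_GROUP2');   -- LIKE ... || '%' query
       END IF;
       LIST_VALUES;
    END;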

  • Performance issue on LOV in JSP

    Hi,
    In my JSP application, I use two LOVs for lookup. One of them shows about 3,000 or so records. When I click the LOV button for this LOV, it takes more than 20 seconds to see the result. This is unacceptable. Does anyone know how I can tune the BC4J framework to improve performance? Any input will be highly appreciated.
    Rick

    On many projects I've worked on, including non-JDev ones, a general user interface rule was that you don't use an LOV if there are much more than 20-30 items. Think of this from a USER standpoint.
    Note also that in Swing/DACF the combobox does NOT let you type the first letters and automatically jump to the right place (i.e. entering WI in a state list doesn't take you to WISCONSIN). This is a Sun JDK issue. (Or has it changed in 1.3 in some manner... or is there a property to allow this?)
    As such, you have more of an application design issue than a performance issue.
    Good Luck

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really; I'm not sure what I'm going to do with my issue, but I have some options. I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report: the users want to see all agreements and all conditions related to the updating of rebates, plus the affected invoices. From a technical perspective, ENT6038-KONV-KONP-KONA-KNA1 are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code. If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report. This would work, but I'm not going to know what data needs to be loaded until report run time; they put in a date, so I simply can't preload everything. I don't like this option much.
    2) Write a function module to do this work. When the user clicks the button to get this particular data, it will launch the FM in the background and e-mail them the results. As you know, the background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option, we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents: firstly, I totally agree with Derick that it's probably a good idea to go back to the business and question the requirement in regards to reporting and "whether any user can meaningfully process all those results in aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much; in my experience they neither understand (too much) technology nor want to hear about the technical limitations of a system. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you are already using efficient programming techniques like hashed internal tables with unique keys and accessing rows via field symbols, but what I was going to suggest is that you look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past when handling massive amounts of data, and I found extracts to be very efficient in regards to performance. A good point to remember when using extracts, quoting SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.

  • After Effects CS6 Performance issues

    Hello all. Well, we got three new Mac Pro 12-core systems, all running 65-128 GB of RAM with Quadro 4000 GPU cards, up-to-date drivers, up-to-date AE CS6 and dedicated 500 GB solid state drives for the global cache. They're all set up according to Adobe specifications/instructions. And no, we are not working with ray tracing activated in comps.
    We have all noticed that performance on some projects is painfully slow. In fact, some projects which run fine on an older Mac Pro with AE CS5.5 creep on the new boxes running CS6. So slow, in fact, that we have had to abort and jump to the older machines to get things done in a timely manner. Simple things like typing text, navigating the GUI and scrubbing the timeline are frustrating. We are not new to Mac boxes or After Effects; we have all been using AE since version 4. It also seems like projects with a lot of footage, or a lot of higher-resolution stills, really bog down. Something is definitely wrong here. Soloing doesn't seem to help, reducing resolution doesn't seem to help, and we have tried about every preference option to free up performance.
    If anyone else is experiencing these same issues, I would love to hear about it or about possible solutions. The global cache is really nice when all hardware/software is hitting on all cylinders, but something seems amiss. Not getting why there are performance issues on some projects and with basic operations like typing text.
    Thanks in advance for any help that can be provided.
    Chris
    Chris Abolt
    Motion designer
    Abolt Media

    See images of all settings for the fastest of 3 machines.
    12 core Mac Pro
    OS 10.7.4
    128 GB of RAM
    2 Quadro 4000 Mac GPU cards running in parallel in a Cubix breakout box
    Monitors run one each off each graphics card
    Internal GPU card not used for monitors
    CUDA drivers up to date (5.0.17)
    AE updated to version 11.0.1.12

  • AFP Performance Issues

    Hi,
    I have just got my first Mac, so if I am asking a newbie question please excuse me.
    I am trying to connect my Mac to a QNAP TS-219+ NAS using AFP, but I have noticed that the performance is unusable. On a Windows PC I can connect to a drive on the NAS almost instantly, while on my Mac it takes several minutes to connect, and I also keep getting errors which look like the share/folder is read-only (lock errors, even when I try to copy a new file to the folder) where a few minutes prior everything was working fine.
    After googling I have seen recommendations to add exclusions to my NAS for Spotlight, which can (apparently) slow things down to a crawl; after a lot of struggling due to the performance, I have managed to add all the shares to the exclusion list on the NAS.
    I have also noticed that after each reboot I seem to have to go to Finder, select the server (under Shared) and then select the share (which, as I mentioned above, takes a long time). So with all this in mind, is anyone able to help with the following?
    1. How can I improve the performance of connecting to/using the NAS? (I mentioned AFP, but I have the same performance issues when I try to connect with SMB, so I am fairly confident that the problem isn't with the protocol I am using to connect to the NAS.)
    2. How can I get the Mac to auto-connect to the shares on startup?
    3. How do I resolve the fact that, after a while of writing data to a share, the share seems to go read-only and the only solution seems to be to reboot the Mac/NAS? I have made sure that no other devices are attached to the network, so I can confirm I am not getting conflicts of any kind (can't imagine why, but I want to make sure).
    A quick note: I have so far been unable to get Time Machine to work with my NAS (it should work, but I am getting an error which I am waiting for a reply from QNAP Support on), so I am currently using GoodSync file synchronization to keep my data backed up until I can resolve this issue. I am not sure if this could be causing the problem, but again I can't see why.
    In case it helps, the spec of the Mac I am running is as follows:
    Mac Mini (2012 Model)
    8 GB 1600 MHz RAM
    2.6 GHz i7 processor
    1TB Fusion Drive
    Thanks,
    Gavin,

    SMB can be a pita too, so don't fall in love with it. You might get disappointed too.
    Pita? Can you be more specific?
    I am having the same dilemma: should I save and back up my files via SMB or AFP? Which is the more reliable method? I am backing up to a network LaCie drive formatted as FAT32... And when restoring, I assume that you should always restore using the original protocol used to back up?
    I also noticed after a recent upgrade that backups tested via SMB on my Mac running OS X 10.4.8 are smoother than when I try to back up via SMB on my iBook running OS X 10.3.9. Were there bugs in 10.3 which prevented smooth backups via SMB?
    Thanks.

  • AP invoice workbench performance issue.

    Hi Guys,
    We have a production system with RHEL 5, R12 12.0.6 and DB 10.2.0.4.
    We are facing a performance issue in the Invoice Workbench: when users click on Invoice Batches or Invoices, it takes a long time to open the invoice form.
    Validation of invoices also consumes a lot of time, which is unacceptable.
    Please give us some pointers or note IDs to follow; we have already logged an SR with Oracle.
    This is urgent.
    Regards,
    Milan

    Please see these docs.
    R12 Invoice Workbench Form Has A Performance Issue [ID 1072338.1]
    Bad Performance When Checking Funds In Invoice Workbench [ID 1091280.1]
    Bad Performance In Invoice Workbench (APXINWKB) Find Window When Searching By Purchase Order [ID 1195623.1]
    R12 AP Invoice Workbench Performance Issues [ID 957105.1]
    Poor Performance On Invoice Validation In The Invoice Workbench [ID 1130313.1]
    Invoice Workbench> Actions: Pay in Full Performance Issue [ID 983804.1]
    Invoice Workbench (APXINWKB) Performance Issue While Selecting The Self Assessment Check Box [ID 1210340.1]
    Performance of Project Expenditure LOV At AP Invoice Header in Invoice Workbench [ID 1143943.1]
    R12.1.1 Performance Problem in AP Invoice Workbench [ID 861205.1]
    R12 Invoice Performance FAQs [ID 579737.1]
    Thanks,
    Hussein

  • Interested in performance issues? Read this! If you can explain it, you're a master Jedi!

    This is the question we will try to answer:
    What is the hardware bottleneck of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID installed, but not relevant for the present post...
    PSU
    Corsair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
    Intro: I tested my E5 Xeon 2687W (8 cores hyperthreaded, 16 threads) to see whether programs can use all of it. I used Prime95 to get the result. // I know this seems ordinary, but you will understand soon...
    The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
    Comment: I put the 3 IOs (CPU, disk, RAM) of my computer on the graph during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
    Intro: I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6GB/s, 5 disks with FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
    The result: As you can see in picture 2, yes, I can get the max out of my drive at ~1.2 GB/sec read/write, steady!
    Comment: I put the 3 IOs (CPU, disk, RAM) of my computer on the graph during the test to see the impact of transferring many GB of data over ~10 sec...
    (picture 2)
    Now I know my limits! It's time to go deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate looks like a wave (up and down), probably caused by (encode... write... encode... write...). // It's ok, ~5 MB/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable throughout).
    RAM: more than enough! 39 GB of RAM free after the test! // Excellent
    ~65 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores!)
    GPU load on the card also looks like a wave (up and down), ~40% GPU usage during the encoding process.
    GPU RAM usage is 1.2 GB (but with the GTX 680, no problem, and with the Quadro 6000's 6 GB of RAM, no problem!)
    Comment/Question: The CPU is free (50%), the disks are free (99%), the GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    Other: The Quadro 6000 & GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result:
    My CPU is not used at 100%.
    My disk waves up and down, but stays far, far from the limit!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves up and down again, probably caused by (buffer... write... buffer... write...). // It's ok, ~375 MB/sec peak transfer rate! Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable throughout).
    RAM: more than enough! 40.5 GB of RAM free after the test! // Excellent
    ~48 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores!)
    GPU load on the card = 0 (this kind of encoding is GPU-irrelevant).
    GPU RAM usage is 400 MB (not used for encoding).
    Comment/Question: The CPU is free (65%), the disks are free (60%), the GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly to my RAM drive.
    Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady) while never dropping under 30% disk usage. The CPU is idle (70%), the disk is idle (100%), the GPU is idle (100%) and RAM is free (63%). // This kind of result leaves me REALLY confused. It smells like a bug and a big problem with hardware and IO usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves up and down again, probably caused by (encode... write... encode... write...). // It's ok, ~2 MB/sec transfer rate! A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable throughout).
    RAM: more than enough! 40 GB of RAM free after the test! // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's ok in multi-threaded apps!)
    GPU load on the card = 100 (this uses the maximum of my GPU).
    GPU RAM usage is 1 GB.
    Comment/Question: The CPU is free (70%), the disks are free (98%), the GPU is loaded (MAX), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest IO (the video card's GPU).
    Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding a single FULL HD AVCHD clip to H.264 Result (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/Question: The CPU is free (55%), the disks are free (99%), the GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why???? Adobe Premiere seems to have some bug with thread management. My hardware is idle! I understand AVCHD can be very difficult to decode, but where is the waste? My computer is willing, but the software is not!
    (picture 7)
    Render composition using the 3D ray tracer in After Effects CS6
    You can see the result in the picture.
    Comment: The GPU seems to be the bottleneck when using After Effects. The CPU is free (99%), the disks are free (98%), memory is free (60%), and it depends on the settings and type of project.
    Other: The Quadro 6000 & GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance from it). I have not used a Tesla card with my Quadro, but currently neither Premiere Pro nor After Effects uses multiple GPUs. I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless! Premiere Pro is not able to get the maximum performance out of my computer. Not just 10% or 20% short, but 60% on average. I'm a programmer; multi-threaded apps are difficult to manage, and I can understand Adobe's programmers. But if anybody has comments about this post, tricks or any kind of solution, please comment. It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D ray tracing in AE is not very good at using all CUDA cores. In fact it is lousy: it only uses very few cores, and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • RE: Case 59063: performance issues w/ C TLIB and Forte 3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jmin@brio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open cursor is something that is commonly used in today's RDBMSs. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk activity or CPU usage on
    the database server side? On the Forte side? Absolutely no activity at all?
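    For question 1, the kind of ISQL test I mean would look roughly like this (table, column and pattern are placeholders):
    declare like_cur cursor for
        select * from your_table where your_col like 'ABC%'
    go
    open like_cur
    fetch like_cur
    close like_cur
    deallocate cursor like_cur
    go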
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this, we need:
    a) a BCP dump of the data into a flat file (character format to make it portable);
    b) a script to create the table and indexes;
    c) the Forte PEX file of the app to test this out;
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but being
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what it is doing.
    It is doing OPEN CURSOR, but it is not clear exactly which statement the cursor is trying to open.
    When we run the query directly against Sybase, not using Forte, it clearly does not open any cursor.
    And running it directly against Sybase many times, the response is always consistently fast.
    It is only when the query runs from Forte to Sybase that it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit every statement coming to Sybase. Same thing: just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.min@sun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

  • Returning multiple values from a called tabular form (performance issue)

    I hope someone can help with this.
    I have a form that calls another form to display a multiple-column tabular list of values (it needs to allow user sorting, so I could not use an LOV).
    The user selects one or more records from the list using check boxes. To detect the records selected, I loop through the block looking for boxes checked off and return those records to the calling form via a PL/SQL table.
    The form displaying the tabular list loads quickly (about 5000 records in the base table). However, when I select one or more values from the list and return to the calling form, it takes a while (about 3-4 minutes) to get back with the selected values.
    I guess it is going through the block (all 5000 records) looking for boxes checked off, and that is what is causing the noticeable pause.
    Is this normal given the data volumes I have, or are there other, perhaps better, techniques or tricks I could use to improve performance? I am using Forms 6i.
    Sorry for being so long-winded, and thanks in advance for any help.

    Try writing to your PL/SQL table when the user selects (or removing the entry when they deselect) using a WHEN-CHECKBOX-CHANGED trigger, as sketched below. This eliminates the need to loop through a block with 5000 records and should improve your performance.
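    A minimal sketch of that trigger (block, item and package names are placeholders; selection_pkg stands for your own package wrapping the PL/SQL table):
    -- WHEN-CHECKBOX-CHANGED trigger on the checkbox item (sketch)
    BEGIN
       IF CHECKBOX_CHECKED ('LIST_BLK.SELECTED_CB') THEN
          -- remember this row's key as soon as the user selects it
          selection_pkg.add_id (:LIST_BLK.RECORD_ID);
       ELSE
          -- forget it again if the user deselects
          selection_pkg.remove_id (:LIST_BLK.RECORD_ID);
       END IF;
    END;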
    I am not aware of any performance issues with PL/SQL tables in Forms, but if you still have slow performance, try using a shared record group instead. I have used these in the past for exactly the same thing and had no performance problems.
    Hope this helps,
    Candace Stover
    Forms Product Management

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions.
    Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
             SELECT /*+ PARALLEL(t, 5) */ colC,
                    SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                    SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                    SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                    SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
               FROM TABLE (processor (base_query (argA, argB), 100)) t
           GROUP BY colC
           ORDER BY colC;
        END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                      SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
    ALTER PACKAGE pipeline_example COMPILE;
    Edited by: Earthlink on Nov 14, 2010 9:47 AM
    Edited by: Earthlink on Nov 14, 2010 11:31 AM
    Edited by: Earthlink on Nov 14, 2010 11:32 AM
    Edited by: Earthlink on Nov 20, 2010 12:04 PM
    Edited by: Earthlink on Nov 20, 2010 12:54 PM
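    For reference, a small harness for timing each variant could look like this (a sketch; the bind values are placeholders, and the ref cursor must be fetched to completion before stopping the clock, since opening it does almost no work):
    DECLARE
       rc   pipeline_example.resultset_typ;
       t0   PLS_INTEGER := DBMS_UTILITY.GET_TIME;
    BEGIN
       pipeline_example.with_pipeline ('valueA', 'valueB', rc);
       -- fetch rc to completion here before reading the clock
       DBMS_OUTPUT.PUT_LINE ('elapsed (centiseconds): ' || (DBMS_UTILITY.GET_TIME - t0));
    END;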

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
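    For reference, enabling such a trace for your own session can look like this (one common way; adjust to your version, and format the resulting trace file with tkprof):
    ALTER SESSION SET tracefile_identifier = 'pipeline_test';
    -- include wait events and bind values in the trace
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
    -- ... run with_pipeline and no_pipeline here ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;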

  • Performance issues with Homesharing?

    I have a Time Capsule as the base station for my wireless network, then 2 AirPort Expresses set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing is taking additional bandwidth, so here is the list of issues:
    1) With nothing else running, when trying to play a movie (located on my iMac) via Home Sharing on an iPad 2, it stops and I have to keep pressing the play button over and over again. Typically the iPad tries to download part of the movie first and then starts playing so that it can cope with the bandwidth, but in many cases it doesn't.
    2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV, I can see my computer's library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
    3) When watching a Netflix movie on my iPad, I send the sound to some speakers via AirPlay through an AirPort Express. At times I lose the connection to the speakers.
    I've complained about WiFi's instability, but here I tried to keep everything within Apple's products to avoid any compatibility issues and to stay within N wireless technology, which I understood was much more stable.
    Does anyone have any suggestions?

    Hi,
    you should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache (alter sequence s cache 10000);
    drop all unneeded indexes while loading, and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct load? (See the sketch below.)
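    To make those points concrete, the statements involved look like this (illustrative object names only):
    ALTER SEQUENCE my_seq CACHE 10000;            -- cut sequence maintenance overhead
    ALTER TABLE target_tab DISABLE ALL TRIGGERS;  -- skip trigger firing during the load
    INSERT /*+ APPEND */ INTO target_tab          -- direct-path load
       SELECT * FROM staging_tab;
    COMMIT;
    ALTER TABLE target_tab ENABLE ALL TRIGGERS;
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TARGET_TAB')   -- analyze after loading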
    Dim

  • Performance issues with Folio format on iPad

    Hello everyone! My FIRST post here!!!
    I work with educational games and I'm facing performance issues with games that I've made in HTML5 to play on the iPad. I tried to import them into the folio format in DPS (Adobe Digital Publishing Suite). However, when I import the HTML game into InDesign and try to preview it in Adobe Content Viewer, the game doesn't open, or runs without performing properly (there is a lag that makes the game no fun to play).
    The games that I've created use at most 35 MB of memory and weigh 30 MB at most.
    Does anyone know what's happening and what I can do to fix this performance issue?
    Thanks a lot!

    Moved to DPS

  • Performance issues with the Tuxedo MQ Adapter

    We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms to execute, so there is a considerable waste of time in the MQ adapter that we cannot explain.
    Also, we have seen a lot of rolled-back transactions on the MQ adapter; for example, we got 980 rollbacks for 15736 transactions sent, and only the MQ adapter is involved in the rollback. However, the operations are executed properly. The error we got is
    135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
    I have been looking for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?

    Hi Todd,
    We have 6 MQI adapters reading from 5 different queues, but in this case we are writing to only one queue.
    Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues, and when one adapter finds a message and tries to get it, another MQ adapter may get it first; in that case the MQ adapter rolls back the transaction. Even though we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually does the rollback, so when the MQ adapter receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
    However, I am more worried about the performance. We are putting a request message in an MQ queue and waiting for the reply. In some cases it takes 150 ms, and in other cases it takes much longer (more than 400 ms); the average is 300 ms. The MQ adapter calls a service (txgralms0) which takes 110 ms on average.
    This is our configuration:
    "MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
    CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
    RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
    SYSTEM_ACCESS=FASTPATH
    MAXGEN=1 GRACE=86400 RESTART=N
    MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
    SICACHEENTRIESMAX="500"
    /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
    *SERVER
    MINMSGLEVEL=0
    MAXMSGLEVEL=0
    DEFMAXMSGLEN=4096
    TPESVCFAILDATA=Y
    *QUEUE_MANAGER
    LQMID=QMTESX01
    NAME=QMTESX01
    *SERVICE
    NAME=txgralms0
    FORMAT=MQSTR
    TRAN=N
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
    *QUEUE
    LQMID=QMTESX01
    MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
    Thanks in advance,
    Marling
