Performance issue: itemRenderers created twice

Hi,
I have a big performance issue with my Flex app. I have a Repeater whose repeated items contain a Tree. I noticed that each of those tree elements was created twice (I increment a static counter in their creationComplete handler). Does that symptom suggest any problem I might have in my code?

First, Repeater is not a good choice if you have many complex elements, because it renders all of the items. How many items are you repeating? (What is the length of your dataProvider?)
Second, I suspect your methodology for analyzing the instantiations. I have not seen such behavior. You might post a sample app that demonstrates the problem.
Tracy

Similar Messages

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    I hope you can help; this involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it had reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see the attached SQL Inspector plans:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I’ve been doing some more digging and have found the database view that is linked to the Journal Batches folder; see below.
    I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the decoded value 'Posted'. I’m not too sure how DECODE works, but I think this could be what is causing the full table scans. How do we get around this? (See the sketch after the DECODE expression below.)
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
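    To make the workaround concrete, here is a hedged sketch (an illustration, not a tested fix): since the decoded labels come from the STATUS column of GL.GL_JE_BATCHES, filtering on the raw status code spares Oracle from evaluating the DECODE for every row and lets the optimizer estimate selectivity on the real column. In Discoverer this would mean exposing STATUS (or a calculated item on it) in the folder and conditioning on the code:
    SELECT jb.je_batch_id, jb.name
    FROM gl.gl_je_batches jb
    WHERE jb.status = 'P';  -- 'P' is the code that DECODEs to 'Posted' above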

  • Performance issue with create triggered by

    Hi,
    We have a big performance problem when trying to create a 'triggered by' reference on a business function.
    Using the Headstart utilities (Productivity Boosters - Analyses - Maintain default events for entities) we created a lot of events on different entities.
    When trying to create a 'triggered by' reference for a business function, the select list never comes up.
    This is exactly what happens:
    Enter the RON (Repository Object Navigator)
    choose an application
    choose a business function
    click on 'triggered by'
    Click on the plus sign on the toolbar to create a reference.
    The next screen never appears; we let the program run for over 4 hours without any results. When investigating we found two queries being continuously executed; this one is causing the major problem:
    SELECT MAX(1) FROM I$SDD_WA_CONTEXT CTXT WHERE CTXT.OBJECT_IVID = :b1 AND CTXT.WORKAREA_IRID = JR_CONTEXT.WORKAREA AND CTXT.WASTEBASKET = JR_CONTEXT.WASTEBASKET
    We ran the tkprof utility and found this:
    SELECT MAX(1)
    FROM
    I$SDD_WA_CONTEXT CTXT WHERE CTXT.OBJECT_IVID = :b1 AND CTXT.WORKAREA_IRID =
    JR_CONTEXT.WORKAREA AND CTXT.WASTEBASKET = JR_CONTEXT.WASTEBASKET
    call        count   cpu   elapsed   disk   query   current    rows
    Parse           0  0.00      0.00      0       0         0       0
    Execute     43995  0.00      0.00      0       0         0       0
    Fetch       43994  0.00      0.00      0   88041         0   43994
    total       87989  0.00      0.00      0   88041         0   43994
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: 34 (REPOS_MANAGER) (recursive depth: 1)
    Rows Execution Plan
    0 SELECT STATEMENT GOAL: CHOOSE
    0 SORT (AGGREGATE)
    0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'PK_WA_CTXT' (UNIQUE)
    Has anybody had similar problems, and does anyone know a solution for this?
    Yvon

    Yvon,
    This must be a Designer (RON) problem, unrelated to Headstart. If you had created the events manually, you would have had the same problem.
    I saw that you already posted this same question in the Designer forum. If you don't get any reply, I'd advise you to contact Oracle Worldwide Customer Support.
    kind regards,
    Sandra Muller

  • Process flow/map performance issues

    We have some issues with our OWB-based application and we're looking to find out if there are different ways we could be using the tool, or features/options we've missed.
    We are trying to maintain a near real time feed of data from a front end system into our warehouse, which was built using OWB 10.2.0.3 over a 10.2.0.4 database. The bulk of the application consists of OWB maps with a few hand-written PL/SQL objects, all executed in a series of hierarchical OWB process flows. Maps/transformations are executed either sequentially or in parallel where the referential integrity of the model allows.
    The problem is that we have around 150 tables in the datamart which could potentially require updating on each refresh cycle, although in practice only a few tables have any activity on a typical refresh cycle. The cycle consists of loading data into a set of staging tables, and from there the data is transformed into the main schema, often with multiple maps per target table.
    On every cycle we run hundreds of maps, the vast majority of which process zero rows. Each map runs quickly and efficiently in its own right but collectively they add up to a 5 - 10 min cycle even if there is no data to process.
    There are 2 avenues we'd like to explore, and we would be grateful if anyone could provide any pointers/suggestions:
    1) It appears that each map opens and closes its own database session when it executes. I presume this was done because a single process flow could be constructed with maps executing in different target schemas, but we know that's not the case for us. We'd like to know if there is any way to configure the database connection at a higher level (e.g. process flow) so it opens a connection once and executes each of the maps (database packages) in that one session.
    Our DBAs are experimenting with 'shared server' settings at a database level which may help to some degree but won't be the whole story.
    2) Another option is simply to run fewer maps, e.g. load the staging area as now, collect stats on which staging tables contain new data, and then apply some logic such that subsequent maps only execute if the relevant staging table(s) contain some new data, otherwise bypass that map (see the sketch at the end of this post).
    We tried experimenting with the 'Pre Mapping Process' operator, but essentially that just generates another function call from the map package, so we still have the overhead of opening a database session for each map to run the package. Minimal gain.
    We thought about adding a function call in the process flow before each map and then branching to either execute or bypass the map as appropriate, but the function call still requires opening/closing a database session each time so, once again, minimal gain.
    What we really want is some way for a map or process flow to check without logging onto the database repeatedly.
    Any ideas on the above, or other potential solutions anyone could suggest, would be greatly appreciated.
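    A minimal sketch of the gating idea in option 2 (the table and column names are assumptions, not our schema): after the staging load, a single statement run in one session records which staging tables actually received rows, and the process flow branches once on that control table instead of probing per map.
    INSERT INTO stg_load_activity (table_name, row_count, load_ts)
    SELECT 'STG_ORDERS', COUNT(*), SYSDATE FROM stg_orders WHERE batch_id = :batch_id
    UNION ALL
    SELECT 'STG_CUSTOMERS', COUNT(*), SYSDATE FROM stg_customers WHERE batch_id = :batch_id;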

    Hi,
    Please see if these documents help.
    Note: 554635.1 - Create Accounting Process Performs Poorly When 100K + Distributions are Passed for an Event
    Note: 954273.1 - Multiple Create Accounting Requests Result In Poor Performance For Online Accruals
    Note: 763500.1 - R12: Performance Issue with Create Accounting
    Note: 733637.1 - R12:Performance Issue When Running Accounting Program Xlaaccup
    Note: 781311.1 - Create Accounting Process Taking A Long Time To Complete After Applying Critical Patches
    Note: 557869.1 - EBS: R12 Oracle Financials Critical Patches
    Regards,
    Hussein

  • While creating Billing, system is very slow..performance issue

    Hi,
    While creating billing, the system is very slow. How can I debug this and analyze where exactly the problem is?
    This looks like a performance issue.
    Waiting for your kind response.
    Best Regards,
    Padhy
    Moderator Message : Duplicate post locked.
    Edited by: Vinod Kumar on May 12, 2011 10:59 AM

    Hi,
    Check these links:
    http://help.sap.com/saphelp_nw04/helpdata/en/4a/e71f39488fee0ce10000000a114084/content.htm
    Re: How to create Secondary Index?
    How may secondary indices I can create on the ODS?
    Deletion of ODS index
    Ramesh

  • As a number of Oracle Connections created - Performance Issue

    Hello,
    We are using Oracle 9i as the backend and VB 6.0 as the frontend. One connection is opened on the database server per client, so if 200 clients open the application then 200 connections are opened on the database server, and this is causing a performance issue.
    What we do today: if the frontend application is not in use for 10 minutes, the connection is closed and the frontend application is terminated; likewise, if the client terminates the application, the connection is closed on the server.
    Even though 200 clients connect to the server, only 4 or 5 clients at a time actually send requests to the server to retrieve or save data.
    I want to open a fixed number of connections on the database server, e.g. around 10 connections, and use those 10 connections for the 200 clients.
    At first, 10 connections are opened and 10 clients use them. When one of those clients releases its connection after finishing its job and an 11th client opens the application, the released connection should be reused without opening a new one. In the same way, whenever connections become free, they should be reused for new logins or transactions, so that the number of connections to be created can be reduced.
    Please give me suggestions on how to reuse released connections for transactions or for opening the application.
    Thanking U with Regards,
    Sravan,
    Hyderabad.

    As Satish mentioned, Shared Server can be a good solution, but it would also be worth telling us how you quantified that the number of connections is what's causing your performance issue. 200 is not that big a number, I guess. What's the OS, the exact database version, and the system and database details? Also, if you have a Statspack report, post that too.
    HTH
    Aman....
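    A minimal shared server sketch (the parameter values are illustrative only and must be sized for the real workload): a small pool of shared server processes services many sessions through dispatchers, so 200 clients no longer mean 200 dedicated server processes.
    ALTER SYSTEM SET shared_servers = 10;
    ALTER SYSTEM SET max_shared_servers = 20;
    ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)';
    Clients then need to connect through a TNS alias whose CONNECT_DATA contains (SERVER=shared) for their sessions to use the pool.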

  • Performance issue: Calling a BAPI PO create in test mode to get error msgs

    Hi,
    We have an ALV report in which we display purchase orders that were created in SAP but either got blocked for not meeting PO release strategy tolerances or have failed output messages. We even display the failed messages.
    We loop over the internal table of EBAN (purchase requisitions) and call the PO-create BAPI in test mode to get the failed messages.
    Now we are facing a performance issue in production. What would be a more efficient way to get the error messages?
    Regards,
    Ayub H.
    Moderator message: duplicate post (different ID, same company...), see below:
    Performance issue calling bapi po create in test mode to get error messages
    Edited by: Thomas Zloch on Mar 9, 2012

    Hi Suvarna,
    so you need to reduce the number of PO simulations.
    - You have likely already checked that all EBAN entries should in fact be converted into POs; a large number of "new" EBAN entries would not need to be simulated.
    - If it's a temporary problem: help correct the underlying problems (maintain prices, or whatever the error reasons are). The number of unconverted purchase requisitions (PRs) should then drop, too.
    - If your volume of open PRs is likely to stay high: create a Z-table keyed on EBAN plus a counter, simulate the PO conversions once a day, and store the results in the Z-table (sketched after this reply). In your report you can then use those stored results, as long as they are "new enough". New simulations should be run from time to time, since missing master data might have become available.
    Maybe users should also be allowed to start this second report manually (in the background); then they can refresh the messages after making data corrections themselves, without waiting for the result (just check the online report later and do something else in between).
    And you might need to explain that a PO simulation takes as long as a PO creation; there is no easy or fast way around this.
    Best regards,
    Christian
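    The Z-table idea, sketched as plain DDL just to show the shape (in a real SAP system you would define it in the ABAP Dictionary via SE11, and all names here are made up):
    CREATE TABLE zpo_sim_result (
      banfn     VARCHAR2(10),   -- purchase requisition number (EBAN key)
      bnfpo     VARCHAR2(5),    -- requisition item number
      sim_count NUMBER,         -- counter of simulation runs
      sim_date  DATE,           -- when the PO simulation last ran
      msg_type  VARCHAR2(1),    -- message type (E/W/...) from the BAPI return
      msg_text  VARCHAR2(220),  -- message text shown in the ALV report
      CONSTRAINT zpo_sim_result_pk PRIMARY KEY (banfn, bnfpo)
    );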

  • Creating datafile performance issues

    Hi guys,
    I'm using Oracle RAC 10g with 3 nodes, and I needed to create a new datafile in the production environment. I then ran into some performance issues; do you have any idea what the cause could be?
    CREATE TABLESPACE "TBS_TDE_CMS_DATA"
    DATAFILE
    '+RDATA' SIZE 1000M REUSE AUTOEXTEND ON NEXT 1000M MAXSIZE 30000M
    LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
    Any hint would be helpful.
    thanks.

    BrunoSales wrote:
    Hi guys,
    I'm using Oracle RAC 10g with 3 nodes, and I needed to create a new datafile in the production environment. I then ran into some performance issues; do you have any idea what the cause could be?
    CREATE TABLESPACE "TBS_TDE_CMS_DATA"
    DATAFILE
    '+RDATA' SIZE 1000M REUSE AUTOEXTEND ON NEXT 1000M MAXSIZE 30000M
    LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
    Any hint would be helpful.
    thanks.
    Why are you creating a 1 GB initial size? I think that is too big. Remember that whenever you add a datafile to the database, Oracle first formats the complete datafile, so this can cause a performance issue, depending on the datafile size. Check whether you face any I/O issues or whether there are spikes in I/O.
    You can try adding a datafile with default values and check how much time it takes.
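    For comparison, a hedged variant of the original statement with a smaller initial size (the values are illustrative): Oracle then formats only 100M up front and grows the file in 100M increments instead of formatting 1000M in one go.
    CREATE TABLESPACE "TBS_TDE_CMS_DATA"
    DATAFILE
    '+RDATA' SIZE 100M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE 30000M
    LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;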

  • Question about a view I created to fix the performance issues

    Dear all,
    I have an interesting problem. I created a view to help solve a certain performance issue I have been having with my query; see the view below:
    create or replace view view_test as
    Select trunc(c.close_date, 'YYYY-MM-DD') as close_date, t.names
    from tbl_component c, tbl_joborder t
    where c.t_id = t.p_id
    and c.type = 'C'
    group by trunc(c.close_date, 'YYYY-MM-DD'), t.names;
    I tried testing the view with the following syntax:
    select k.close_date, k.names from view_test k
    where k.names = 'Kay'
    and k.close_date between to_date('2010-01-01', 'YYYY-MM-DD') and to_date('2010-12-31', 'YYYY-MM-DD')
    However, I am getting the error message below:
    ORA-01898: too many precision specifiers
    I have googled it and tried a lot of things online, but unfortunately I can't fix the problem and I don't know why.

    Hi,
    Instead of
    trunc(c.close_date, 'YYYY-MM-DD')
    say
    trunc(c.close_date, 'DD')
    Oracle apparently gets confused when you ask it to truncate (or round) to two or more different units.
    If it were up to me, I would allow it, since truncating to the year implies truncating to the month and to the day (not to mention hours, minutes and seconds), but there may be reasons why that would be a pain to implement. (Maybe because there are units, such as 'WW' and 'IW', that are not neat sub-divisions of larger units.)
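    Putting the suggestion together, the corrected view would look like this ('DD' truncates each close_date to midnight of its own day):
    create or replace view view_test as
    select trunc(c.close_date, 'DD') as close_date, t.names
    from tbl_component c, tbl_joborder t
    where c.t_id = t.p_id
    and c.type = 'C'
    group by trunc(c.close_date, 'DD'), t.names;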

  • Performance issue Create table as select BLOB

    Hi!
    I have a performance issue when moving BLOBs between tables (the image files range from 2 MB to 10 MB in size).
    I'm using the following statement, for example:
    "Create table tmp_blob as select * from table_blob
    where blob_id = 333;"
    Are there any hints I can give when moving data like this, or is Oracle 10g better with BLOBs?

    Did you find a resolution to this issue?
    We are also having the same issue and are wondering if there is a faster mechanism to copy LOBs between two tables.
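    One commonly tried variant, as a sketch only (NOLOGGING affects recoverability, so check it against your backup strategy first; the LOB column name here is assumed): suppress redo on both the table and the LOB segment during the copy.
    CREATE TABLE tmp_blob NOLOGGING
    LOB (blob_col) STORE AS (NOCACHE NOLOGGING)
    AS SELECT * FROM table_blob WHERE blob_id = 333;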

  • IPhoto Performance issues at 5,000 photos, /15Gb

    My name is Mike and we have a 17” flat-panel iMac – 800MHz G4. We have an 80GB hard drive with 20GB still free. iPhoto takes ~15GB with a “mere” 5,000 photos, ~2,000 of which are from the 4 megapixel digital camera (2-3MB per pic).
    I have experienced severe performance issues in the past few months as we approached and surpassed 5,000 photos (I almost exclusively work with only iPhoto running). First-tier Apple 800-support suggests that I need more memory than the current 512MB. Process Viewer shows that iPhoto consumes 10-90% of CPU cycles and usually 40-55% of memory, with very infrequent “spikes” to 66.7% (only Process Viewer and iPhoto are running). iPhoto consumes ~33-37% of memory when “just sitting there” – when it’s open and “doing nothing else”… I’m just moving the mouse on the screen to keep the screen saver from kicking in.
    Does anybody agree with the assessment that we just need more memory? If it works and performance improves, do I have reason to believe that we will max out at ~10,000 photos (i.e. twice the memory leads to almost twice the pictures before performance degrades, drive size notwithstanding)?
    Also, if more memory is the solution, then what about the machine prevents memory consumption above 66.7%? I’d expect the spikes to be closer to 80% or 90%.
    Do newer iMacs have similar issues?
    What is the assumed size of the 25,000 pictures Apple notes as the amount all our iPods should be able to hold? If the photos are ~2.5MB apiece (4 megapixels have been “standard” since 25,000 was first mentioned, and are arguably “small” compared to the newer 5 and 6+ megapixel cameras flooding the stores today), 25,000 would consume ~62.5GB by my math – larger than all iPods.
    On a related note, are iMacs similar to other machines, in that their overall performance, iPhoto and otherwise, degrades as drive consumption approaches and surpasses 80% or 90%?
    The Apple Store personnel agreed to help me test and install the memory… I will just be “stuck with the memory” whether or not it works, and I’d like to avoid any unnecessary purchases.
    Any and all help will be appreciated.
    Thanks,
    Mike
    P.S. I'd prefer to discuss your insights on the phone. If you're willing, we can trade phone numbers via e-mail first. Mine is [email protected].

    Hi Michael,
    I have the same computer, same ram, 26 gigs free.
    I have a 3,000 image library at 3 gigs in size.
    I use a Canon camera, which does not create large Makernotes in the EXIF data.
    If you want to find out more about the large-Makernote problem, visit this thread:
    http://discussions.info.apple.com/webx?128@@.68b4c129
    - Will more memory help speed iPhoto up? It sure will, and many have noted that it has.
    - The images are converted to a smaller size before they are synced to the iPod.
    - There are steps you can take to speed up scrolling in the iPhoto Library:
    do not use shadows
    do not use outlines
    use white for background
    view by rolls and keep the rolls closed
    when viewing images keep them around 5 or less across in the rows
    choose not to "show photo count for albums"
    Since your images are quite large it is also advisable to make multiple libraries. Smaller libraries will load and work faster. Use the Option key when launching iPhoto to create and switch between libraries.
    You can also use iPhoto Library Manager or iPhoto Buddy to manage your libraries.
    iPhoto Library Manager
    iPhoto Library Manager documentation
    this page will show you everything you can do with iPhoto Library Manager.
    You can also try iPhoto Buddy
    You should have at least 15% free space on your hard drive. Things get slow after that.
    @}-}-- Lori

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using the transactional database (online real-time objects).
    The purpose of the report is to list all contact-related activities, plus the activities NOT related to a contact, by activity owner (user ID).
    In order to fulfill that requirement I've created 2 reports:
    1) All Activities related to Contact -- Report A
    pull in Activity ID, Activity Type, Status, Contact ID
    2) All Activities not related to Contact UNION All Activities related to Contact (Base report) -- Report B
    To get the list of activities not related to a contact, I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered performance issues due to an advanced filter in Analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    Union is always the last option. If you can get all the records in one report, do not use a union.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine the reports. Add a column with the following logic:
    if contact id is null (or = 'Unspecified') then owner name else contact name
    Hopefully this helps.
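    The suggested column formula, spelled out as a CASE expression (the field names are placeholders for the corresponding Activity subject area columns):
    CASE WHEN contact_id IS NULL OR contact_id = 'Unspecified'
         THEN owner_name
         ELSE contact_name
    END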

  • Interested in performance issues?  Read this!  If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
    What is the hardware bottleneck of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID installed, but not relevant for the present post...
    PSU
    Corsair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU, pushing it to its maximum speed to understand where the limit is and whether I can reach it, and I've logged all the results precisely in a graph (see picture 1).
    Intro: I tested my Xeon E5 2687W (8 cores with Hyper-Threading - 16 threads) to see whether programs can use all of it. I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
    Comment: I put 3 I/O counters (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disk, pushing it to its maximum speed to understand where the limit is, and I've logged all the results precisely in a graph (see picture 2).
    Intro: I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6GB/s, 5 disks, with FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
    The result: As you can see in picture 2, yes, I can max out my drive at ~1.2 Gb/sec read/write, steady!
    Comment: I put 3 I/O counters (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data over ~10 sec...
    (picture 2)
    Now I know my limits! It's time to dig deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All processes are idle except Adobe Media Encoder.
    The transfer rate seems to be a wave (up and down), probably caused by alternating phases (encode... write... encode... write...).  // It's OK, ~5Mb/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 39GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good; many threads are a sign that the program tries to use many cores!)
    GPU load on the card also seems to be a wave (up and down): ~40% GPU usage during the encoding process.
    GPU RAM usage is 1.2GB (no problem for the GTX 680, and no problem for the Quadro 6000 with 6GB of RAM!)
    Comment/Question: The CPU is free (50%), the disks are free (99%), the GPU is free (60%), the RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
    Other: The Quadro 6000 & GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result:
    My CPU is not used at 100%.
    My disk waves up and down, but stays far, far from its limit!
    All processes are idle except Adobe Media Encoder.
    The transfer rate waves up and down, probably caused by alternating phases (buffering... write... buffering... write...).  // It's OK, ~375Mb/sec peak transfer rate! Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 40.5GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good; many threads are a sign that the program tries to use many cores!)
    GPU load on the card = 0 (this kind of encoding is GPU-irrelevant).
    GPU RAM usage is 400Mb (not used for this encoding).
    Comment/Question: The CPU is free (65%), the disks are free (60%), the GPU is free (100%), the RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder, directly in my RamDrive.
    Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! Look in the same picture at the transfer rate I can reach with the RamDrive (> 3.0 Gb/sec steady), and I don't go under 30% of disk usage. The CPU is idle (70%), the disk is idle (100%), the GPU is idle (100%) and the RAM is free (63%).  // This kind of result leaves me REALLY confused. It smells like a bug and a big problem with hardware and I/O usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%.
    My disk is totally idle!
    All processes are idle except Adobe Media Encoder.
    The transfer rate waves up and down, probably caused by alternating phases (encoding... write... encoding... write...).  // It's OK, ~2Mb/sec transfer rate! A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 40GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's OK in a multi-threaded app!)
    GPU load on the card = 100 (this uses the maximum of my GPU).
    GPU RAM usage is 1Gb.
    Comment/Question: The CPU is free (70%), the disks are free (98%), the GPU is loaded (MAX), the RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest device (the video card's GPU).
    Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/Question: The CPU is free (55%), the disks are free (99%), the GPU is free (90%), the RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why? Adobe Premiere seems to have some bug in its thread management. My hardware is idle! I understand that AVCHD can be very difficult to decode, but where is the waste? My hardware is willing, but the software is not!
    (picture 7)
    Render composition using 3D Raytracer in After Effects CS6
    You can see the result in the picture.
    Comment: The GPU seems to be the bottleneck when using After Effects. The CPU is free (99%), the disks are free (98%), memory is free (60%), and it depends on the settings and the type of project.
    Other: The Quadro 6000 & GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance at the moment. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give really similar results (I will probably return my GTX 680, since I don't really get any better performance from it). I have not used a Tesla card with my Quadro but, at present, neither Premiere Pro nor After Effects uses multiple GPUs. I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless! Premiere Pro is not able to get the maximum performance out of my computer. Not just 10% or 20% short, but 60% on average. I'm a programmer; multi-threaded apps are difficult to manage, and I can understand Adobe's programmers. But if anybody has a comment about this post, a trick, or any kind of solution, please comment. It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D ray tracing in AE is not very good at using all the CUDA cores. In fact it is lousy: it uses only very few cores, the threading is pretty bad, and it does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on, and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above, which is the reason my own disk I/O result is only 33 seconds with the current test; but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB, has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • Performance Issue for BI system

    Hello,
    We are facing performance issues on our BI system. It's a pre-production system and its performance is degrading badly every day. While checking the system I found that the program buffer is swapping heavily, which is dragging down its hit ratio day by day. So I was asked to change the parameter abap/buffersize from 300 MB to 500 MB, but still no major improvement can be seen in the system.
    There is 16 GB of RAM available, and the server runs HP-UX with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
    The main problem is that running a report or creating a query takes way too long.
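    (For reference, abap/buffersize is set in the instance profile and is sized in KB, so the change described above corresponds to something like the following line; the exact value must be sized to the workload and the available memory.)
    abap/buffersize = 512000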
    Kindly help me.

    Hello Siva,
    Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and they look normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system at the moment, but in ST02 I can see the swaps are in red.
    Buffer              HitRatio %  Alloc. KB  Freesp. KB  % Free Sp.  Dir. Size  FreeDirEnt  % Free Dir    Swaps    DB Accs
    Nametab (NTAB)
      Table definition       99,60      6.798                              20.000                            29.532    153.221
      Field definition       99,82     31.562         784        2,61      20.000       6.222       31,11    17.246     41.248
      Short NTAB             99,94      3.625       2.446       81,53       5.000       2.801       56,02         0      2.254
      Initial records        73,95      6.625         998       16,63       5.000         690       13,80    40.069     49.528
    Program                  97,66    300.000       1.074        0,38      75.000      67.177       89,57   219.665    725.703
    CUA                      99,75      3.000         875       36,29       1.500       1.401       93,40    55.277      2.497
    Screen                   99,80      4.297       1.365       33,35       2.000       1.811       90,55       119      3.214
    Calendar                100,00        488         361       75,52         200          42       21,00         0        158
    OTR                     100,00      4.096       3.313      100,00       2.000       2.000      100,00         0
    Tables
      Generic Key            99,17     29.297       1.450        5,23       5.000         350        7,00     2.219  3.085.633
      Single record          99,43     10.000       1.907       19,41         500         344       68,80        39    467.978
    Export/import            82,75      4.096          43        1,30       2.000         662       33,10   137.208
    Exp./Imp. SHM            89,83      4.096         438       13,22       2.000       1.482       74,10         0
    (The Program buffer row, highlighted in the original post, is the one with the 219.665 swaps mentioned above.)
    SAP Memory        Curr. Use %  CurUse[KB]  MaxUse[KB]  In Mem[KB]  OnDisk[KB]  SAPCurCach  HitRatio %
    Roll area                2,22       5.832      22.856     131.072     131.072  IDs              96,61
    Page area                1,08       2.832      24.144      65.536     196.608  Statement        79,00
    Extended memory         22,90     958.464   1.929.216   4.186.112           0                    0,00
    Heap memory                             0           0   1.473.767           0                    0,00
    Call Stati       HitRatio %   ABAP/4 Req  ABAP Fails  DBTotCalls  AvTime[ms]  DBRowsAff.
      Select single       88,59   63.073.369   5.817.659   4.322.263           0  57.255.710
      Select              72,68  284.080.387           0  13.718.442           0  32.199.124
      Insert               0,00      151.955       5.458     166.159           0     323.725
      Update               0,00      378.161      97.884     395.814           0     486.880
      Delete               0,00      389.398     332.619     415.562           0     244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM
