Using a HashMap for caching -- performance problems

Hello!
1) DESCRIPTION OF MY PROBLEM:
I was planning to speed up computations in my algorithm by adding a caching mechanism based on a Java HashMap. But it does not work as intended; in fact, performance decreased.
My task is to compute conditional probabilities P(result | event), given an event (e.g. "rainy day") and a result of a measurement (e.g. "degree of humidity").
2) SITUATION AS AN EXCERPT OF MY CODE:
Here is an abstraction of my code:
====================================================================================================================================
int numOfEvents = 343;
// initialize the cache for precomputed probabilities
Map<String, Map<String, Double>> precomputedProbabilities =
        new HashMap<String, Map<String, Double>>(numOfEvents);

// Given a combination of an event and a result, test whether the conditional
// probability has already been computed
if (this.precomputedProbabilities.containsKey(eventID)) {
    if (this.precomputedProbabilities.get(eventID).containsKey(result)) {
        return this.precomputedProbabilities.get(eventID).get(result);
    }
} else {
    // initialize a new hash map to maintain the result -> probability mappings
    Map<String, Double> resultProbs4Event = new HashMap<String, Double>();
    precomputedProbabilities.put(eventID, resultProbs4Event);
}
// unless the cache short-cut above applied, we really have to compute the
// conditional probability for this specific combination of event and result:
// ... make the necessary computations to fill the variable "condProb" ...
// store it in the cache
precomputedProbabilities.get(eventID).put(result, condProb);
====================================================================================================================================
3) FINAL COMMENTS
After introducing this caching mechanism I encountered a severe decrease in the performance of my algorithm. In total there are over 94 million combinations for which the conditional probabilities have to be computed, but there is a lot of redundancy in this set of feasible combinations: it boils down to only about 260,000 distinct combinations that have to be captured in the caching structure. I therefore expected a significant increase in performance.
What am I doing wrong? Or is the overhead of a nested HashMap really that severe? The computation of the conditional probabilities involves only basic operations.
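For comparison, the two chained lookups could be collapsed into a single HashMap with a composite key. This is only a minimal sketch, not my original code: it assumes eventID and result are the method's String inputs, that the separator character '|' occurs in neither of them, and computeCondProb() is a hypothetical stand-in for the existing computation.

// Minimal sketch: one flat HashMap keyed by "eventID|result" instead of a
// HashMap of HashMaps (assumes java.util.Map and java.util.HashMap imports)
private final Map<String, Double> condProbCache =
        new HashMap<String, Double>(600000);

double cachedCondProb(String eventID, String result) {
    String key = eventID + '|' + result;  // assumes '|' occurs in neither part
    Double cached = condProbCache.get(key);
    if (cached != null) {
        return cached.doubleValue();
    }
    double condProb = computeCondProb(eventID, result); // hypothetical placeholder
    condProbCache.put(key, condProb);
    return condProb;
}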
Only for those who are interested in more details
4) DEEPER CONSIDERATION OF THE PROCEDURE
Each defined event stores a list of associated results. These result lists contain 7 items on average. To actually compute the conditional probability for a combination of an event and a result, I have to run through the results list of that event and perform a Java "equals" operation for each list item in order to determine the relative frequency of the result item at hand. So, without caching, I estimate that on average I perform:
7 "equal"-operations (--> to compute the number of occurences of this result item in a list of 7 items on average)
plus
1 double fractions (--> to compute a relative frequency)
for 94 million combinations.
Considering the computation for one combination (event, result), this means weighing the overhead of the lookup operations in the nested HashMap against the computational cost of 7 "equals" operations plus one double division.
I would have expected the lookups to be the cheaper side.
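To sanity-check that estimate, one could time the two paths directly. The following micro-benchmark is only a self-contained sketch with made-up key strings; note that it reuses the same String instances throughout, which favors the cache, because a String caches its hash code after the first use, while the real program may construct fresh keys.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CacheVsCompute {
    public static void main(String[] args) {
        String prefix = "some/long/common/prefix/shared/by/all/ids/";
        String eventID = prefix + "event42";
        String result = prefix + "result3";

        // the list of about 7 results associated with the event
        List<String> resultsList = new ArrayList<String>();
        for (int i = 0; i < 7; i++) {
            resultsList.add(prefix + "result" + i);
        }

        // the nested cache, pre-filled for this one combination
        Map<String, Map<String, Double>> cache =
                new HashMap<String, Map<String, Double>>();
        Map<String, Double> inner = new HashMap<String, Double>();
        inner.put(result, Double.valueOf(1.0 / 7.0));
        cache.put(eventID, inner);

        int n = 10000000;
        double sink = 0.0; // keeps the JIT from discarding the work

        // path 1: 7 "equals" operations plus one double division
        long t0 = System.nanoTime();
        for (int k = 0; k < n; k++) {
            int occurrences = 0;
            for (int i = 0; i < resultsList.size(); i++) {
                if (result.equals(resultsList.get(i))) {
                    occurrences++;
                }
            }
            sink += (double) occurrences / resultsList.size();
        }
        long computeNs = System.nanoTime() - t0;

        // path 2: two chained HashMap lookups
        t0 = System.nanoTime();
        for (int k = 0; k < n; k++) {
            sink += cache.get(eventID).get(result).doubleValue();
        }
        long lookupNs = System.nanoTime() - t0;

        System.out.println("compute: " + (computeNs / 1000000) + " ms, "
                + "lookup: " + (lookupNs / 1000000) + " ms (sink=" + sink + ")");
    }
}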
Best regards!
Edited by: Coding_But_Still_Alive on Sep 10, 2008 7:01 AM

Hello!
Thank you very much! I have performed several optimization steps, but caching is still slower than not caching. This may be due to the fact that the eventIDs and results all share long common prefixes; I am not sure how this affects the cost of computing the hash values.
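For what it's worth, java.lang.String computes its hash code from all characters but caches it in the instance after the first call, whereas equals() must scan past the shared prefix on every comparison. A small self-contained sketch with made-up strings:

public class PrefixCost {
    public static void main(String[] args) {
        // two keys sharing a 10000-character common prefix
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            sb.append('x');
        }
        String a = sb.toString() + "A";
        String b = sb.toString() + "B";

        long t0 = System.nanoTime();
        int h = a.hashCode();              // first call: computes and caches
        long firstHash = System.nanoTime() - t0;

        t0 = System.nanoTime();
        h += a.hashCode();                 // later calls: cached field read
        long cachedHash = System.nanoTime() - t0;

        t0 = System.nanoTime();
        boolean eq = a.equals(b);          // scans the whole common prefix
        long equalsNs = System.nanoTime() - t0;

        System.out.println("first hashCode: " + firstHash + " ns, cached: "
                + cachedHash + " ns, equals: " + equalsNs
                + " ns (h=" + h + ", eq=" + eq + ")");
    }
}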
* Attention: result and eventID are given as input of the method

Map<String, Map<String, Double>> precomputedProbs =
        new HashMap<String, Map<String, Double>>(1200);

Map<String, Double> results2Probs = this.precomputedProbs.get(eventID);
if (results2Probs != null) {
    Double prob = results2Probs.get(result);
    if (prob != null) {
        return prob;
    }
} else {
    // so far there are no conditional probs for the annotated results of this
    // event; initialize a new hash map to maintain the mappings
    results2Probs = new HashMap<String, Double>(2000);
    precomputedProbs.put(eventID, results2Probs);
}
// Later, once the conditional probability has been computed, use the
// initialized map to save one "get" operation on "precomputedProbs": the
// variable results2Probs still holds a reference to the inner
// HashMap<String, Double> entry of "precomputedProbs"
results2Probs.put(result, condProb);

And... because it was asked for, here is the computation of the conditional probability in detail:
* Attention: result and eventID are given as input of the method

// the computed conditional probability
double condProb = -1.0;
int occurrence_count = 0;
List<String> resultsList = (List<String>) this.eventID2resultsList.get(eventID);
if (resultsList != null) {
    // listSize is expected to be about 7 on average
    int listSize = resultsList.size();
    // sanity check
    if (listSize > 0) {
        // check whether the given result is defined in the list of defined results
        if (this.definedResults.containsKey(result)) {
            // analyze the list for matching results
            for (int i = 0; i < listSize; i++) {
                if (result.equals(resultsList.get(i))) {
                    occurrence_count++;
                }
            }
            if (occurrence_count == 0) {
                condProb = 0.0;
            } else {
                condProb = ((double) occurrence_count) / ((double) listSize);
            }
            // store the conditional prob. for this specific event/result
            // combination; results2Probs still holds a reference to the inner
            // HashMap<String, Double> entry of "precomputedProbs"
            results2Probs.put(result, condProb);
            return condProb;
        } else {
            // mark that the result is not part of the list of defined results
            return -1.0;
        }
    } else {
        throw new NullPointerException("Unexpected happening. Event " + eventID
                + " contains no result definitions.");
    }
} else {
    throw new IllegalArgumentException("unknown event ID: " + eventID);
}

I have performed tests on a reduced input data set: only 100,000 result instances instead of about 250K. This means 343 * 100K = 34.3 million calls of my method per iteration of the algorithm, and I performed 20 iterations. With caching it took 8 min 5 sec; without caching, only 7 min 5 sec.
I also tried to profile the HashMap lookup operations, but they took less than a millisecond each, just like the operations for computing the conditional probability. So comparing the two at millisecond granularity gave no additional insight.
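To get below millisecond resolution, the timings can be accumulated inside the method with System.nanoTime() and reported once at the end. A minimal sketch along the lines of my code above (it reuses the precomputedProbs field and the java.util imports; computeCondProb() is again a hypothetical stand-in for the actual computation):

// Minimal sketch: accumulate nanosecond timings across all calls, since a
// single lookup is far below 1 ms resolution
private long lookupNanos = 0L;
private long computeNanos = 0L;

double condProbInstrumented(String eventID, String result) {
    long t0 = System.nanoTime();
    Map<String, Double> results2Probs = this.precomputedProbs.get(eventID);
    Double prob = (results2Probs == null) ? null : results2Probs.get(result);
    lookupNanos += System.nanoTime() - t0;
    if (prob != null) {
        return prob.doubleValue();
    }
    t0 = System.nanoTime();
    double condProb = computeCondProb(eventID, result); // hypothetical placeholder
    computeNanos += System.nanoTime() - t0;
    if (results2Probs == null) {
        results2Probs = new HashMap<String, Double>();
        this.precomputedProbs.put(eventID, results2Probs);
    }
    results2Probs.put(result, condProb);
    return condProb;
}

void reportTimings() {
    System.out.println("lookup total: " + (lookupNanos / 1000000) + " ms, "
            + "compute total: " + (computeNanos / 1000000) + " ms");
}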
Edited by: Coding_But_Still_Alive on Sep 11, 2008 9:22 AM
Edited by: Coding_But_Still_Alive on Sep 11, 2008 9:24 AM

Similar Messages

  • Problem setting up data collection for a performance problem

    We have a performance problem in our production environment: there is a query that takes long to display, especially on specific records involving a lot of details. I had the idea to set up data collection to investigate the problem.
    I used the wizard to set up data collection with default settings.
    I had created a database PERF_DW on our test server for that purpose.
    I didn't have any problem with the wizard: it created data collection sets, SQL Server Agent jobs, and probably many other objects. But I must have missed something, because I never got the chance to specify the database I wanted to tune,
    and even when I started data collection, I couldn't figure out where the reports were.
    Then I thought I had done it all wrong, so I stopped data collection, dropped the database, and tried to delete the jobs that had been created, and I got the following message:
    TITLE: Microsoft SQL Server Management Studio
    Delete failed for Job 'collection_set_1_noncached_collect_and_upload'. (Microsoft.SqlServer.Smo)
    For help, click:
    http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.2100.60+((SQL11_RTM).120210-1917+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Supprimer+Job&LinkId=20476
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    The DELETE statement conflicted with the REFERENCE constraint "FK_syscollector_collection_sets_collection_sysjobs". The conflict occurred in database "msdb", table "dbo.syscollector_collection_sets_internal", column 'collection_job_id'.
    The statement has been terminated. (Microsoft SQL Server, Error: 547)
    For help, click:
    http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=11.00.3128&EvtSrc=MSSQLServer&EvtID=547&LinkId=20476
    BUTTONS:
    OK
    Now I understand that the collectors prevent the jobs from being deleted but I can't delete the collectors.
    What should I do?
    Any help is appreciated.
    Thank you for your advice.
    Sylvie P

    Hello,
    Please refer to the following article.
    http://blogs.msdn.com/b/sqlagent/archive/2011/07/22/remove-associated-data-collector-jobs.aspx
    You can run the dccleanup.sql script mentioned in the article.
    Another option is using sp_syscollector_cleanup_collector.
    http://blogs.msdn.com/b/sqlagent/archive/2012/04/05/remove-associated-data-collector-jobs-in-sql-2012.aspx
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Asking for some guidance for a performance problem

    Hello,
    We found some very slow SQL queries, and they all share one pattern: a large "query" count in the trace file.
    For example:
    call       count      cpu    elapsed    disk      query  current    rows
    Parse          1     0.00       0.00       0          0        0       0
    Execute      150     0.05       0.09       0          0        0       0
    Fetch        300   442.38     464.74     170   86428872        0     150
    total        460   442.43     464.83     170   86428872        0     150
    a. 300 Fetch to get only 150 rows back
    b. 170 disk visits, 86428872 consistent reads
    Since it is a complicated application (indexes/stats are all up to date), I just want to ask for some general guidance on investigating the problem.
    1. What/where should I look for?
    2. Any possible explanation for the huge query number?

    "query" = consistent gets, i.e. the reads done to keep the result set consistent.
    It could be that the query runs a long time and thus has to read from undo a lot.
    Can you give us a peek at the SQL and plan? I'd imagine it's doing some full table scans. If it's OLTP SQL then you might be able to get a better plan, but if it's a report that's scanning every row in a table you might be stuck with it.

  • Is there a way to make Aperture use more RAM for better performance?

    I know in Photoshop or Lightroom (I forget which) you can designate the amount of RAM the program uses. Is it possible to do this in Aperture? If not, is there any other way. I just want it so that when I'm using Aperture, it designates most system resources to Aperture. Thanks.

    I think the RAM designation in Ps or Lr is rather a limit/maximum that you can set than what it will/must use (perhaps in order to keep other programs running smoothly).
    If you simply don't run any other programs, Aperture should get all the attention it can get from your computer. It may not be able to use all your RAM if you have 8 GB, but it will use what it needs.
    I believe there are some terminal commands that can assign CPU priority to certain processes, but I'm not entirely sure how that works or whether it helps; just search for it on the web.

  • Variable selection issue can we use variable exit for the below problem

    Hi experts,
    I have a query with an InfoObject (a characteristic). I have set the sort property for that InfoObject, but when the variable screen comes up and we go into the selection screen, the help values are not sorted the way I set them. For example,
    I have project managers like those below:
    A
    B
    C
    D
    E
    F
    G
    H
    Instead of displaying them in the order above, it displays them as:
    H
    A
    G
    C
    B
    D
    F
    How do I set this right?
    thanks and regards
    Message was edited by:
            Neel Kamal

    Hi
    Actually, I have done the same thing; it is displayed correctly in the BEx Analyzer but not in the portal.
    thanks and regards
    Message was edited by:
            Neel Kamal

  • Use a HashMap, a TreeMap, or just an array?

    Hi,
    I have a fixed number of graph nodes, let's say 100,000.
    Each of these nodes should have its own id; I would just take the node number, incrementing as I go, as the id.
    Suggestion a)
    If I store them in a HashMap, with the id as the key:
    nodeCounter = 0;
    HashMap h = new HashMap(100000);
    h.put(new Integer(nodeCounter++), nodeInstance);
    // ...put in all nodes
    -> Searching for a node would be constant time, O(1)
    -> What about insertion, also O(1)?
    Suggestion b)
    If I store them in a TreeMap, I would also have the key as the id and put in all
    the nodes. Since the key is just a unique Integer from 1 to 100,000, a
    comparator can be used to keep the red-black tree sorted.
    -> Searching for a node would cost O(log(n))
    -> Inserting a node would cost O(log(n))
    Suggestion c)
    Since a node is represented on screen just by its id and a fixed String, "Node" + "1" -> "Node 1", I thought of using a simple array to store the nodes, since each index of the array is just the id of the node.
    -> Finding a node costs O(1)
    -> Inserting a node dynamically is not possible
    My preferred suggestion is a), but only if both insertion and lookup of a node are constant time, O(1).
    Is it an advantage for a TreeMap to keep the elements sorted, compared to a HashMap which keeps them unordered?
    What do you think?
    Do you have any good advice for me, or any other good alternative for solving this problem with the best performance?
    Thanks a lot for your answer!
    Message was edited by:
    Cinimood

    OK, thanks for your answer. I will describe the whole problem I want to solve:
    Given is an undirected graph of nodes, let's say a network graph.
    This graph contains about 1000 nodes (fewer, or 2000 or 3000, is also possible), where the nodes are linked to each other;
    it could be a full mesh in the worst case. Each link is assigned a weight.
    The features this graph should provide are:
    - adding a node when the graph has already been created, i.e. is already represented by the data structure
    - searching for a link between two nodes
    A graph is best represented by a data structure using an adjacency matrix or an adjacency list.
    For searching for a link between two nodes, the adjacency matrix provides the best performance at just O(1). But the adjacency
    matrix needs O((n^2)/2) memory -> divided by 2 because the graph is undirected, O(n^2) for a directed graph.
    OK, using an array like
    Node[] nodes = new Node[1000];
    is of course best for retrieving each node by just its id, which is equivalent to its index -> nodes[1] has id 1.
    But if I am using an array, I cannot add another node dynamically.
    (Note: I need Node instances because each node holds its x and y coordinates for displaying, which is not important
    now.)
    Now I am thinking of a solution like this, with focus on the adjacency matrix and HashMap, without regarding the adjacency list:
    Use the adjacency matrix for searching for the specific link between two nodes, because of its good performance of O(1). Because the graph is undirected, I only need the upper (or lower) part of the diagonal of the matrix,
    thus O((n^2)/2) memory.
    Use a HashMap for the nodes, where the key of a node's entry in the HashMap is just its ID:
    nodeMap.put(new Integer(nodeCounter++), nodeInstance);
    Use a HashMap for the links between the nodes, where a link is represented by the class Link, which holds just the weight.
    To identify a link, use an ID which consists of the concatenation row+column of the adjacency matrix, e.g.
    row = 1, column = 2 -> ID = 12
    linkMap.put(new Integer(row + column), new Link(weight));
    -> If I want to insert a weighted link between node 2 and node 5, I just modify the Link object in the linkMap which has the
    key 2+5 -> 25.
    That's what I want to do, and what keeps me thinking about performance,
    because a lot of nodes might exist and searching, deleting and inserting must be quick.
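    One caveat with the ID scheme above: new Integer(row + column) adds the numbers arithmetically, so nodes 2 and 5 give the key 7, not 25, and even concatenating decimal digits collides (row 1/column 23 versus row 12/column 3). A minimal sketch of a collision-free, order-independent key, assuming node ids stay below a fixed maximum (class and method names here are just illustrative):

    import java.util.HashMap;
    import java.util.Map;

    public class LinkMap {
        private static final int MAX_NODES = 100000; // upper bound on node ids

        private final Map<Long, Double> weights = new HashMap<Long, Double>();

        // normalize (min, max) so the key is the same for (a, b) and (b, a),
        // then combine; unique for 0 <= lo, hi < MAX_NODES
        private long key(int a, int b) {
            int lo = Math.min(a, b);
            int hi = Math.max(a, b);
            return (long) lo * MAX_NODES + hi;
        }

        public void putWeight(int a, int b, double w) {
            weights.put(Long.valueOf(key(a, b)), Double.valueOf(w));
        }

        public Double getWeight(int a, int b) {
            return weights.get(Long.valueOf(key(a, b)));
        }
    }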

  • 10.6 Performance Problems

    Although I installed Snow Leopard from scratch, I encountered severe performance problems after a while. Copying a file of some 100 MB, for instance, took minutes instead of seconds. Switching between windows took a long time. Processing was interrupted by wait loops every few seconds. And so on.
    I looked around in various forums to find hints how to solve this problem, but nothing worked. The activity monitor doesn't show anything unusual; from its point of view everything is fine.
    In the meantime, I reinstalled Snow Leopard from scratch again. After installing iLife 08, I now have the impression that some Finder operations are getting slower again. This may be a clue to the reason for the performance problems. However, it only affects file copying times, not application performance, so it does not explain the full picture.
    So my question: does anyone else having performance problems with SL have similar observations in combination with iLife 08? Has anyone had similar performance problems and solved them?
    Regards,
    Hardy

    Sometimes the performance of the system is impacted by permission errors. I would recommend running Disk Utility and repairing permissions; also, just in case, check the disk to make sure you don't have any bad sectors. You can also use a system utility to optimize system performance. Onyx is a good utility that is also free; just make sure to download the appropriate version for your system. http://www.titanium.free.fr/pgs2/english/download.html

  • Critical performance problem upon bulk load of groups

    All (including product development),
    I think there are missing indexes in wwsec_flat$ and wwsec_sys_priv$. Anyway, I'd like assistance on fixing the critical performance problems I see, properly. Read on...
    During and after bulk load of a few (about 500) users and groups from an external database, it becomes evident that there's a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards the machine went 100% CPU just from logging in with the portal30 user (which happens to be group owner for all the groups).
    Running SQL trace points in the directions of the following SQL statement:
    SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
    DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
    LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
    EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
    CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
    WWPOB_PAGE$ WHERE ID = :b1
    I checked the existing indexes, and see that the following ones are missing (I'm about to test with these, but have not yet done so):
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
    "GRANTEE_TYPE", "OWNER", "NAME", "OBJECT_TYPE_NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
    This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I have observed it during the bulk load on my NT laptop, but so far have not had the time to test further.).
    Also note: In the call to addGroupToList, I set owner to true for all groups.
    Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
    Error: Problem calling addGroupToList for child group'Marketing' (8030), list 'NO_OSL_Usenet'(8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
    Please help. If you like, I can supply the tables and the Java program that I use. It's fully reproducible.
    Thanks,
    Erik Hagen (you may call me on +47 90631013)
    null

    YES!
    I have now tested with insertion of the missing indexes. It seems the call to addGroupToList takes just as long time as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones that are there in Portal 3.0.8, but I guess some of those could have been deleted).
    About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause of the error messages and maybe also of the performance problem during bulk load (I'll look into it as soon as possible and report what I find.).
    Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program), that will let anybody interested recreate the problem. Mail your interest to [email protected].
    ============================================
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
    ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
    ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
    ON PORTAL30.WWSEC_PERSON$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
    ON PORTAL30.WWSEC_PERSON$("USER_NAME")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
    "SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
    ON PORTAL30.WWSEC_FLAT$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
    ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
    ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
    "NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
    "GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    ==================================
    Thanks,
    Erik Hagen
    null

  • Performance Problem  while signing into Application

    Hello
    Could someone please shed some light on which area I can look into for my performance problem? It's an E-Business Suite version 11.5.10.2 which was upgraded from 11.5.8.
    The problem: when the sign-in page is displayed and the user name/password is entered, it takes forever for the system to actually log the user in. Sometimes I have to click twice on the Sign In button.
    I have run purge sign-on audit / purge concurrent request/manager logs / gather schema stats, but it's still slow. Is there any way to check whether the middle tier is the bottleneck?
    Thanks
    Nini

    Can you check whether the profile option FND%diagnostic% is enabled or not?
    fadi

  • ESS Performance Problem

    Hi,
    I'm using ESS and I have performance problems with one application: the Per_Bank_ES application takes a lot of time to respond. I've seen that it spends most of that time in the J2EE engine. Other ESS Web Dynpro applications have no performance problems; they work fine.
    Using sap.session.ssr.showInfo=true parameter we get:
    Browser: 531; J2EE: 767094; Backend: 44688
    What can I do, since this application is a standard SAP Web Dynpro application?
    Does anyone have a similar problem?
    Thanks

    Hi,
    It looks like the problem is in the backend; check the backend application performance (check the transaction related to Per_Bank_ES).
    Regards
    Ben

  • Performance problems when using Premiere Elements for photo slideshows

    Hello,
    I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip and I ran into some serious performance problems.  I had used it to create similar projects before, but not nearly as big.  This one has about 260 photos, so basically it is 260 separate clips.  I have a POWERHOUSE workstation (see below) so it isn't my PC.  Even when PE9 crashes, my performance monitor shows that my CPU and RAM aren't even halfway utilized.  I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really.  I'm wondering if I was just using PE9 for something other than what it was designed for, since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense.  Based on my experience with this so far, I can't imagine using PE9 anymore for anything, really.  I might need a more professional video editing program in the near future, although PE does seem to have a lot of features.  How can I make sure it utilizes my workstation to its full potential?  Here are my specs:
    PC
    Intel Core i7-2600K 4.6 GHz Overclocked
    ASUS P8P67 Deluxe Motherboard
    AMD Firepro V8800 Video Card
    Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
    Corsair Vengeance 16GB (4x4GB) Memory
    Corsair H60 Liquid CPU Cooler
    Corsair Professional Series Gold AX850 Power Supply
    Graphite Series 600T Mid-Tower Case
    Western Digital Caviar Black 1 TB SATA III Hard Drive
    Western Digital Caviar Black 2 TB SATA III Hard Drive
    Western Digital Green 3 TB SATA III Hard Drive
    Logitech Wireless Gaming Mouse G700
    I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
    Wacom Intuos5 Pen Tablet
    Yes, this system is blazingly fast.  I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time.  HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, every time while my system wasn't even close to being fully taxed.
    Monitors – All run on the ATI V8800
    Dell Ultra Sharp 30 inch
    Samsung 27 Inch
    HAANS-G 28 Inch
    Herman Miller Embody Ergonomic Chair (one of my favorite items)

    Andy,
    There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
    As of PrPro CS 5, two things changed:
    The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
    The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
    Now, there CAN be another consideration, between the two programs, in that PrPro CS 5 - CS 6, are 64-bit ONLY, so one benefits from the computer and OS to run it. PrE can be either 32-bit, or 64-bit, so one might, or might not, be taking advantage of the 64-bit program and OS. Still, the processing overhead will be almost identical, it's just that the 64-bit OS can spread it around a bit.
    I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as is possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images, that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine, and PrPro some years ago, I could ONLY work with up to 5 - 4096 x 4096 Stills, before things ground to a crawl.
    Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
    Good luck, and hope that helps,
    Hunt

  • Performance problem using OBJECT tag

    I have a performance problem using the Java plugin and was wondering if anyone else has seen the same thing. I have a rather complex applet that interacts with JavaScript in a web page using the LiveConnect API. The applet both calls JavaScript in the page and is called by JavaScript.
    I'm using IE6 with the Java plugin that ships with the 1.4.2_06 JVM. I have noticed that if I deploy the applet using the OBJECT tag, the application seems to thrash every time I call a Java method on the applet from JavaScript. When I deploy the same applet using the APPLET tag, the performance is much better. I would like to use the OBJECT tag because the applet behaves better and I have more control over the caching.
    This problem seems to be on the boundaries of IE6, JScript, the JVM and my applet (and I suppose any could be the real culprit). My application is IE5+ specific, so I cannot test the applet in isolation from the surrounding HTML/JavaScript (for example in another browser).
    Does anyone have any idea?
    thanks in advance.
    dennis.


  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s) and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are
    persistent cache across each app server -> cluster table,
    update cache in delta process is checked ->group on infoprovider type
    use cache despite virtual characteristics/key figs is checked (one InfoCube has 1 virtual key figure which should have a static result for a day)
    => Do you know how I can get more details than what's in 0TCT_C02 to break down the read and write cache event times, or do you have any recommendation?
    I have checked, and no data loads were in progress on the InfoProviders and no master data loads (change run). Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • How do I access "Firefox is NOT compatible with this application. For best performance, please use Internet Explorer 5.0 and above...." web sites; when I try to download any alternate browser, then a warning that alternate is "incompatible with your opera

    How do I access websites that warn: "Firefox is NOT compatible with this application. For best performance, please use Internet Explorer 5.0 and above...."? When I try to download any alternate browser, all I get is another warning that the alternate is "not compatible with your operating system." Is Firefox preventing this? The site listed below is a job application site. I've had this same problem with other job application sites also.
    == URL of affected sites ==
    https://storefront.kenexa.com/lithia/cc/Home.ss

    There should be a User Agent Switcher menu item under Tools, which gives you the browser names you can impersonate.
    The menu item name changes to the browser UA you are presently using.
    There is also a User Agent Switcher button, you can add it using View -> Toolbars -> Customize, and dragging the button to your toolbar.
    See http://chrispederick.com/work/user-agent-switcher/features/ and http://chrispederick.com/work/user-agent-switcher/help/
    You can just start trying IE versions (or the versions it says on the site) until it lets you in.

  • Performance Problems with "For all Entries" and a big internal table

    We have big performance problems with the following statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    The internal table gt_zmon_help contains over 1,000,000 entries.
    Does anyone have an idea how to improve the performance?
    Thank you!

    >
    Matthias Weisensel wrote:
    > We have big Performance Problems with following Statement:
    >
    >  
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
    >   FOR ALL ENTRIES IN gt_zmon_help
    >     WHERE
    >     status = 'IAI200' AND
    >     logdat IN gs_dat AND
    >     ztrack = gt_zmon_help-ztrack.
    >
    > In the internal table gt_zmon_help are over 1000000 entries.
    > Anyone an Idea how to improve the Performance?
    >
    > Thank you!
    You can't expect miracles.  With over a million entries in your itab any select is going to take a bit of time. Do you really need all these records in the itab?  How many records is the select bringing back?  I'm assuming that you have got and are using indexes on your ZEEDMT_ZMON table. 
    In this situation, I'd first of all try to think of another way of running the query and restricting the amount of data, but if this were not possible I'd just run it in the background and accept that it is going to take a long time.
