How to Improve Local Index Performance?

I need to store telecom CDR data, which has the following fields:
CDR_DATE => Date
Telephone_Num => Varchar2(20)
A => Varchar2(40)
B => Varchar2(10)
The input data volume is very high: at present, 100 million records/day.
So I created an Oracle partitioned table with date range partitioning.
The application will always run one type of query:
select * from CDR where Telephone_Num='&TNUM' AND CDR_DATE between JAN09 AND MAR09;
Question 1
What is the best way to create the index, so that it provides the best query performance without degrading the daily loading of data?
Question 2 - For this I created the LOCAL index:
Create Index ABC ON CDR (CDR_DATE,Telephone_Num) LOCAL;
Data fetching uses the index, but I can see in the trace that the counts of CONSISTENT GETS and PHYSICAL READS are very high.
So please suggest whether creating a LOCAL index is a wise decision or not, or whether there is a better way.
Thanks in advance.
Sumit
Edited by: Sumit2 on Jul 31, 2010 6:27 PM

Sumit2 wrote:
The input data volume is very high: at present, 100 million records/day.
So I created an Oracle partitioned table with date range partitioning.
The application will always run one type of query:
select * from CDR where Telephone_Num='&TNUM' AND CDR_DATE between JAN09 AND MAR09;
Question 1
What is the best way to create the index, so that it provides the best query performance without degrading the daily loading of data?
Question 2 - For this I created the LOCAL index:
Create Index ABC ON CDR (CDR_DATE,Telephone_Num) LOCAL;
Data fetching uses the index, but I can see in the trace that the counts of CONSISTENT GETS and PHYSICAL READS are very high.
You've created the index with the columns in the wrong order - your equality condition is on telephone number and the range-based condition is on the date, so you need the telephone number first in the index.
In fact, if your query is always going to be for whole days, you might as well exclude the date column from the index because it adds no precision to the query.
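As a sketch of both variants (keeping the table and index names from the original post; these are alternatives, not two indexes to create together):
-- equality column leading, range column trailing
Create Index ABC ON CDR (Telephone_Num, CDR_DATE) LOCAL;
-- or, if the date range always covers whole partitions, partition
-- pruning already supplies the date precision and the index can be:
Create Index ABC ON CDR (Telephone_Num) LOCAL;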
Another option to consider is to create the table as an index-organized table that starts with a primary key of (telephone number, cdr_date) so that all the data for a given telephone number is contained within a very small number of index leaf blocks. (However, if you rarely have more than a couple of calls per number per day then the supporting strategies for this approach will cost more than the benefit you get from building the data structure to match the query requirements.)
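A minimal sketch of that approach, using the column definitions from the original post; the Call_Seq tiebreaker column is hypothetical, added only because a primary key must be unique and (Telephone_Num, CDR_DATE) alone may not be:
Create Table CDR (
  Telephone_Num  Varchar2(20),
  CDR_DATE       Date,
  A              Varchar2(40),
  B              Varchar2(10),
  Call_Seq       Number,  -- hypothetical tiebreaker to make the key unique
  Constraint CDR_PK Primary Key (Telephone_Num, CDR_DATE, Call_Seq)
)
Organization Index
Partition By Range (CDR_DATE) (
  Partition P_JAN09 Values Less Than (TO_DATE('01-02-2009','DD-MM-YYYY')),
  Partition P_FEB09 Values Less Than (TO_DATE('01-03-2009','DD-MM-YYYY'))
);
(For a range-partitioned IOT the partitioning column must be part of the primary key, which CDR_DATE is here.)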
As far as data loading is concerned, your best strategy is to look at playing games with local indexes and partition exchange.
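A rough sketch of a daily exchange (the partition and staging table names are hypothetical):
-- 1. load the day's data into a standalone staging table (e.g. direct path)
Create Table CDR_STAGE As Select * From CDR Where 1 = 0;
-- ... load CDR_STAGE via SQL*Loader / external table ...
-- 2. index the staging table to match the local index definition
Create Index CDR_STAGE_IX On CDR_STAGE (Telephone_Num, CDR_DATE);
-- 3. swap it into the partitioned table as a metadata-only operation
Alter Table CDR Exchange Partition P_20100731 With Table CDR_STAGE
  Including Indexes Without Validation;
This way the big table's local index partitions are never maintained row by row during the load.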
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
+"Science is more than a body of knowledge; it is a way of thinking"+
+Carl Sagan+

Similar Messages

  • How to improve query & loading performance.

    Hi All,
How can I improve query and loading performance?
    Thanks in advance.
    Rgrds
    shoba

    Hi Shoba
There are a lot of things you can do to improve query and loading performance.
Please refer to OSS Note 557870: Frequently asked questions on query performance.
    also refer to
    weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
These are the FAQs on query performance from the OSS note:
    1. What kind of tools are available to monitor the overall Query Performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (Use Export Mode!)
    3. Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all Info Cubes.
    ii. You need to run ST03N in expert mode to get these values
    5. What can I do if the database proportion is high for all queries?
    Check:
    1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
    2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
    3. If Buffers, I/O, CPU, memory on the database server are exhausted?
    4. If Cube compression is used regularly
    5. If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again you can use ST03N -> BW System Load
    2. Depending on the time frame you select, you get historical data or current data.
    3. To get to a specific query you need to drill down using the InfoCube name
    4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N
    values for a specific query?
    (Use Details to get the runtime segments)
    1. High Database Runtime
    2. High OLAP Runtime
    3. High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
2. Check if database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use database check for statistics and indexes)
    3. Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
1. Check if a high number of cells is transferred to the OLAP (use "All data" to get the value "No. of Cells")
    2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
3. Check if user exit usage is involved in the OLAP runtime.
4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the columns successor and predecessor to see which entry level of the hierarchy is used.
5. Check if a proper index on the inclusion table exists
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
    3. Check if the bandwidth for WAN connection is sufficient
and also some threads:
    how can i increse query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    may be helpful
    Regards
    C.S.Ramesh
    [email protected]

  • How to improve database link performance?

    Hello all,
We use db links to do DML operations on remote databases. For OLTP applications, we are facing performance problems for transactions that depend on data on the remote database.
For legal and business reasons we cannot store all the data locally.
Could anybody suggest how to improve database link performance, or suggest methods/procedures/techniques to enhance the speed of OLTP applications going against remote databases?
    Thanks
    Sky

    AQ is as reliable as Oracle-- the guarantees about delivery of queued messages are the same as the guarantees about committed transactions (i.e. ACID). AQ is designed for asynchronous operation, though. If you are batching transactions, it sounds like you are already doing some sort of asynchronous operations-- I've generally found AQ a lot easier to administer & maintain than rolling your own batching system.
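If you go that route, the basic AQ plumbing is fairly small. A minimal sketch, assuming a hypothetical payload object type ORDER_MSG_T and a hypothetical database link named REMOTE_DB; propagation then pushes enqueued messages to the remote database asynchronously:
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'order_qt',
    queue_payload_type => 'order_msg_t');  -- hypothetical object type
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'order_q',
    queue_table => 'order_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'order_q');
  DBMS_AQADM.SCHEDULE_PROPAGATION(
    queue_name  => 'order_q',
    destination => 'remote_db');  -- hypothetical database link name
END;
/
The local transaction then only pays for a local enqueue; the cross-dblink work happens in the background.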
    If you want to tune the Oracle side of things, you'll need to explain more about the system(s) involved here. Architecture, data flow, operations that involve the dblink, etc. If you're not comfortable posting that sort of information to a public forum, feel free to send me mail directly [email protected]
    As an aside, I'm interested in how you can legally pull data from the remote system to display to your users but that you can't legally cache that data in your system via replication. Sounds like an odd constraint.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • How to improve the query performance in to report level and designer level

How can I improve query performance at the report level and at the designer level?
Please let me know in detail.

First, it's all based on the design of the database, the universe, and the report.
At the universe level, you have to check your contexts very well to get the optimal performance of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters) and so on.
And when you create a parameter, try to match it with the key fields in the database.
Good luck
    Amr

  • How to improve the load performance while using Datasources for the Invoice

    HI All,
How can I improve the load performance while using DataSources for the Invoice? Actually, my invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hrs, to update data from R/3 to 0ASA_DS01.
If I load through a flat file, it loads within ~20 min for the same amount of data.
Please suggest how to improve load performance.
PS: I have done the InfoPackage settings as per the OSS note.
    Regads
    Srininivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting [/people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction|/people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction] which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh
    Edited by: Divyesh Jain on Jul 20, 2010 8:47 PM

  • How to improve the OpenGL performance for AE

    I upgraded my display card from Nvidia 8600GT to GTX260+ hoping to have a better and smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the 2 cards with the Cinebench software and the results are almost the same for the 2 cards.
I wonder why the GTX260+ costs about three times as much as the 8600GT when its OpenGL performance is almost the same.
    Any idea how to improve the OpenGL performance please ?
    Regards

    juskocf wrote:
    But to scrub the timeline smoothly, I think OpenGL plays an important role.
No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect or exhaustion of the available resources, can negate that.
    juskocf wrote:
    Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects.  Just wonder why the OpenGL Performance of such an expensive card should be so weak.
It's not the card, it's what the card does. See my above comment. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will allow you to manipulate bazillions of polygons - as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when it needs to use textures and load those textures into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL - OpenGL didn't exist when AE was created; rather, OpenGL was plugged on at some point, and now there are a number of situations where one gets in the way of the other...
    Mylenium

  • How to improve the query performance or tune query from Explain Plan

    Hi
The following is my explain plan for the SQL query (the plan was generated by Toad v9.7). How can I fix the query?
    SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204                                         
         8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1                                    
              5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1                               
                   2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1                          
                        1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1                          
                        3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                     
              7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1                               
                   6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                          
         10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1                                    
              12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                               
                   11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                          
              14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1                               
                   13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1                          
         21 FILTER                                    
              16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49                               
              20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1                               
                   18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                          
                        17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1                          
         23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204                                    
              42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204                               
                   38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204                          
                        34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925                     
                             30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699                
                                  26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18          
                                       25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18     
                                            24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
                                  29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32           
                                       28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32      
                                            27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
                             33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35                
                                  32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35           
                                       31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35      
                        37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38                     
                             36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2               
                                  35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2          
                   41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41                          
                        40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2                    
                             39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2               
              44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1                               
                   43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1

    damorgan wrote:
    Tuning is NOT about reducing the cost of i/o.
    i/o is only one of many contributors to cost and only one of many contributors to waits.
    Any time you would like to explore this further run this code:
SELECT 1 FROM dual
WHERE regexp_like(' ','^*[ ]*a');
but not on a production box because you are going to experience an extreme tuning event with zero i/o.
And when I say "extreme" I mean "EXTREME!"
You've been warned.
I think you just need a faster server.
    SQL> set autotrace traceonly statistics
    SQL> set timing on
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    no rows selected
    Elapsed: 00:00:00.00
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            243  bytes sent via SQL*Net to client
            349  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          0  rows processed
Repeated from an Oracle 10.2.0.x instance:
    SQL> SELECT DISTINCT SID FROM V$MYSTAT;
           SID
           310
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    Session altered.
    SQL> select 1 from dual
      2  where
  3  regexp_like   (' ','^*[ ]*a');
The session is hung. Wait a little while and connect to the database using a different session:
    COLUMN STAT_NAME FORMAT A35 TRU
    SET PAGESIZE 200
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SESS_TIME_MODEL
    WHERE
      SID=310;
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
RMAN cpu time (backup/restore)               0
Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
RMAN cpu time (backup/restore)               0
The session is not reporting additional CPU usage or parse time.
    Let's check one of the session's statistics:
    SELECT
      SS.VALUE
    FROM
      V$SESSTAT SS,
      V$STATNAME SN
    WHERE
      SN.NAME='consistent gets'
      AND SN.STATISTIC#=SS.STATISTIC#
      AND SS.SID=310;
         VALUE
       163
Not many consistent gets after 20+ minutes.
    Let's take a look at the plan:
SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
    SQL_ID        CHILD_NUMBER
    04mpgrzhsv72w            0
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
    select 1 from dual where regexp_like   (' ','^*[ ]*a')
    NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
          Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)
No plan...
    Let's take a look at the 10053 trace file:
    Registered qb: SEL$1 0x19157f38 (PARSER)
      signature (): qb_name=SEL$1 nbfros=1 flg=0
        fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    CVM: Considering view merge in query block SEL$1 (#0)
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    Subquery Unnest
    SU: Considering subquery unnesting in query block SEL$1 (#0)
    Set-Join Conversion (SJC)
    SJC: Considering set-join conversion in SEL$1 (#0).
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    PM:     PM bypassed: Outer query contains no views.
    FPD: Considering simple filter push in SEL$1 (#0)
    FPD:   Current where clause predicates in SEL$1 (#0) :
              REGEXP_LIKE (' ','^*[ ]*a')
    kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
    predicates with check contraints:  REGEXP_LIKE (' ','^*[ ]*a')
    after transitive predicate generation:  REGEXP_LIKE (' ','^*[ ]*a')
    finally:  REGEXP_LIKE (' ','^*[ ]*a')
    apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
    kkoqbc-start
                : call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
kkoqbc-subheap (create addr=000000001915C238)
Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
I am not sure that this is a good example - the query either executes very fast, or never has a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • How to improve the query performance

    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
;WITH Dates AS
(
SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
WHERE DateValue + 1 <= @EndDate
)
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    -------WHERE TSK.ProjectID = @Project-----
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
END
hi..
My SP is as above..
I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs and a Program contains many Projects.
When I selected the ALL value for the parameters Program and Project, I was unable to get output.
But when I select values for all 3 parameters, I get output. I used default values for the parameters also.
So I commented out the where condition in the SP as shown above
--------where TSK.ProjectID=@Project-------------
Now I get output when selecting the ALL value for the parameters.
But here the issue is performance: it takes 10 sec to retrieve a single project when I execute the SP.
How can I create an index on a temp table in this SP, and how can I improve the query performance?
Please help.
Thanks in advance..
lucky

Didn't I provide you a solution in the other thread?
    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
;WITH Dates AS
(
SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
WHERE DateValue + 1 <= @EndDate
)
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    WHERE (TSK.ProjectID = @Project OR @Project = -1)
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
END
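On the index part of the question: once #DailyTasks has been populated by the SELECT ... INTO, an index can be created on it just like on a permanent table. A sketch using the column names already in the procedure (whether it actually helps depends on the row counts, so test it):
CREATE NONCLUSTERED INDEX IX_DailyTasks
ON #DailyTasks (ResourceID, WorkDate)
INCLUDE (EstimateHours, WorkDayCount);
The commented-out CREATE INDEX IDX_Dates ON #Dates(WorkDate) is valid as written and can simply be uncommented after the table is created.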
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • How to improve X11 apps performance?

    Hi all,
I'm looking for advice on how to improve the performance of X11 apps on Lion (10.7.3), specifically apps running in the (Amazon) cloud, because local X11 apps (e.g. GIMP) run just fine.
    Some of the apps I have tried include: (basic) xfontsel, Firefox on Amz EC2 Linux64 AMI, Chromium on Amz EC2 Ubuntu AMI. These mostly seem sluggish compared to the average performance I've come to expect from most OS X apps, including x11 ones like, again, GIMP.
    Thanks in advance.

    You wrote "VNC into the Linux system and run the X11 session local to the virtual machine".
On my headless remote Linux virtual machine client, I have it configured so vncserver is started at boot time on port 5951 (because I want to use Display 51), running a gnome session via ~username/.vnc/xstartup:
    #!/bin/sh
    # Bob Harris $HOME/.vnc/xstartup
    [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
    xsetroot -solid grey
    vncconfig -iconic &     # needed for clipboard support.
# run gnome as my session manager.
    /usr/bin/gnome-session &
    I modify
    /etc/sysconfig/vncservers
    and add
    VNCSERVERS="51:myusername"
    VNCSERVERSARGS[51]="-geometry 800x600"
    Then run the command
    sudo chkconfig --level 345 vncserver on
    to configure the vncserver so it is started in runlevel's 3, 4, and 5 after booting.
    Do i understand correctly that you VNC into a shell?
    As stated above, vncserver is started at boot time via one of the /etc/sysconfig/vncservers file and the chkconfig command.
I ssh into a shell session on my remote Linux system (often several ssh shell sessions), but they are not involved with the VNC sessions (except when I did my initial vncserver configuration work).
    and the VM uses a vnc server that is not x11vnc.
    While I have played with x11vnc, I do not need it for my headless Linux system.
Where I have found x11vnc useful is when I want to mirror a "real" monitor attached to a Linux workstation.  Generally speaking the vncserver will NOT attach to a real monitor.  But if you use the Linux workstation while in the office, and then want to take over the active sessions when you go home at night, or are working from home the next day, then x11vnc is useful.  There are several people in our office that only come into work a few days a week, and want the ability to continue working where they left off while at work.
    My remote Linux system does not have a display head, so the default vncserver is perfectly OK.
    And running X11 local on VM means startx from that shell?
That means, I use a VNC client on my Mac that connects to the remote Linux vncserver, started as specified above.  This VNC session gives me access to the remote Linux's vncserver-started desktop.  From there I can start, local to the remote Linux box, X11 GUI sessions, xterm sessions, etc., all of which are presented to me via the VNC session.
As for Mac VNC clients.  There is always the built-in Mac OS X VNC client:  Finder -> Go -> Connect to server -> vnc://address.of.remote.Linux:5951.  Or you can use Chicken (formerly known as Chicken of the VNC), RealVNC, JollysFastVNC, and if you want to you can even use TightVNC via MacPorts.org, which will use the Mac OS X X11 as its display.  There are most likely other Mac VNC clients, but these are the ones I'm familiar with.
    To recap.  I have VNC started on my remote system via configuration options.  I connect using a Mac OS X VNC client, then through the VNC client, I start X11 sessions that run local to the remote Linux box and allow VNC to show me the image.
I also ssh into a bash shell session on my remote Linux box where I mostly edit sources via Vim, do compiles, source code control, etc...  All the typical software developer activities.
And for some very specific X11 applications (mostly gvimdiff) I will allow the X11 display to be exported to my Mac across an ssh -Y connection, but ONLY because the work network connection to that facility 2,000 miles away is a very fat, very fast networking connection, AND because gvimdiff is not as X11-chatty as a lot of other X11 GUI applications.  But I do not use gvimdiff across the internet if my connection is slow.  For example, if I'm at home, my home network connection is not all that fast, so I then use a VNC session for that kind of thing.  But since I mostly go into the office, I only really play with VNC from home when I'm sick or need to be home for a delivery or repair people working at the house.
    Hopefully you understand my VNC vs ssh vs X11 usage now.

  • How to improve slow PowerPivot performance when adding/modifying measures, calculated columns or Relationships?

    I have been using PowerPivot for a couple of months now and whilst it is extremely quick when pulling in data to populate Pivot Tables, it is extremely slow to make the following kind of changes to the Data Model:
    - Add a Measure / Calculated Field
    - Add a Calculated Column
    - Rename a Calculated Field
    - Re-name a Calculated Column
    - Modify a relationship
    - Change a tables properties
    - Update a table
In the status bar of Excel I get a very quick 'calculating', then it spends a lot of time 'reading data', then it 'finalises', after which nothing is in the status bar but it still takes approx. 45 seconds before the program becomes responsive again. This waiting time does not change depending on the action; it is the same if I rename a column as it is if I add a new measure.
    My question is what affects performance of these actions and how do I improve it?
    To give you an idea of where my data comes from, I have:
    - 7 tables that feed into the Data Model directly from within the workbook which contains the data model itself. These are a combination of static tables and tables that connect to a MySQL database.
    - 6 separate workbooks which contain static data that is updated manually periodically (copied and pasted from another source)
    - 5 separate workbooks which contain dynamic tables that are linked to our MySQL database and update when opened.
Now I realise that this is probably where my issue is, however I have no idea how to fix it. You do not seem to be able to connect to a MySQL database directly within the PowerPivot window itself, so there is no way to generate and update tables without first creating them either in a worksheet or a separate workbook (as far as I know). If I try to create all of the tables directly within the single workbook containing the Data Model I get performance and crashing issues, hence why I separate tables into individual workbooks.
    Any advice on how to improve performance would be tremendously appreciated. I'm new and keen to learn, I'm aware this set-up is far from best practice.
    Hardware wise I am using:
    - Windows 8 64-bit
    - Excel 2013 64-bit
    - Intel Core i7 processor
    - 6 GB Ram
    Thanks,
    James

    Darren,
I think the point I was making is it's in memory, geez... BTW, what do all applications do when they run out of paged memory? If PowerPivot is using all available memory, then wouldn't this force the other applications to use virtual memory, essentially writing back and forth to the disks? I think virtual memory writes to disk??, lol. Also, there are parts of the architecture of Excel 2013 that require memory when importing data into PowerPivot, and when working in SharePoint the PowerPivot data is cached to disk unless recently refreshed... But this conversation isn't helping James, who asked the question, and as much as I would love to continue, it's become a little boring..
    Hi James,
If you download one of the ODBC MySQL Connectors from http://dev.mysql.com/downloads/connector/odbc/ (I believe yours is the first one for x64 systems) and connect directly to the data, you should be able to reduce the number of workbooks you're opening. And if you notice in the following graphic, these connections are automatically refreshed by default; the parts in red are the differences between PowerPivot 2010 and 2013.
You should notice a lot of improvement, especially when refreshing data. Please let us know how it goes...
    After registering the ODBC Driver
Click Add on the User-DSN tab, choose the “MySQL ODBC 5.x driver”, fill in the credentials, choose a database (from the select menu) and a data source name, and you’re done.
Back in Excel you go to the PowerPivot section of the ribbon and open the PowerPivot window (the green icon on the left side). In the ‘Home’ section of that window you will see a small gray cylindrical symbol (the international symbol for “database”) which will suggest different data sources to choose from. Take the one where it says “ODBC”.
In the next dialog you click on Create, choose the adapter, and then OK. Back in the assistant you can check the connection and proceed.
Now you have the choice between importing the data from tables using the import assistant or via a query, depending on your skill set.
    Cheers,
    Ivan
    Ivan Sanders <a href="http://www.linkedin.com/in/iasanders">My LinkedIn </a> , <a href="http://msmvps.com/blogs/ivansanders">My Blog</a>, <a href="http://twitter.com/iasanders"> @iasanders</a>,
    <a href="http://shop.oreilly.com/product/0790145372703.do">BI in SP2013</a>, <a href="http://sharepointdemobuilds.codeplex.com">SP2013 Content Packs</a>.

  • How to improve the load performance

Can anybody tell me how to improve the load performance?

    Hi,
    for all loads: improve your ABAP code in routines.
    for master data load:
- load master data attributes before the characteristic itself
    - switch number range buffering on for initial loads
    for transactional loads:
    - load all your master data IObjs prior loading your cube / ODS
- depending on the ratio of no. of records loaded to no. of records in the cube's F fact table, drop / recreate indexes (if the ratio is more than 40/50%)
- switch on number range buffering for dimensions with a high number of records for initial loads
- switch on number range buffering for master data IObjs which aren't loaded via master data (SIDs are always created during transactional loads; e.g. document, item...)
    these recommendations are just some among others like system tuning, DB parameters...
    hope this helps...
    Olivier.

  • How to improve the write performance of the database

Our application is write-intensive; it may write 2 MB/second of data to the database. How can we improve the performance of the database? We mainly write to 5 tables of the database.
Currently, the database gets no response and the CPU is 100% used.
How can we tune this? Thanks in advance.

    Your post says more by what is not provided than by what is provided. The following is the minimum list of information needed to even begin to help you.
    1. What hardware (server, CPU, RAM, and NIC and HBA cards if any pointing to storage).
    2. Storage solution (DAS, iSCSCI, SAN, NAS). Provide manufacturer and model.
    3. If RAID which implementation of RAID and on how many disks.
    4. If NAS or SAN how is the read-write cache configured.
    5. What version of Oracle software ... all decimal points ... for example 11.1.0.6. If you are not fully patched then patch it and try again before asking for help.
    6. What, in addition to the Oracle database, is running on the server?
2MB/sec. is very little. That is equivalent to inserting 500 VARCHAR2(4000)s per second. If I couldn't do 500 inserts per second on my laptop I'd trade it in.
    SQL> create table t (
      2  testcol varchar2(4000));
    Table created.
    SQL> set timing on
    SQL> BEGIN
      2    FOR i IN 1..500 LOOP
      3      INSERT INTO t SELECT RPAD('X', 3999, 'X') FROM dual;
      4    END LOOP;
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.07
SQL>
Now what to do with the remaining 0.93 seconds? <g> And this was on a T61 Lenovo with a slow little 7500RPM drive and 4GB RAM running Oracle Database 11.2.0.1. But I will gladly repeat it using any currently supported version of the product.

  • How to Improve Report View performance

Hi All, I have a Webi report which runs for about 3 minutes. But when I click on view, the report takes about 21 seconds (on average) to open up. Any ideas on how to improve the report view performance? Does it have anything to do with server load? Any server settings to tweak to speed it up? Any ideas are appreciated.
The requirement is that my web team has to strip off the Business Objects logo etc. (using the SDK) and display the report in my company web page, so it's looking sort of ugly as the web page takes about 21 seconds just to display the report.
    Some Report statistics:
    Report size is about 90 MB, as it has about 300 k rows of data(which i am aggregating using formulas)
    Report has about 15 simple division formulas
    Report is in Drill Mode. There are about 5 drill filters
    Thanks,
    Kon

    Hi Larry,
    I'll assume you are scheduling this report and viewing the instance in ~21 seconds.  Is that correct?
    We definitely need some environment info to go along with this post.  Like Simone said, Product Version, Patch Level, and other OS, Hardware, App Server details would help as well.
    There are certain properties of a document that can slow down the rendering of a report but we generally have to look at the logs to determine what part of the report is taking the longest time to process.  Assuming this is an instance, I would be curious to know if it is quicker to come up if you immediately view it a second time?
    If you were to turn on a trace, you would see a number of lines like this:
    2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut
    2011/06/15 20:11:54.153|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut: 0
    2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:QTDP_StreamUnit_SerializeOut: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveMe_Serial: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveAll_Serial: 0.015
    The numbers at the end are how long the function took to run.  Generally the function gives us an idea of what the engine was doing.
    When evaluating performance issues, you can occasionally find a function that is taking long to run within the logs and based on the function and module names, it can sometime lead you to the reason it is taking longer than expected.
    Another good test might be to run a very basic report to see how long it takes to come up.  Even a report without a datasource would suffice as that will give you your baseline time on how long it takes to load the viewer, convert the WID file to XML and send it up through the application server to your browser.  If a test report takes 15 seconds to view, then you are really only looking at 6 seconds for this other report.
    Hope this helps and gets you started.  More environment info would help take it further.
    Thanks
    Jb

  • How to improve and maintain performance of droid phones

I've read bits and pieces about how to make these phones faster, but what's the best way of improving the phone's performance, and maintaining that performance, without overclocking or putting on custom ROMs?

Biggest thing to do is keep the cache cleared out of the applications. I recommend checking once a week, depending on usage.
Keep an eye on your internal storage. Anything below 30 MB needs some serious cleaning of applications, cache, call history, and text messages, in that order. I try to keep my internal storage at 50 MB or higher.
Also try to pay attention to your Dialer Storage. It holds the call history and text messages, but it can grow quickly. I found out the hard way. I had some ringtones saved in a text message thread but rarely looked at them. Then one weekend, almost six months after I got the phone, I was looking at the thread a lot because a new message had been sent from that number. The Dialer Storage went from 5 MB to 21 MB in a couple of days. Even after deleting the entire thread it only went down 1 MB. There was no way to clear data for that app, so I ended up doing a factory reset. Now Dialer Storage is a baby size of 64 KB!
I never used a task killer, only task managers.
I have a battery monitor and have seen no big difference. However, I don't use Facebook or Twitter, so I don't have those constantly updating.

  • How to Improve ASM IO Performance

How can I improve ASM I/O performance? Are there any parameters for doing so?
    I am using 11.2.0.3 on Linux x86-64.

    Hello;
There's a paper on this here:
    www.orafaq.com/papers/tuning_asm.pdf
    You will have to judge how good it is.
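As a starting point before changing any parameters, it can help to check whether I/O is spread evenly across the ASM disks. A minimal sketch (assuming you can query the ASM dynamic views):
SELECT group_number, name, reads, writes, read_time, write_time
FROM   v$asm_disk_stat
ORDER  BY group_number, name;
If one disk shows far higher read_time per read than the others, the problem is more likely hardware or layout than an ASM parameter.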
    Best Regards
    mseberg
