How to improve query performance at the report level and designer level

How can I improve query performance at the report level and at the designer level?
Please let me know the details.

First, it all depends on the design of the database, the universe, and the report.
At the universe level, check your contexts carefully to get optimal performance, and keep your joins on key fields; that will give you the best performance.
At the report level, make the reports as dynamic as you can (parameters and so on).
And when you create a parameter, try to match it to the key fields in the database.
good luck
Amr
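
For illustration, matching a parameter to an indexed key field keeps the generated condition sargable, so the database can use an index seek instead of a full scan. A minimal sketch (the table, column, and bind names are illustrative, not from the original post):

    -- Sargable: an index on orders.customer_id can be used for the prompt value
    SELECT o.order_id, o.order_total
    FROM   orders o
    WHERE  o.customer_id = :prompt_customer_id;

    -- Not sargable: wrapping the key column in a function defeats the index
    SELECT o.order_id, o.order_total
    FROM   orders o
    WHERE  TO_CHAR(o.customer_id) = :prompt_customer_id;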

Similar Messages

  • How to improve the query performance

    ALTER PROCEDURE [SPNAME]
        @Portfolio INT,
        @Program INT,
        @Project INT
    AS
    BEGIN
        DECLARE @StartDate DATETIME
        DECLARE @EndDate DATETIME
        SET @StartDate = '11/01/2013'
        SET @EndDate = '02/28/2014'

        IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
            DROP TABLE #Dates
        IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
            DROP TABLE #DailyTasks

        CREATE TABLE #Dates (WorkDate DATE)
        --CREATE INDEX IDX_Dates ON #Dates(WorkDate)

        ;WITH Dates AS
        (
            SELECT @StartDate AS DateValue
            UNION ALL
            SELECT DateValue + 1
            FROM Dates
            WHERE DateValue + 1 <= @EndDate
        )
        INSERT INTO #Dates
        SELECT DateValue
        FROM Dates D
        LEFT JOIN tb_Holidays H
            ON H.HolidayOn = D.DateValue
           AND H.OfficeID = 2
        WHERE DATEPART(dw, DateValue) NOT IN (1, 7)
          AND H.UID IS NULL
        OPTION (MAXRECURSION 0)

        SELECT TSK.TaskID,
               TR.ResourceID,
               WC.WorkDayCount,
               (TSK.EstimateHrs / WC.WorkDayCount) AS EstimateHours,
               D.WorkDate,
               TSK.ProjectID,
               RES.ResourceName
        INTO #DailyTasks
        FROM Tasks TSK
        INNER JOIN TasksResource TR
            ON TSK.TaskID = TR.TaskID
        INNER JOIN tb_Resource RES
            ON TR.ResourceID = RES.UID
        OUTER APPLY (SELECT COUNT(*) AS WorkDayCount
                     FROM #Dates
                     WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate) WC
        INNER JOIN #Dates D
            ON D.WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
        -------WHERE TSK.ProjectID = @Project-----

        SELECT D.ResourceID,
               D.WorkDayCount,
               SUM(D.EstimateHours / D.WorkDayCount) AS EstimateHours,
               D.WorkDate,
               T.TaskID,
               D.ResourceName
        FROM #DailyTasks D
        OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255)) + ','
                             FROM #DailyTasks DA
                             WHERE D.WorkDate = DA.WorkDate
                               AND D.ResourceID = DA.ResourceID
                             FOR XML PATH('')) AS TaskID) T
        LEFT JOIN tb_Project PRJ
            ON D.ProjectID = PRJ.UID
        INNER JOIN tb_Program PR
            ON PRJ.ProgramID = PR.UID
        INNER JOIN tb_Portfolio PF
            ON PR.PortfolioID = PF.UID
        WHERE (@Portfolio = -1 OR PF.UID = @Portfolio)
          AND (@Program = -1 OR PR.UID = @Program)
          AND (@Project = -1 OR PRJ.UID = @Project)
        GROUP BY D.ResourceID,
                 D.WorkDate,
                 T.TaskID,
                 D.WorkDayCount,
                 D.ResourceName
        HAVING SUM(D.EstimateHours / D.WorkDayCount) > 8
    END
    Hi,
    My SP is as above. I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs and a Program contains many Projects.
    When I selected the ALL value for the Program and Project parameters, I was unable to get output,
    but when I select values for all 3 parameters I get output. I set default values for the parameters as well.
    So I commented out the WHERE condition in the SP as shown above:
    --------where TSK.ProjectID=@Project-------------
    Now I get output when selecting the ALL value for the parameters.
    But the issue now is performance: it takes 10 seconds to retrieve a single project when I execute the SP.
    How can I create an index on a temp table in this SP, and how can I improve the query performance?
    Please help.
    Thanks in advance.
    lucky
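
    On the temp-table index question: SQL Server allows ordinary CREATE INDEX statements on temp tables, along the lines of the commented-out line already in the procedure. A minimal sketch, using the columns from the procedure above (the exact index choice would still need to be validated against the actual execution plan):

        -- Index #Dates so the BETWEEN probes in the OUTER APPLY become range seeks
        CREATE CLUSTERED INDEX IDX_Dates ON #Dates (WorkDate);

        -- Index #DailyTasks to support the self-join on (WorkDate, ResourceID)
        CREATE NONCLUSTERED INDEX IDX_DailyTasks
            ON #DailyTasks (WorkDate, ResourceID)
            INCLUDE (TaskID, EstimateHours, WorkDayCount);

    Creating the indexes after the tables are populated avoids slowing down the inserts.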

    Didn't I provide you a solution in the other thread?
    ALTER PROCEDURE [SPNAME]
        @Portfolio INT,
        @Program INT,
        @Project INT
    AS
    BEGIN
        DECLARE @StartDate DATETIME
        DECLARE @EndDate DATETIME
        SET @StartDate = '11/01/2013'
        SET @EndDate = '02/28/2014'

        IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
            DROP TABLE #Dates
        IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
            DROP TABLE #DailyTasks

        CREATE TABLE #Dates (WorkDate DATE)
        --CREATE INDEX IDX_Dates ON #Dates(WorkDate)

        ;WITH Dates AS
        (
            SELECT @StartDate AS DateValue
            UNION ALL
            SELECT DateValue + 1
            FROM Dates
            WHERE DateValue + 1 <= @EndDate
        )
        INSERT INTO #Dates
        SELECT DateValue
        FROM Dates D
        LEFT JOIN tb_Holidays H
            ON H.HolidayOn = D.DateValue
           AND H.OfficeID = 2
        WHERE DATEPART(dw, DateValue) NOT IN (1, 7)
          AND H.UID IS NULL
        OPTION (MAXRECURSION 0)

        SELECT TSK.TaskID,
               TR.ResourceID,
               WC.WorkDayCount,
               (TSK.EstimateHrs / WC.WorkDayCount) AS EstimateHours,
               D.WorkDate,
               TSK.ProjectID,
               RES.ResourceName
        INTO #DailyTasks
        FROM Tasks TSK
        INNER JOIN TasksResource TR
            ON TSK.TaskID = TR.TaskID
        INNER JOIN tb_Resource RES
            ON TR.ResourceID = RES.UID
        OUTER APPLY (SELECT COUNT(*) AS WorkDayCount
                     FROM #Dates
                     WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate) WC
        INNER JOIN #Dates D
            ON D.WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
        WHERE (TSK.ProjectID = @Project OR @Project = -1)

        SELECT D.ResourceID,
               D.WorkDayCount,
               SUM(D.EstimateHours / D.WorkDayCount) AS EstimateHours,
               D.WorkDate,
               T.TaskID,
               D.ResourceName
        FROM #DailyTasks D
        OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255)) + ','
                             FROM #DailyTasks DA
                             WHERE D.WorkDate = DA.WorkDate
                               AND D.ResourceID = DA.ResourceID
                             FOR XML PATH('')) AS TaskID) T
        LEFT JOIN tb_Project PRJ
            ON D.ProjectID = PRJ.UID
        INNER JOIN tb_Program PR
            ON PRJ.ProgramID = PR.UID
        INNER JOIN tb_Portfolio PF
            ON PR.PortfolioID = PF.UID
        WHERE (@Portfolio = -1 OR PF.UID = @Portfolio)
          AND (@Program = -1 OR PR.UID = @Program)
          AND (@Project = -1 OR PRJ.UID = @Project)
        GROUP BY D.ResourceID,
                 D.WorkDate,
                 T.TaskID,
                 D.WorkDayCount,
                 D.ResourceName
        HAVING SUM(D.EstimateHours / D.WorkDayCount) > 8
    END
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
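
    One general caution with the (@Param = -1 OR column = @Param) pattern used here: a single cached plan has to cover both the "all" case and the specific case, which often leads to poor plans. A commonly used mitigation, sketched below (whether it helps here would need to be verified against the actual workload), is to add OPTION (RECOMPILE) so each execution is optimized for the supplied parameter values:

        WHERE (@Portfolio = -1 OR PF.UID = @Portfolio)
          AND (@Program = -1 OR PR.UID = @Program)
          AND (@Project = -1 OR PRJ.UID = @Project)
        GROUP BY D.ResourceID, D.WorkDate, T.TaskID, D.WorkDayCount, D.ResourceName
        HAVING SUM(D.EstimateHours / D.WorkDayCount) > 8
        OPTION (RECOMPILE)  -- recompile per execution; trades CPU for a value-specific plan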

  • How to improve the query performance or tune a query from its explain plan

    Hi
    The following is my explain plan for a SQL query (the plan was generated by Toad v9.7). How can I fix the query?
    SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204                                         
         8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1                                    
              5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1                               
                   2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1                          
                        1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1                          
                        3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                     
              7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1                               
                   6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                          
         10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1                                    
              12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                               
                   11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                          
              14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1                               
                   13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1                          
         21 FILTER                                    
              16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49                               
              20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1                               
                   18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                          
                        17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1                          
         23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204                                    
              42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204                               
                   38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204                          
                        34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925                     
                             30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699                
                                  26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18          
                                       25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18     
                                            24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
                                  29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32           
                                       28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32      
                                            27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
                             33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35                
                                  32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35           
                                       31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35      
                        37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38                     
                             36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2               
                                  35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2          
                   41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41                          
                        40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2                    
                             39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2               
              44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1                               
                   43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1

    damorgan wrote:
    Tuning is NOT about reducing the cost of i/o.
    i/o is only one of many contributors to cost and only one of many contributors to waits.
    Any time you would like to explore this further run this code:
    SELECT 1 FROM dual
    WHERE regexp_like(' ','^*[ ]*a');
    but not on a production box because you are going to experience an extreme tuning event with zero i/o.
    And when I say "extreme" I mean "EXTREME!"
    You've been warned.
    I think you just need a faster server.
    SQL> set autotrace traceonly statistics
    SQL> set timing on
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    no rows selected
    Elapsed: 00:00:00.00
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            243  bytes sent via SQL*Net to client
            349  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Repeated from an Oracle 10.2.0.x instance:
    SQL> SELECT DISTINCT SID FROM V$MYSTAT;
           SID
           310
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    Session altered.
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    The session is hung. Wait a little while and connect to the database using a different session:
    COLUMN STAT_NAME FORMAT A35 TRU
    SET PAGESIZE 200
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SESS_TIME_MODEL
    WHERE
      SID=310;
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    The session is not reporting additional CPU usage or parse time.
    Let's check one of the session's statistics:
    SELECT
      SS.VALUE
    FROM
      V$SESSTAT SS,
      V$STATNAME SN
    WHERE
      SN.NAME='consistent gets'
      AND SN.STATISTIC#=SS.STATISTIC#
      AND SS.SID=310;
         VALUE
           163
    Not many consistent gets after 20+ minutes.
    Let's take a look at the plan:
    SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
    SQL_ID        CHILD_NUMBER
    04mpgrzhsv72w            0
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
    select 1 from dual where regexp_like   (' ','^*[ ]*a')
    NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
          Please verify value of SQL_ID and CHILD_NUMBER;
          It could also be that the plan is no longer in cursor cache (check v$sql_plan)
    No plan...
    Let's take a look at the 10053 trace file:
    Registered qb: SEL$1 0x19157f38 (PARSER)
      signature (): qb_name=SEL$1 nbfros=1 flg=0
        fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    CVM: Considering view merge in query block SEL$1 (#0)
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    Subquery Unnest
    SU: Considering subquery unnesting in query block SEL$1 (#0)
    Set-Join Conversion (SJC)
    SJC: Considering set-join conversion in SEL$1 (#0).
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    PM:     PM bypassed: Outer query contains no views.
    FPD: Considering simple filter push in SEL$1 (#0)
    FPD:   Current where clause predicates in SEL$1 (#0) :
              REGEXP_LIKE (' ','^*[ ]*a')
    kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
    predicates with check contraints:  REGEXP_LIKE (' ','^*[ ]*a')
    after transitive predicate generation:  REGEXP_LIKE (' ','^*[ ]*a')
    finally:  REGEXP_LIKE (' ','^*[ ]*a')
    apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
    kkoqbc-start
                : call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
    kkoqbc-subheap (create addr=000000001915C238)
    Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
    I am not sure that this is a good example - the query either executes very fast, or never gets a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Select statement taking too much time. How to improve the query performance?

    SELECT DISTINCT ORDERKEY,
           SUM(IMPRESSIONCNT) AS ActualImpressions,
           SUM(DiscountedSales) AS ActualRevenue,
           SUM(AgencyCommAmt) AS AgencyCommAmt,
           SUM(SalesHouseCommAMT) AS SalesHouseCommAMT
    --INTO Anticiapted_ADXActualsMeasures
    FROM AdRevenueFact_ADX ADx WITH (NOLOCK)
    WHERE FiscalMonthkey >= 201301
      AND EXISTS (SELECT 1 FROM Anticipated_cdr_AX_OrderItem OI WHERE ADx.Orderkey = OI.Orderkey)
    GROUP BY ORDERKEY
    The clustered index is on Orderkey and FiscalMonthkey in AdRevenueFact_ADX (which contains more than 170 million rows).
    thanks

    As mentioned by Kalman, if your clustered index starts with Orderkey, then this query will require a full table scan. If it is an option to change the clustered index in such a way that FiscalMonthkey is the leading column, then only the data of the last two years has to be queried.
    In addition, you should have a look at the indexes of table Anticipated_cdr_AX_OrderItem. Ideally, there is a nonclustered index on Orderkey.
    To get better advice, please post the query plan and list all available indexes of these tables.
    Finally, an off-topic remark: it is good practice to keep the spelling of object names consistent, and the same as in their declaration. Your query would cause serious problems if the database were ever run with a case-sensitive collation.
    Gert-Jan
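
    A sketch of the indexing Gert-Jan describes, using the names from the posted query (any change to the clustered index of a 170-million-row table should of course be tested first):

        -- Recreate the clustered index with FiscalMonthkey leading, so that
        -- FiscalMonthkey >= 201301 becomes a range seek instead of a full scan
        -- (the existing clustered index must be dropped or rebuilt first)
        CREATE CLUSTERED INDEX CIX_AdRevenueFact_ADX
            ON AdRevenueFact_ADX (FiscalMonthkey, Orderkey);

        -- Support the EXISTS probe with a nonclustered index on the join key
        CREATE NONCLUSTERED INDEX IX_OrderItem_Orderkey
            ON Anticipated_cdr_AX_OrderItem (Orderkey);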

  • How to improve the load performance while using DataSources for the invoice

    Hi All,
    How can I improve the load performance while using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hours, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file, it loads within about 20 minutes for the same amount of data.
    Please suggest how to improve the load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards
    Srininivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction, which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh

  • How to improve the OpenGL performance for AE

    I upgraded my display card from an Nvidia 8600GT to a GTX260+, hoping for better and smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the two cards with the Cinebench software and the results are almost the same.
    I wonder why the GTX260+ costs about 3 times as much as the 8600GT, yet the OpenGL performance is almost the same.
    Any idea how to improve the OpenGL performance, please?
    Regards

    juskocf wrote:
    But to scrub the timeline smoothly, I think OpenGL plays an important role.
    No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect, or exhaustion of the available resources, can negate that.
    juskocf wrote:
    Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects.  Just wonder why the OpenGL Performance of such an expensive card should be so weak.
    It's not the card, it's what the card does. See my above comment. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will allow you to manipulate bazillions of polygons - as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when the card needs to use textures and load those textures into the graphics card's memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL, because at the time OpenGL didn't exist; rather, the other way around, OpenGL was plugged on at some point, and now there are a number of situations where one gets in the way of the other...
    Mylenium

  • What version of OS X would give an iMac 24'' 2.3 GHz Intel Core 2 Duo, with 4 GB of DDR2 SDRAM, the best performance for illustration and design jobs?

    What version of OS X would give an iMac 24'' 2.3 GHz Intel Core 2 Duo, with 4 GB of DDR2 SDRAM, the best performance for illustration and design jobs?
    Would the more recent OS versions increase the overall performance? Is it worth it?

    Hello!
    Snow Leopard is what I am currently using. But I guess it does need some cleanup, and maybe a clean install, to see if it gets faster.
    I believe 4 GB is the limit for this model, although I've seen some with 6 GB…
    Thank you for the help.

  • How to improve the write performance of the database

    Our application is write-intensive; it may write 2 MB/second of data to the database. How can we improve the performance of the database? We mainly write to 5 tables of the database.
    Currently, the database gets no response and the CPU is 100% used.
    How can we tune this? Thanks in advance.

    Your post says more by what is not provided than by what is provided. The following is the minimum list of information needed to even begin to help you.
    1. What hardware (server, CPU, RAM, and NIC and HBA cards if any pointing to storage).
    2. Storage solution (DAS, iSCSI, SAN, NAS). Provide manufacturer and model.
    3. If RAID, which implementation of RAID and on how many disks.
    4. If NAS or SAN, how is the read-write cache configured.
    5. What version of Oracle software ... all decimal points ... for example 11.1.0.6. If you are not fully patched then patch it and try again before asking for help.
    6. What, in addition to the Oracle database, is running on the server?
    2 MB/sec. is very little. That is equivalent to inserting 500 VARCHAR2(4000) rows per second. If I couldn't do 500 inserts per second on my laptop I'd trade it in.
    SQL> create table t (
      2  testcol varchar2(4000));
    Table created.
    SQL> set timing on
    SQL> BEGIN
      2    FOR i IN 1..500 LOOP
      3      INSERT INTO t SELECT RPAD('X', 3999, 'X') FROM dual;
      4    END LOOP;
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.07
    SQL>
    Now what to do with the remaining 0.93 seconds. <g> And this was on a T61 Lenovo with a slow little 7500RPM drive and 4GB RAM running Oracle Database 11.2.0.1. But I will gladly repeat it using any currently supported version of the product.

  • How to improve the load performance

    Can anybody tell me how to improve the load performance?

    Hi,
    For all loads: improve your ABAP code in routines.
    For master data loads:
    - load master data attributes before the characteristic itself
    - switch number range buffering on for initial loads
    For transactional loads:
    - load all your master data IObjs prior to loading your cube / ODS
    - depending on the ratio of records loaded to records already in the cube's F fact table, drop / recreate indexes (if the ratio is more than 40-50%)
    - switch on number range buffering for dimensions with a high number of records for initial loads
    - switch on number range buffering on master data IObjs which aren't loaded via master data (SIDs are always created during transactional loads; e.g. document, item...)
    These recommendations are just some among others like system tuning, DB parameters...
    Hope this helps...
    Olivier.

  • How to eliminate the correlated subquery to improve the query performance

    Please find below the query, which takes a long time to fetch the records.
    SQL> SET LINE 120
    SQL> EXPLAIN PLAN FOR select *
      2                 from KEMP_SRC a1
      3                 where ('MOFF' is null or eq_name = 'MOFF')
      4                       and
      5                         is_ad_hoc <> 1
      6                         and (pb_proc_id is null
      7                         or pb_proc_id in (select proc_id from KEMP_CONFIG where frequency_type <> -1)
      8                         )
      9                    and     KEMPUtility.DTTM(end_dt, end_tm) in      (select max(KEMPUtility.DTTM(end_dt, end_tm))
    10                                from KEMP_SRC a2
    11                                where a2.eq_name = a1.eq_name
    12                                and (a2.pb_proc_id = a1.pb_proc_id or (a2.pb_proc_id is null and a1.pb_proc_id is null))
    13                                and a2.is_ad_hoc <> -1 -- repeating case
    14                                group by eq_name, pb_proc_id
    15                                );
    Explained.
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    Plan hash value: 2624956131
    | Id  | Operation             | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                           |     1 |    96 | 69399   (3)| 00:13:53 |
    |*  1 |  FILTER               |                           |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL   | KEMP_SRC                  |  2896 |   271K|   124   (2)| 00:00:02 |
    |*  3 |   TABLE ACCESS FULL   | KEMP_CONFIG               |     1 |    26 |     2   (0)| 00:00:01 |
    |*  4 |    FILTER             |                           |       |       |            |          |
    |   5 |     HASH GROUP BY     |                           |     1 |    35 |   125   (3)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    |*  6 |      TABLE ACCESS FULL| KEMP_SRC               |   364 | 12740 |   124   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - filter(("PB_PROC_ID" IS NULL OR  EXISTS (SELECT /*+ */ 0 FROM
                  "KEMP_CONFIG" "KEMP_CONFIG" WHERE "PROC_ID"=:B1 AND
                  "FREQUENCY_TYPE"<>(-1))) AND  EXISTS (SELECT /*+ */ 0 FROM "KEMP_SRC" "A2" WHERE
                  "A2"."EQ_NAME"=:B2 AND ("A2"."PB_PROC_ID"=:B3 OR :B4 IS NULL AND "A2"."PB_PROC_ID" IS
                  NULL) AND "A2"."IS_AD_HOC"<>(-1) GROUP BY "EQ_NAME","PB_PROC_ID" HAVING
    PLAN_TABLE_OUTPUT
                  "KEMPUtility"."DTTM"(:B5,:B6)=MAX("KEMPUtility"."DTTM"("END_DT","END_TM"))))
       2 - filter("EQ_NAME"='BILAN_MAZOUT_BFOE' AND "IS_AD_HOC"<>1)
       3 - filter("PROC_ID"=:B1 AND "FREQUENCY_TYPE"<>(-1))
       4 - filter("KEMPUtility"."DTTM"(:B1,:B2)=MAX("KEMPUtility"."DTTM"("END_DT","END_TM")))
       6 - filter("A2"."EQ_NAME"=:B1 AND ("A2"."PB_PROC_ID"=:B2 OR :B3 IS NULL AND
                  "A2"."PB_PROC_ID" IS NULL) AND "A2"."IS_AD_HOC"<>(-1))
    28 rows selected.

    When I comment out the references to a1 in the subquery, the cost is drastically reduced.
    select *
      2                 from KEMP_SRC a1
      3                 where ('MOFF' is null or eq_name = 'MOFF')
      4                       and
      5                         is_ad_hoc != 1
      6                         and (pb_proc_id is null
      7                         or pb_proc_id in (select proc_id from KEMP_CONFIG where frequency_type <> -1)
      8                         )
      9                    and     KEMPUtility.DTTM(end_dt, end_tm) in      (select max(KEMPUtility.DTTM(end_dt, end_tm))
    10                                from KEMP_SRC a2
    11                                where
                                                 -- a2.eq_name = a1.eq_name
    12                            --    and (a2.pb_proc_id = a1.pb_proc_id or (a2.pb_proc_id is null and a1.pb_proc_id is null))
    13                                --and
                                                 a2.is_ad_hoc != -1 -- repeating case
    14                                group by eq_name, pb_proc_id
    15                                );
    PLAN_TABLE_OUTPUT
    Plan hash value: 3739658629
    | Id  | Operation              | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                           |    78 |  8190 |   249   (3)| 00:00:03 |
    |*  1 |  FILTER                |                           |       |       |            |          |
    |*  2 |   HASH JOIN            |                           |   203 | 21315 |   249   (3)| 00:00:03 |
    |   3 |    VIEW                | VW_NSO_1                  |     7 |    63 |   125   (3)| 00:00:02 |
    |   4 |     HASH UNIQUE        |                           |     7 |   245 |   125   (3)| 00:00:02 |
    |   5 |      HASH GROUP BY     |                           |     7 |   245 |   125   (3)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    |*  6 |       TABLE ACCESS FULL| KEMP_SRC               |  2896 |    98K|   124   (2)| 00:00:02 |
    |*  7 |    TABLE ACCESS FULL   | KEMP_SRC               |  2896 |   271K|   124   (2)| 00:00:02 |
    |*  8 |   TABLE ACCESS FULL    | KEMP_CONFIG |     1 |    26 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("PB_PROC_ID" IS NULL OR  EXISTS (SELECT /*+ */ 0 FROM
                  "KEMP_CONFIG" "KEMP_CONFIG" WHERE "PROC_ID"=:B1 AND
                  "FREQUENCY_TYPE"<>(-1)))
    PLAN_TABLE_OUTPUT
       2 - access("$nso_col_1"="KEMPUTILITY"."DTTM"("END_DT","END_TM"))
       6 - filter("A2"."EQ_NAME"='BILAN_MAZOUT_BFOE' AND "A2"."IS_AD_HOC"<>(-1))
       7 - filter("EQ_NAME"='BILAN_MAZOUT_BFOE' AND "IS_AD_HOC"<>1)
       8 - filter("PROC_ID"=:B1 AND "FREQUENCY_TYPE"<>(-1))
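
    One way to eliminate the correlated subquery altogether is an analytic (window) rewrite, so KEMP_SRC is scanned once instead of once per outer row. A sketch only; note that the posted inner query filters is_ad_hoc <> -1 while the outer query filters is_ad_hoc <> 1, so this is equivalent only if that difference is unintentional, and it would need to be verified against the real data:

        SELECT *
        FROM  (SELECT a1.*,
                      KEMPUtility.DTTM(end_dt, end_tm) AS row_dttm,
                      MAX(KEMPUtility.DTTM(end_dt, end_tm))
                          OVER (PARTITION BY eq_name, pb_proc_id) AS max_dttm
               FROM   KEMP_SRC a1
               WHERE  eq_name = 'MOFF'
               AND    is_ad_hoc <> 1
               AND    (pb_proc_id IS NULL
                       OR pb_proc_id IN (SELECT proc_id
                                         FROM   KEMP_CONFIG
                                         WHERE  frequency_type <> -1)))
        WHERE row_dttm = max_dttm;

    PARTITION BY treats NULL pb_proc_id values as one group, which matches the NULL-handling logic of the original correlation.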

  • Improve the query performance

    Hi, I need help improving query performance.
    I have a query in which I am joining 2 tables, with the join done after some aggregation. Both tables have more than 50 million records.
    There is no index created on these tables; both tables are loaded after truncation. So is it required to create an index on these tables before joining? The query status was showing 'suspended' since it was running for a long time. As a temporary measure, I just executed the query multiple times, changing the month filter each time.
    How can I improve this instead of adding a month filter and running it multiple times?

    Hi Nikkred,
    According to your description, you are joining 2 tables which contain more than 50 million records, and what you want is to improve the query performance, right?
    Query tuning is not an easy task. Basically it depends on three factors: your degree of knowledge, the query itself, and the amount of optimization required. So in your scenario, please post your detailed query so that you can get more help. Besides, you can create an index on your tables, which can improve the performance. Here are some links with performance tuning tips for your reference.
    http://www.mssqltips.com/sql-server-tip-category/9/performance-tuning/
    http://www.infoworld.com/d/data-management/7-performance-tips-faster-sql-queries-262
    Regards,
    Charlie Liao
    TechNet Community Support
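
    As a concrete illustration of the indexing advice (the table and column names here are hypothetical, since the actual query was not posted): when tables are truncated and bulk-reloaded, it is usually cheapest to load first, build the join-key indexes afterwards, and aggregate each side before the join so the join input is small:

        -- Build the join-key indexes after the bulk load completes
        CREATE CLUSTERED INDEX CIX_FactA ON dbo.FactA (JoinKey);
        CREATE CLUSTERED INDEX CIX_FactB ON dbo.FactB (JoinKey);

        -- Pre-aggregate each side, then join the much smaller results
        SELECT a.JoinKey, a.TotalA, b.TotalB
        FROM  (SELECT JoinKey, SUM(Amount) AS TotalA
               FROM dbo.FactA GROUP BY JoinKey) a
        INNER JOIN
              (SELECT JoinKey, SUM(Amount) AS TotalB
               FROM dbo.FactB GROUP BY JoinKey) b
            ON a.JoinKey = b.JoinKey;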

  • How to improve the extractor performance?

    Dear all,
    I have an ODS with active data (1,180,000 records).
    I also have an InfoCube. It has a characteristic C1 with 5 navigation attributes.
    For C1, there are 4,300,000 records in the master data.
    I have tried to load data into the InfoCube from the ODS.
    But it takes more than 12 hours. How can I improve the performance, or is there a bug in BW for extracting so many records?
    Thank you very much.
    Seven

    Dear Seven,
    Did you try deleting the InfoCube's indexes first?
    InfoCube -> Manage -> Performance -> Delete indexes
    Take a look at OSS note
    130253 - General tips on uploading transaction data to BW:
    Tip 8:
    When you load large quantities of data in InfoCubes, you should delete
    When you load large quantities of data into InfoCubes, you should delete the secondary indexes before the loading process and then recreate them afterwards if the following applies: the number of records that are loaded is big in comparison to the number of records that already exist in the (uncompressed) F fact table. For non-transactional InfoCubes, you must delete the indexes to be able to carry out parallel loading.
    Tip 9:
    When you load large quantities of data in an InfoCube, the number range buffer should be increased for the dimensions that are likely to have a high number of data sets.
    To do this, proceed as follows. Use function module RSD_CUBE_GET to find the object name of the dimension that is likely to have a high number of data sets.
    Function module settings:
    I_INFOCUBE = 'Infocube name'
    I_OBJVERS = 'A'
    I_BYPASS_BUFFER = 'X'
    The numbers for the dimensions are then contained in table 'E_T_DIME', column 'NUMBRANR'. If you enter 'BID' before this number, you get the relevant number range (for example BID0000053).
    You can use Transaction SNRO (-> ABAP/4 Workbench -> Development -> Other tools -> Number ranges) to display all number ranges for the dimensions used in BW if you enter BID*. You can use the object name that was determined beforehand to find the required number range.
    By double-clicking this line, you get to the number range maintenance. Choose Edit -> Set-up buffering -> Main memory, to define the 'No. of numbers in buffer'.
    Set this value to 500, for example. The size depends on the expected data quantity in the initial and in future (delta) uploads.

  • How to get the query name from a portal report name

    Hi Experts,
    I have been given a portal report name and asked to make changes to its queries. How do I get the query name?
    Thanks in advance,
    Nitya

    Hi Nithya,
    You can get the technical name by selecting the role in the portal where the report is enclosed. You will generally find the report in the description; identify the report you are looking for, then double-click on it. A window will pop up with a Details tab; click on Details and your query's technical name will be displayed.
    EX: zqry_w001. Then replace W with Q and search in the Analyzer or Designer.
    Regards
    Amar.

  • How to improve the query

    Guys,
    I have the following query which seems to take quite a lot of time - about 7 hours.
    It is spending a significant amount of time doing the MAX(dte), as the table has about a million records. Does anyone know a better way of writing this query?
    SELECT ABS(amount) AS amount, date_from
      FROM (SELECT SUM(
                     k$forex.rate(f$evaluation.eval_ccy,
                                  :b5,
                                  f$card.date_until,
                                  f$card.date_until)
                     * (  f$evaluation.eval_market_value
                        + DECODE(k$structure.total_includes_accrued,
                                 1, NVL(f$evaluation.eval_accrued_interests, 0),
                                 0))
                   ) AS amount,
                   f$card.date_from AS date_from
              FROM f$card, f$evaluation, k$portfolio, k$structure
             WHERE f$evaluation.STRUCTURE = k$structure.ID
               AND f$card.ID = f$evaluation.card
               AND f$card.portfolio = k$portfolio.ID
               AND f$card.fee = :b4
               AND k$portfolio.fee_profile = :b3
               AND f$card.date_from >= :b2
               AND f$card.date_until <= :b1
               AND f$evaluation.off_balance_sheet = 0
               AND f$evaluation.asset = 2
               AND f$evaluation.eval_market_value IS NOT NULL
               AND f$evaluation.dte =
                        (SELECT MAX(e2.dte)
                           FROM f$evaluation e2
                          WHERE e2.card = f$card.ID AND e2.dte <= TRUNC(:b6))
             GROUP BY f$card.date_from)
     ORDER BY date_from
    TKPROF output shows:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse     1598      0.07       0.07          0          0          0           0
    Execute   2398      1.19       1.20          0          0          0           0
    Fetch     4326   2687.62    2630.06        326   17291480          0        1929
    total     8322   2688.89    2631.34        326   17291480          0        1929
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 78  (PROSPERO)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=7485 pr=0 pw=0 time=1118615 us)
          1   VIEW  (cr=7485 pr=0 pw=0 time=1118575 us)
          1    HASH GROUP BY (cr=7485 pr=0 pw=0 time=1118566 us)
         45     HASH JOIN  (cr=7215 pr=0 pw=0 time=1093439 us)
         45      TABLE ACCESS BY INDEX ROWID F$EVALUATION (cr=7185 pr=0 pw=0 time=1085972 us)
       1263       NESTED LOOPS  (cr=7144 pr=0 pw=0 time=136812196 us)
          1        HASH JOIN  (cr=7136 pr=0 pw=0 time=1091026 us)
          1         TABLE ACCESS BY INDEX ROWID F$CARD (cr=7 pr=0 pw=0 time=112 us)
          4          NESTED LOOPS  (cr=5 pr=0 pw=0 time=200 us)
          1           TABLE ACCESS BY INDEX ROWID K$PORTFOLIO (cr=3 pr=0 pw=0 time=62 us)
          1            INDEX RANGE SCAN IDX_K$PORTFOLIO$FEE_PROFILE (cr=2 pr=0 pw=0 time=46 us)(object id 81335)
          2           INDEX RANGE SCAN IDX_F$CARD$PORTFOLIO (cr=2 pr=0 pw=0 time=11 us)(object id 81336)
       3106         VIEW  VW_SQ_1 (cr=7129 pr=0 pw=0 time=1092079 us)
       3106          HASH GROUP BY (cr=7129 pr=0 pw=0 time=1085865 us)
    806960           INDEX FAST FULL SCAN PK_F$EVALUATION (cr=7129 pr=0 pw=0 time=807031 us)(object id 53621)
       1261        INDEX RANGE SCAN IDX_F$EVALUATION$CARD (cr=8 pr=0 pw=0 time=1306 us)(object id 53622)
         15      TABLE ACCESS FULL K$STRUCTURE (cr=30 pr=0 pw=0 time=64 us)
    Thanks in advance

    Thanks for your help on this. Do you know any links where I can gather more information on this? I've now replaced the last bind with SYSDATE to make it easier, but I'm not sure if it has any impact on the plan.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2614864167
    | Id  | Operation                           | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                             |     1 |    22 |  1470   (1)| 00:00:18 |
    |   1 |  SORT ORDER BY                      |                             |     1 |    22 |  1470   (1)| 00:00:18 |
    |   2 |   VIEW                              |                             |     1 |    22 |  1470   (1)| 00:00:18 |
    |   3 |    HASH GROUP BY                    |                             |     1 |    79 |  1470   (1)| 00:00:18 |
    |*  4 |     FILTER                          |                             |       |       |            |          |
    |*  5 |      HASH JOIN                      |                             |    73 |  5767 |    54   (2)| 00:00:01 |
    |*  6 |       TABLE ACCESS BY INDEX ROWID   | F$EVALUATION                |   291 | 10476 |    38   (0)| 00:00:01 |
    |   7 |        NESTED LOOPS                 |                             |    73 |  5256 |    45   (0)| 00:00:01 |
    |   8 |         NESTED LOOPS                |                             |     1 |    36 |     7   (0)| 00:00:01 |
    |   9 |          TABLE ACCESS BY INDEX ROWID| K$PORTFOLIO                 |     2 |    16 |     2   (0)| 00:00:01 |
    |* 10 |           INDEX RANGE SCAN          | IDX_K$PORTFOLIO$FEE_PROFILE |     2 |       |     1   (0)| 00:00:01 |
    |* 11 |          TABLE ACCESS BY INDEX ROWID| F$CARD                      |     1 |    28 |     3   (0)| 00:00:01 |
    |* 12 |           INDEX RANGE SCAN          | IDX_F$CARD$PORTFOLIO        |     2 |       |     1   (0)| 00:00:01 |
    |* 13 |         INDEX RANGE SCAN            | IDX_F$EVALUATION$CARD       |   406 |       |     3   (0)| 00:00:01 |
    |  14 |       TABLE ACCESS FULL             | K$STRUCTURE                 |    15 |   105 |     8   (0)| 00:00:01 |
    |  15 |      SORT AGGREGATE                 |                             |     1 |    12 |            |          |
    |* 16 |       TABLE ACCESS BY INDEX ROWID   | F$EVALUATION                |   406 |  4872 |    39   (0)| 00:00:01 |
    |* 17 |        INDEX RANGE SCAN             | IDX_F$EVALUATION$CARD       |   406 |       |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("F$EVALUATION"."DTE"= (SELECT MAX("E2"."DTE") FROM "PROSPERO"."F$EVALUATION" "E2" WHERE
                  "E2"."CARD"=:B1 AND "E2"."DTE"<=TRUNC(SYSDATE@!)))
       5 - access("F$EVALUATION"."STRUCTURE"="K$STRUCTURE"."ID")
       6 - filter("F$EVALUATION"."OFF_BALANCE_SHEET"=0 AND "F$EVALUATION"."ASSET"=2 AND
                  "F$EVALUATION"."EVAL_MARKET_VALUE" IS NOT NULL)
      10 - access("K$PORTFOLIO"."FEE_PROFILE"=TO_NUMBER(:B3))
      11 - filter("F$CARD"."FEE"=TO_NUMBER(:B4) AND "F$CARD"."DATE_UNTIL"<=:B1 AND "F$CARD"."DATE_FROM">=:B2)
      12 - access("F$CARD"."PORTFOLIO"="K$PORTFOLIO"."ID")
      13 - access("F$CARD"."ID"="F$EVALUATION"."CARD")
      16 - filter("E2"."DTE"<=TRUNC(SYSDATE@!))
      17 - access("E2"."CARD"=:B1)
    39 rows selected.
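
    On reading material, Oracle's own Database Performance Tuning Guide covers the tracing and plan tools already used in this thread (TKPROF, DBMS_XPLAN). As for the query, one rewrite that is often tried for this "latest row per key" pattern (a sketch only, not verified against this schema) replaces the per-card MAX subquery with an analytic function, so f$evaluation is read once instead of once per outer row:

        -- Pattern: keep only each card's latest evaluation on or before the cutoff
        SELECT ev.*
          FROM (SELECT e.*,
                       MAX(e.dte) OVER (PARTITION BY e.card) AS max_dte
                  FROM f$evaluation e
                 WHERE e.dte <= TRUNC(SYSDATE)) ev
         WHERE ev.dte = ev.max_dte;

    This inline view can then replace f$evaluation in the original join. The other filters (off_balance_sheet, asset, eval_market_value) must stay outside the inline view, because the original MAX subquery does not apply them when choosing the latest date.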

  • How to improve the performance.

    Hi All,
    My question is: what all do we have to check to improve query performance? In our schema, all the tables are partitioned and all the tables have indexes. And the tables are not compressed; if I do table compression in my schema, will the query performance increase? Or do I need to check other things? Could you please share your ideas.
    Thanks
    Sree

    Hi Sree,
    all the things you mentioned - indexing, partitioning and compression - are not necessarily related to performance, and can be both advantageous and deleterious to it. Just putting them in place is no guarantee of performance.
    Indexing speeds up access but slows down DML. Partitioning is used just as much (if not more) for archival management as it is for performance. Compression is used primarily for space management but has the added benefit of possibly speeding up scan-type queries; it can, however, slow down DML. Just using these is no guarantee of performance, and they can be a hindrance.
    The normal recommendations for performance begin with database design and implementation. Design and implement your database according to sound relational principles.
    Design a normalised database to at least 3rd normal form.
    Use appropriate data types.
    Index appropriately: PKs, UKs and obvious FKs (I don't subscribe to the theory that all FKs should be indexed, but that's another argument). Err on the side of less is more.
    Use primarily simple tables, unless experience indicates other storage options such as partitioning or IOTs may be useful.
    Once the application starts to move beyond development, performance issues may start to become obvious, but the method there is to fix the problems. Simply implementing a feature such as partitioning across the whole database in the hope of avoiding future problems will probably be a waste of time at best and cause more issues than it solves at worst. This is not to say that performance is only considered in the last phases. Performance should be a consideration throughout the whole process, and that is achieved by sound and understood design, implementation and coding principles.
    Once the application moves to the testing and implementation phases, you may be surprised at how few performance issues there are, and even here there is an established performance and tuning methodology for diagnosis and resolution.
    There are whole books written and careers forged in the field of database performance, so it is beyond the scope of this rant to do anything more than give a broad-brush stroke of the overall principles. But I hope this at least points you in the right direction.
    Regards
    Andre
