What is a query?

< MODERATOR: please use help.sap.com or Google before posting a question like this.  If you have a specific question, come back and post it. >
Hi all,
can someone tell me what a query is?

Dear Sreekanth,
[SAP Query|http://help.sap.com/saphelp_nw04/helpdata/en/d2/cb3efb455611d189710000e8322d00/frameset.htm]: The SAP Query application is used to create reports not already contained in the SAP standard system. It is designed for users with little or no knowledge of the SAP programming language ABAP.
SAP Query offers users a broad range of ways to define reports and create different types of reports such as basic lists, statistics, and ranked lists.
The SAP Query comprises five components: Queries, InfoSet Query, InfoSets, User Groups and Translation/Query.
Classic reporting (the creation of lists, statistics, and ranked lists) is covered by the InfoSet Query and Queries components. The other components' functions cover the maintenance of InfoSets, the administration of user groups, and the translation of texts created in SAP Query.
All data required by a user for a report can be read from various tables.
To define a report, you first have to enter individual texts, such as titles, and select the fields and options which determine the report layout. In the WYSIWYG mode, you can edit the lists using Drag & Drop and various toolbars.
Regards,
Naveen.

Similar Messages

  • What is query optimization and how to do it.

    Hi
    What is query optimization?
    Can anyone provide a link so that I can read up on and learn the techniques?
    Thanks
    Elias Maliackal

    This is an excellent place to start: When your query takes too long ...

  • Any idea what this query is trying to do?

    Do you guys have any idea what this query (used for report generation) is for? I don't understand the WHERE clause, especially what the pipe operators (|| ' 21:00:00') are used for.
    SELECT c.course_id,
           mph.subject
    from   courses c, main_pgm_hdr mph
    where
    c.classid=mph.classid
        AND
        TO_CHAR(MPH.CLOSE_DATE, 'mm/dd/yyyy hh24:mi:ss') >= TO_CHAR(TRUNC(SYSDATE, 'MM') - 1, 'mm/dd/yyyy')
            || ' 21:00:00'
        AND TO_CHAR(MPH.CLOSE_DATE, 'mm/dd/yyyy hh24:mi:ss') <= TO_CHAR(SYSDATE, 'mm/dd/yyyy')
            || ' 20:59:59'
    Edited by: user10450365 on Jan 13, 2009 7:11 PM

    They are trying to get the data where CLOSE_DATE falls between a 21:00:00 lower bound and a 20:59:59 upper bound. But the way it's done is wrong: they have converted the dates to CHAR and are comparing the strings, which is absolutely incorrect.
    One way to do it would be
    SELECT DISTINCT (PS.CARR_ID)                                  AS CARRIER  ,
         MPH.SHPMT_NBR                                          AS TRAILER  ,
         MPH.SHPMT_NBR                                          AS SHPMT_NBR,
         TO_CHAR(MPH.CREATE_DATE_TIME, 'mm/dd/yyyy hh24:mi:ss') AS LOADED   ,
         TO_CHAR(MPH.CLOSE_DATE, 'mm/dd/yyyy hh24:mi:ss')       AS FINALIZED
       FROM PARCL_SERV PS,
         MANIF_PARCL_HDR MPH
      WHERE PS.MANIF_TYPE = MPH.MANIF_TYPE
        AND MPH.CLOSE_DATE >= TRUNC(sysdate)+(21/24)
        AND MPH.CLOSE_DATE <= TRUNC(sysdate)+((20/24)+(59/1440)+(59/86400))
    Edited by: Karthick_Arp on Jan 13, 2009 1:27 AM
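    For comparison, here is a minimal sketch of the same date-window pattern applied back to the original courses/main_pgm_hdr query (assuming the intent is "CLOSE_DATE from 21:00 on the last day of the previous month up to 20:59:59 today"). DATE values are compared directly, because strings in 'mm/dd/yyyy' format do not sort chronologically:
    SELECT c.course_id,
           mph.subject
    FROM   courses c, main_pgm_hdr mph
    WHERE  c.classid = mph.classid
    AND    mph.close_date >= TRUNC(SYSDATE, 'MM') - 1 + (21/24)
    AND    mph.close_date <= TRUNC(SYSDATE) + (20/24) + (59/1440) + (59/86400)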

  • What SQL query do I need for this?

    I need to execute a SQL query but I don't know how.
    To illustrate it, please take a look at some example data:
        ARTICLEID SOLDON    
        1         2005-12-31
        1         2005-11-31
        1         2005-10-31
        1         2005-09-31
        1         2005-08-31
        1         2005-07-31
        1         2005-06-31
        1         2005-05-31
        1         2005-04-31
        1         2005-03-31
        1         2005-02-31
        1         2005-01-31
        1         2004-12-31
        1         2004-11-31
        2         2005-12-31
        2         2005-11-31
        2         2005-10-31
        2         2005-09-31
    This is a piece of the sales data for the articles (sales history).
    Let's assume that today is the date 2005-12-31.
    Two requirements for the query:
    1. Get the sales data for the last 12 months.
    2. Get only the sales data for articles where there is sales data since at least 6 months.
    The result in my example should look like this:
        ARTICLEID SOLDON    
        1         2005-12-31
        1         2005-11-31
        1         2005-10-31
        1         2005-09-31
        1         2005-08-31
        1         2005-07-31
        1         2005-06-31
        1         2005-05-31
        1         2005-04-31
        1         2005-03-31
        1         2005-02-31
        1         2005-01-31
    What is the SQL that I need to accomplish this?

    To get all the information from the last 12 months
    you will have to use date manipulation.
    SELECT add_months(sysdate, -12) FROM dual;
    This gives you the date 12 months ago.
    So you will have to select your date between then and the current date.
    If I do this I will get this data:
        ARTICLEID SOLDON    
        1         2005-12-31
        1         2005-11-31
        1         2005-10-31
        1         2005-09-31
        1         2005-08-31
        1         2005-07-31
        1         2005-06-31
        1         2005-05-31
        1         2005-04-31
        1         2005-03-31
        1         2005-02-31
        1         2005-01-31
        2         2005-12-31
        2         2005-11-31
        2         2005-10-31
        2         2005-09-31
    But I want this data:
        ARTICLEID SOLDON    
        1         2005-12-31
        1         2005-11-31
        1         2005-10-31
        1         2005-09-31
        1         2005-08-31
        1         2005-07-31
        1         2005-06-31
        1         2005-05-31
        1         2005-04-31
        1         2005-03-31
        1         2005-02-31
        1         2005-01-31
    I am not a native English speaker. What didn't you understand in the two requirements?
    Here are my two requirements for the query:
    1. Get the sales data for the last 12 months.
    2. But get ONLY the sales data for articles where there is sales data since AT LEAST 6 months.
    The result can contain as many IDs as needed, as long as the two requirements are met. It's not a trivial SQL statement for me. Please remember that the above data is only for illustration; it is just an example.
    There should be a SQL statement for this.
    Please tell me if you don't understand my problem. I will try to explain it in a better way if I can.
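    For reference, a minimal sketch that covers both requirements, assuming the rows live in a table called SALES (a hypothetical name, since the table was never named) and reading "sales data since at least 6 months" as "rows in at least 6 distinct months of the last 12":
    SELECT articleid, soldon
    FROM  (SELECT articleid,
                  soldon,
                  -- number of distinct months with at least one sale, counted per article
                  COUNT(DISTINCT TRUNC(soldon, 'MM'))
                    OVER (PARTITION BY articleid) AS months_with_sales
           FROM   sales
           WHERE  soldon > ADD_MONTHS(SYSDATE, -12))
    WHERE  months_with_sales >= 6
    ORDER  BY articleid, soldon DESC;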

  • What is query and reporting

    Hi SAP gurus,
    what is the difference between a query and a report?
    In an interview I was asked:
    how many queries and how many reports did you create?

    Hi Naren,
    SAP Query
    An ABAP Workbench tool that enables users without knowledge of the ABAP programming language to define and execute their own reports.
    In ABAP Query, you enter texts and select fields and options to determine the structure of the reports. Fields are selected from functional areas and can be assigned a sequence by numbering.
    ABAP Query offers the following types of reports:
    Basic lists
    Statistics
    Ranked lists
    Queries are used for short-term needs, i.e. the format of the report output can be changed frequently.
    Report (CA) 
    A compilation of data for a company or group of companies in the form of a table or list.
    An evaluation is the result of executing a report. It can be either displayed on the screen or sent to a printer.
    Here the ABAP programming language is used to execute the report.
    Reports are used for the long term; they produce a standard output that can't be changed frequently.
    I hope this clears it up for you.
    Regards,
    Murali.

  • Can someone explain what a Query View is ?

    Is this some special view of the query?

    Take a look at OSS Note 634790 'Brief definition of terms - Query, Workbook, View'.
    Hope it helps!
    Bye,
    Roberto
    and in the help:
    http://help.sap.com/saphelp_nw04/helpdata/en/f1/0a555de09411d2acb90000e829fbfe/content.htm

  • What is a query block?

    Hi guys,
    I have searched a lot for "query block" on Google, but didn't find any satisfying article that explains the basic concept.
    I would like to know about the query block parameter that we specify when we use optimizer query hints in Oracle.
    How can they be used? Where can we find them?
    Any suggestions?
    Thanks and regards
    VD

    The QB_NAME hint lets you give a name to your query block, which you can then use in other hints (either in the same query block or in an outer query block).
    See a rather complex example at Jonathan Lewis's [scratchpad blog|http://jonathanlewis.wordpress.com/2007/06/25/qb_name/]. That should make it clearer (look at the create table t3 SQL for the example - don't worry at first about understanding the specific point Jonathan is trying to make). See that he has named the inner QB (the inline view) using
    /*+ qb_name(inline) */
    and then the outer query has some complex hints:
    create table t3
    as
    select
         /*+
              qb_name(main)
              merge(@inline)
              leading(@SEL$8FA4BC11 t2@main t1@inline)
              full(t2@main)
              full(t1@inline)
              use_hash(@SEL$8FA4BC11 t1@inline)
              no_swap_join_inputs(@SEL$8FA4BC11 t1@inline)
              pq_distribute(@SEL$8FA4BC11 t1@inline hash hash)
     */
     v1.n1,
     ...
    which names the outer block as main, then refers to both @main and @inline in the various hints.
    HTH
    Regards Nigel
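    As a smaller, self-contained sketch of the same idea (emp/dept are just the classic demo tables, not from the thread): name the inline view's query block with QB_NAME, then address a table inside that block from a hint in the outer query.
    SELECT /*+ full(@subq dept) */
           e.ename, d.dname
    FROM   emp e,
           (SELECT /*+ qb_name(subq) */ deptno, dname
            FROM   dept) d
    WHERE  e.deptno = d.deptno;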

  • What query or function will help me with this issue, please?

    Hi
    I have attached a JPG here which shows three tables: A, B and C.
    I need to know how I can write a function, or anything else, to work out what to put into the cells marked with a question mark.
    Please look at the attachment and advise me.
    Take your time
    http://imageshack.us/f/853/examplees.jpg/
    Edited by: 964035 on Oct 9, 2012 1:09 AM
    Edited by: 964035 on Oct 9, 2012 2:26 AM

    Below is the code to create the tables and insert some example data.
    I need to know how I can book the items from stock for each order, and how I can tell the status of each item: whether it will be available, or whether we should raise a purchase order for it.
    Create Table Color (
         Order_No Number(5) Not Null,
         Color    Varchar2(8) Not Null,
         Qty      Number(8) Not Null);
    Create Table Recipe (
         Color    Varchar2(8) Not Null,
         Item     Varchar2(8) Not Null,
         Recipe   Number(8,2) Not Null);
    Create Table Stock (
         Item         Varchar2(8) Not Null Constraint Item_PK PRIMARY KEY,
         Qty_Of_Stock Number(8) Not Null);
    Insert into Color VALUES ( 1500, 'Red',    10);
    Insert into Color VALUES ( 1500, 'yellow', 15);
    Insert into Color VALUES ( 1500, 'Green',  8);
    Insert into Color VALUES ( 1500, 'Blue',   7);
    Insert into Color VALUES ( 1600, 'yellow', 6);
    Insert into Color VALUES ( 1600, 'Green',  7);
    Insert into Color VALUES ( 1600, 'Blue',   8);
    Insert into Color VALUES ( 1700, 'Red',    10);
    Insert into Color VALUES ( 1700, 'Blue',   15);
    Insert into Color VALUES ( 1800, 'Green',  16);
    Insert into Color VALUES ( 1800, 'yellow', 9);
    Insert into Recipe VALUES ( 'Red','A',0.25);
    Insert into Recipe VALUES ( 'Red','B',0.3);
    Insert into Recipe VALUES ( 'Red','C',0.2);
    Insert into Recipe VALUES ( 'Red','D',0.25);
    Insert into Recipe VALUES ( 'Yellow','C',0.1);
    Insert into Recipe VALUES ( 'Yellow','D',0.3);
    Insert into Recipe VALUES ( 'Yellow','E',0.2);
    Insert into Recipe VALUES ( 'Yellow','F',0.4);
    Insert into Recipe VALUES ( 'Green','C',0.25);
    Insert into Recipe VALUES ( 'Green','D',0.35);
    Insert into Recipe VALUES ( 'Green','E',0.2);
    Insert into Recipe VALUES ( 'Green','G',0.1);
    Insert into Recipe VALUES ( 'Green','H',0.1);
    Insert into Recipe VALUES ( 'Blue','A',0.35);
    Insert into Recipe VALUES ( 'Blue','B',0.35);
    Insert into Recipe VALUES ( 'Blue','C',0.3);
    Insert into Stock VALUES ( 'A',50);
    Insert into Stock VALUES ( 'B',60);
    Insert into Stock VALUES ( 'C',40);
    Insert into Stock VALUES ( 'D',30);
    Insert into Stock VALUES ( 'E',20);
    Insert into Stock VALUES ( 'F',10);
    Insert into Stock VALUES ( 'H',25);
    Insert into Stock VALUES ( 'G',15);
    Take your time my dear ...
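    A hedged sketch of one way to answer this, assuming the quantity of an item required by an order is SUM(colour quantity * recipe proportion), and checking each order against current stock independently (it does not reserve stock cumulatively across orders). The join on colour is case-insensitive because the sample data mixes 'yellow' and 'Yellow'; column names are as in the CREATE TABLE statements above.
    SELECT   c.Order_No,
             r.Item,
             SUM(c.Qty * r.Recipe)  AS qty_required,
             s.Qty_Of_Stock,
             CASE
               WHEN s.Qty_Of_Stock >= SUM(c.Qty * r.Recipe) THEN 'AVAILABLE'
               ELSE 'PURCHASE REQUIRED'
             END                    AS status
    FROM     Color  c
    JOIN     Recipe r ON UPPER(r.Color) = UPPER(c.Color)
    JOIN     Stock  s ON s.Item = r.Item
    GROUP BY c.Order_No, r.Item, s.Qty_Of_Stock
    ORDER BY c.Order_No, r.Item;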

  • What is the sql query for the real time reports Resource Stats?

    Does anyone know what the query is that the real time report tool uses for the Resource Stats page?  Trying to develop a custom report that displays similar information that is updated regularly.

    Hi,
    009 wrote:
    Hi Frank,
    Just wanted your opinion on the above given SQL.
    I'm not sure I understand it.
    I added some more formatting to help me read it:
    SELECT      A
    ,     CASE WHEN LAG(A,1) OVER (ORDER BY A) IS NULL
              OR A=LAG(A,1) OVER (ORDER BY A)
              THEN LAG(B,1) OVER (ORDER BY A)
         END B_LAG
    ,     B
    FROM     (
         SELECT A, B
         FROM (
              SELECT '1' A, 'Apple' B FROM DUAL UNION ALL
              SELECT '1' A, 'cat' B FROM DUAL UNION ALL
              SELECT '2' A, 'bat' B FROM DUAL UNION ALL
              SELECT '3' A, 'rat' B FROM DUAL UNION ALL
              SELECT '2' A, 'yellow' B FROM DUAL UNION ALL
              SELECT '1' A, 'pin' B FROM DUAL
         )
         CONNECT BY PRIOR A = B
         ORDER BY A
    );
    What is the purpose of the CONNECT BY in what you have so far?
    Is the idea that you will add another CONNECT BY query, using
    CONNECT BY  b_lag  = PRIOR b?
    Do you think that will be better than using ROW_NUMBER?
    Will it work if (a, b) is not unique?
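    For reference, LAG on its own (no CONNECT BY) against the same sample rows already gives each row the previous B value in (A, B) order; whether that is sufficient depends on what the CONNECT BY was meant to add, which is the question above.
    WITH t AS (
      SELECT '1' a, 'Apple' b  FROM dual UNION ALL
      SELECT '1' a, 'cat' b    FROM dual UNION ALL
      SELECT '2' a, 'bat' b    FROM dual UNION ALL
      SELECT '3' a, 'rat' b    FROM dual UNION ALL
      SELECT '2' a, 'yellow' b FROM dual UNION ALL
      SELECT '1' a, 'pin' b    FROM dual
    )
    SELECT a,
           LAG(b) OVER (ORDER BY a, b) AS b_lag,
           b
    FROM   t
    ORDER  BY a, b;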

  • What is this query doing???

    for(rset = conn.executeQ(OurQuery); rset.next(); items.put(rset.getString(1), rset.getString(2)));
    rset = ResultSet
    conn= Connection
    Items --> private Hashtable items;
    here is OurQuery
    OurQuery = " \t \tselect LABEL_CODE ,NVL(INITCAP(DECODE(" + lang + ",1,DESC_EN_TX,DESC_AR_TX)),'N... AVAIL') " + "\t from FORM_USER_LABEL \t\t\t\t\t\t\t" + " \twhere UPPER(FORM_ID) \t\t= UPPER('" + v_frm + "')" + " \tAND USER_ID \t\t= " + v_user;
    here is the executeQ method
    public synchronized ResultSet executeQ(String s)
    throws SQLException
    {
        Object obj = null;
        ResultSet resultset = null;
        try
        {
            if (conn.isClosed())
                ConnectDB();
            stmt = conn.createStatement();
            resultset = stmt.executeQuery(s);
        }
        catch (Exception exception)
        {
            System.out.println(exception);
        }
        return resultset;
    }

    I can only say what the query does.
    First I will split this query up in an understandable way.
    1. select LABEL_CODE , // This selects the label code
    2. NVL(INITCAP(DECODE(" + lang + ",1,DESC_EN_TX,DESC_AR_TX)),'N... AVAIL') // This part of the query uses three functions: NVL, INITCAP and DECODE.
    NVL replaces a NULL returned by the query with a user-provided value, i.e. 'N... AVAIL'.
    INITCAP converts the starting letter of each word to upper case.
    DECODE works like an if condition, i.e.
    DECODE(expression, search_value, result_if_match, default)
    In your example, where decode works like
    if(lang == 1)
    DESC_EN_TX
    else
    DESC_AR_TX
    and the remaining part of the query converts values to upper case and compares them in the WHERE condition.
    If you still have doubts, first check how DECODE works in an Oracle tutorial, then look at the query again.
    It is not necessary to keep the "\t" characters inside the query string.
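    A small standalone illustration of how the three functions combine (the literal values are made up):
    SELECT NVL(INITCAP(DECODE(1, 1, 'english text', 'arabic text')), 'N/A') AS lang_is_1,
           NVL(INITCAP(DECODE(2, 1, 'english text', NULL)), 'N/A')          AS lang_is_other
    FROM   dual;
    -- lang_is_1 returns 'English Text' (DECODE matched 1, INITCAP capitalised it);
    -- lang_is_other returns 'N/A' (DECODE returned NULL, so NVL supplied the default).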

  • Trying to create a query that shows Sales Order/Invoice Totals as well as Paid/Outstanding/Available Down Payments

    Currently working on SAP B1 v8.82
    I'm looking to generate a query that will give an overall report for a given customer that shows Sales Order No, Invoice No, Sales Order Total, Invoice Total, Amount Paid on Invoice, Amount Remaining on Invoice, Down Payments Available, Open on Sales Order.
    I'm not sure what the best way to select the columns in bold above.  Invoice Total should be self-explanatory.  Amount Paid should be any down payments or applied payments on the invoice.  The balance due on the invoice (which seems to be T0.DocTotal if I'm not mistaken) should = 'Invoice Total' - 'Amount Paid on Invoice'. In the Down Payments Available column I want the total amount of money on the account or on down payments that aren't tied to a Sales Order.  If a client overpaid in the past for instance and there's a credit on their account, then it should contribute to this sum.  Open on Sales Order should be pretty easy.  I guess it's just the sum of everything that is still open on the Sales Order.  I'm just not sure what the best way to sum all the un-delivered freight, tax, and line items is.  Here's what my query looks like so far.
    SELECT DISTINCT T4.[DocNum] [Sales Order No],
    T0.DocNum [Invoice No],
    T4.DocTotal [Sales Order Total],
    T0.DocTotal [Amount Outstanding]
    FROM OINV T0
    INNER JOIN INV1 T1 ON T0.DocEntry = T1.DocEntry
    INNER JOIN DLN1 T2 ON T1.BaseEntry = T2.DocEntry AND T1.BaseLine = T2.LineNum
    INNER JOIN RDR1 T3 ON T2.BaseEntry = T3.DocEntry AND T2.BaseLine = T3.LineNum
    INNER JOIN ORDR T4 ON T3.DocEntry = T4.DocEntry
    INNER JOIN OSLP T5 ON T4.SlpCode = T5.SlpCode
    WHERE T0.CardName Like '%%[%0]%%'
    GROUP BY T4.DocNum, T0.DocNum, T0.DocTotal, T4.DocTotal
    I tried doing a little searching around for queries similar to what I need, but I couldn't find exactly what I was looking for, and I'm very unfamiliar with the OJDT, JDT1, and ITR1 tables, which I think might be important to finding unapplied payments...

    Thanks.  There's a few problems though.
    1)  It seems that OINV DocTotal != Balance Due.  I'm seeing a number of invoices where there was a balance due, but we applied additional money (either we took another incoming payment and applied it or applied money from the account balance, etc.) and yet it still shows a total.
    2)  It's pulling incoming payments from different customers.  I think this is because the table was joined based on "RCT2 T4 on T4.[DocEntry]  =  T3.[DocNum] and T4.[InvoiceId] = T2.[LineNum]"  In one example I have 2 incoming payments 446 and 614.  Both have the DocEntry 542, but one relates to A/R Invoice 542 (for a different client) while the other relates to Down Payment Invoice 542.  *I was able to fix this by adding WHERE T5.CardCode = [%0]*
    3)  I'm going to work with this a little bit and see if I can alter it to make it work for me.  Basically this query falls a little short on the following:
    -  Doesn't include incoming payments that aren't linked to a down payment invoice.
    -  Does not give the Invoice Total (I'd like to know how much of the SO was invoiced.  DocTotal seems to give me Amount Invoiced - Down Payments.  I'm not sure the best way to get this number.  Maybe I could do the sum of each line * tax + freight)
    -  Does not give the outstanding amount on an invoice.  The ARtotal [DocTotal] column gives me how much was owed when the invoice was created, but it doesn't tell me what is currently owed.
    -  Lastly it may complicate the query too much and could be left off, but it would be nice to see if they have any money from credits or incoming payments that has not been applied.  Perhaps this would be easily accomplished by simply pulling in their account balance.
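    On the "outstanding amount on an invoice" point, a hedged sketch (SAP B1 8.82, SQL Server syntax): the OINV header carries a PaidToDate column, so the amount remaining can be derived per invoice without walking the payment tables; treat the column name as an assumption and verify it against your own OINV before relying on it.
    SELECT T0.DocNum                   AS [Invoice No],
           T0.DocTotal                 AS [Invoice Total],
           T0.PaidToDate               AS [Amount Paid on Invoice],   -- assumed column, verify in OINV
           T0.DocTotal - T0.PaidToDate AS [Amount Remaining on Invoice]
    FROM   OINV T0
    WHERE  T0.CardCode = [%0]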

  • Please Help: query matching string value in WHERE clause

    Hi Everyone,
    I am trying to query customers by matching first and last name pairs, but I am getting every customer that has one of the first names and one of the last names. Here is what my query looks like:
    SELECT * FROM CUSTOMERS WHERE
    CUSTOMER_FNAME IN
    ('JOHN', 'MIKE')
    AND CUSTOMER_LNAME IN
    ('DOE', 'MILLER');
    I am trying to get the customers named JOHN DOE and MIKE MILLER, but I get all names that have those first/last names, not exact matches. Is there a way I can get an exact match?
    Thanks,
    SM

    Frank Kulash wrote:
    Hi,
    chris227 wrote:
    SMCR wrote:
    Thanks everyone for your help!
    There are two correct answers, I am using the following:
    I just see one, it's Frank's.
    If fname never contains a '~' (or if lname never contains a '~') then
    {code}
    where fname||'~'||lname in ('JOHN~DOE', 'MIKE~MILLER');
    {code}
    will work.
    Yes, I realized that. For the purpose I am using it for, I will not have any issue with '~'.
    I did, however, change it up a little; here is how it looks:
    {code}
    SELECT CUSTOMER_ID, CUSTOMER_FNAME, CUSTOMER_LNAME, DATE_OF_BIRTH
    FROM CUSTOMERS
    WHERE (CUSTOMER_FNAME||'~'||CUSTOMER_LNAME, DATE_OF_BIRTH) IN
    (('JOHN~DOE', TO_DATE('20130101', 'YYYYMMDD')),
    ('MIKE~MILLER', TO_DATE('20130101', 'YYYYMMDD')))
    {code}
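    An alternative sketch that avoids the '~' concatenation trick entirely: Oracle allows multi-column IN lists, so the name/date pairs can be matched directly (column names as in the posted query).
    {code}
    SELECT CUSTOMER_ID, CUSTOMER_FNAME, CUSTOMER_LNAME, DATE_OF_BIRTH
    FROM CUSTOMERS
    WHERE (CUSTOMER_FNAME, CUSTOMER_LNAME, DATE_OF_BIRTH) IN
    (('JOHN', 'DOE', TO_DATE('20130101', 'YYYYMMDD')),
     ('MIKE', 'MILLER', TO_DATE('20130101', 'YYYYMMDD')))
    {code}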

  • Lengthy query

    Hi,
    when I run the query below it takes more than an hour, but when I use an inner join instead of the left outer join it returns immediately.
    Can someone explain what this query does and why it is taking so much time?
    thanks
    SELECT
    TOTAL.product_key ,
    TOTAL.msrepl_tran_version ,
    to_timestamp(TOTAL.time_stamp,'yyyy-mm-dd hh24:mi:ss.FF') AS time_stamp ,
    TOTAL.aux1_changed_on_dt ,
    TOTAL.aux2_changed_on_dt ,
    TOTAL.aux3_changed_on_dt ,
    TOTAL.aux4_changed_on_dt ,
    TOTAL.UPD_INS
    FROM
    (SELECT stg_excp.product_key ,
    stg_excp.msrepl_tran_version ,
    stg_excp.aux1_changed_on_dt ,
    stg_excp.aux2_changed_on_dt ,
    stg_excp.aux3_changed_on_dt ,
    stg_excp.aux4_changed_on_dt ,
    CASE
    WHEN ods.product_key IS NULL
    AND ods.timestamp IS NULL
    THEN 'INS'
    ELSE 'UPD'
    END AS UPD_INS
    FROM (SELECT
    /*+ use_hash(rs,prodhdr) */
    rs.product_key ,
    rs.msrepl_tran_version ,
    rs.aux1_changed_on_dt ,
    rs.aux2_changed_on_dt ,
    rs.aux3_changed_on_dt ,
    rs.aux4_changed_on_dt
    FROM (SELECT stg.product_key ,
    msrepl_tran_version ,
    stg.time_stamp ,
    aux1_changed_on_dt ,
    aux2_changed_on_dt ,
    aux3_changed_on_dt ,
    aux4_changed_on_dt ,
    excpt.insert_date insert_date ,
    row_number() over(PARTITION BY stg.product_key,stg.time_stamp ORDER BY excpt.insert_date DESC) rnum
    FROM stg_lkp_var_prodauditdata stg
    LEFT OUTER JOIN
    (SELECT SUBSTR(primary_key_value, instr(primary_key_value, ' |@|PRODUCT_KEY', 1)
    + LENGTH(' |@|PRODUCT_KEY :'), (instr(primary_key_value, ' |@|',
    instr(primary_key_value, ' |@|PRODUCT_KEY', 1)
    + LENGTH(' |@|PRODUCT_KEY :') + 1))
    -(instr(primary_key_value, ' |@|PRODUCT_KEY', 1)
    + LENGTH(' |@|PRODUCT_KEY :'))) AS product_key,
    SUBSTR(primary_key_value, instr(primary_key_value, ' |@|TIME_STAMP', 1)
    + LENGTH(' |@|TIME_STAMP :'), (instr(primary_key_value, ' |@|',
    instr(primary_key_value, ' |@|TIME_STAMP', 1)
    + LENGTH(' |@|TIME_STAMP :') + 1))
    -(instr(primary_key_value, ' |@|TIME_STAMP', 1)
    + LENGTH(' |@|TIME_STAMP :'))) AS time_stamp ,
    insert_date
    FROM mes_ods.ods_exception_table
    WHERE TABLE_NAME = UPPER('ods_lkp_var_prodauditdata')
    AND processed = 'NO'
    ) excpt
    ON NVL(excpt.product_key, '`@') = stg.product_key
    AND SUBSTR(TO_CHAR(to_timestamp(excpt.time_stamp,'yyyy-mm-dd hh24:mi:ss.FF'),'mmddyyyyhh24missff'),1, 17) =
    SUBSTR(TO_CHAR(to_timestamp(stg.time_stamp, 'yyyy-mm-dd hh24:mi:ss.FF'), 'mmddyyyyhh24missff'), 1, 17)
    WHERE (
    changed_on_dt > to_date('05/25/2013 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
    OR
    excpt.product_key IS NOT NULL
    AND excpt.time_stamp IS NOT NULL
    )
    ) rs
    INNER JOIN mes_ods.ods_lkp_var_prodhdr prodhdr
    ON rs.product_key = prodhdr.product_key
    WHERE rs.rnum = 1) stg_excp
    LEFT OUTER JOIN (select * from mes_ods.ods_lkp_var_prodauditdata
    where timestamp between trunc(last_day(add_months(sysdate, -7))+1) and systimestamp) ods
    ON stg_excp.product_key = ods.product_key
    AND SUBSTR(to_char(to_timestamp(stg_excp.time_stamp, 'yyyy-mm-dd hh24:mi:ss.FF'), 'mmddyyyyhh24missff'), 1,17)
    = SUBSTR(to_char(ods.timestamp,'mmddyyyyhh24missff'), 1,17)
    )TOTAL
    thanks
    Edited by: 896398 on May 29, 2013 3:38 AM

    896398 wrote:
    Hi,
    thanks for the reply
    I have identified the part of the query which is taking a lot of time, but what has to be done here in this case? Help me with the steps.
    LEFT OUTER JOIN
    (SELECT SUBSTR(primary_key_value, instr(primary_key_value, ' |@|PRODUCT_KEY', 1)
    + LENGTH(' |@|PRODUCT_KEY :'), (instr(primary_key_value, ' |@|',
    instr(primary_key_value, ' |@|PRODUCT_KEY', 1)
    + LENGTH(' |@|PRODUCT_KEY :') + 1))
    -(instr(primary_key_value, ' |@|PRODUCT_KEY', 1)
    + LENGTH(' |@|PRODUCT_KEY :'))) AS product_key,
    SUBSTR(primary_key_value, instr(primary_key_value, ' |@|TIME_STAMP', 1)
    + LENGTH(' |@|TIME_STAMP :'), (instr(primary_key_value, ' |@|',
    instr(primary_key_value, ' |@|TIME_STAMP', 1)
    + LENGTH(' |@|TIME_STAMP :') + 1))
    -(instr(primary_key_value, ' |@|TIME_STAMP', 1)
    + LENGTH(' |@|TIME_STAMP :'))) AS time_stamp ,
    insert_date
    FROM mes_ods.ods_exception_table
    WHERE TABLE_NAME = UPPER('ods_lkp_var_prodauditdata')
    AND processed = 'NO'
    In his reply he quoted
    How to Post a SQL statement tuning request
    HOW TO: Post a SQL statement tuning request - template posting
    which you obviously ignored. You haven't done any of the things mentioned in that post.
    If you can't be bothered to provide the requested information, how do you expect us to help?
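    The tuning-request template linked above asks, among other things, for the execution plan; a minimal way to capture one in SQL*Plus or SQL Developer is:
    EXPLAIN PLAN FOR
    SELECT * FROM dual;   -- replace with the slow statement
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);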

  • Where would you check the performance of WebI? The query is taking a long time to run

    Hello All,
    In the BEx query world running on the portal, you were able to go to SM50 and check what the query is doing and where it is taking a long time, or at least you were able to see the processes running.
    Where would you check the running processes when you are running a WebI query? We are trying to write a WebI report on a universe which is created on a BEx query. The report is very simple, just two fields and a mandatory variable which comes from the BEx query (we have defined the variable in the BEx query). When we execute the query it takes a long time, just spinning, and I do not get any data back. On the same query, before even hitting the Run Query button, when I try to put an object in the query filters and set the filter as "In list" with value(s) from the list, it takes forever to set that filter.
    Can we go to the CMC or the BW backend and check anywhere? We are using SAP authentication; I can see the number of sessions in the CMC, but that is it.
    Thanks for help in advance.

    Thank you both for the replies.
    How would I get the MDX that is generated by the query? I remember there is a note for starting the MDX logging. Can you please let me know how I would get the MDX statement? Thanks.
    Gowtham - what is the optimal array fetch size that needs to be set for the universes? Can you explain a bit more about array fetch size?
    All our universes are on BEx queries designed in SAP BW; in that case, do the array fetch size and array bind size matter? I had read in one of the universe designer manuals for OLAP universes that the Array fetch size, Array bind size, and Login timeout parameters are not used for OLAP connections.
    Thanks again for replies.

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes; this is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes; this is urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..
