Query using connect by prior

Hi guys,
We have the table below with three columns:
Id             Value                 Parentid
1234         6
1235         7                      1234
1239         8                      1235
1338         9                      1337
1237         8                      1239
If I give 1237 (or any other input) in the WHERE condition, the query should return 6,7,8,8:
it should find the parentid of the row, get that row's value, and keep traversing until the parentid is null.
Please help me in writing this query.
Edited by: Depakjan on Apr 20, 2011 10:51 AM
Edited by: Depakjan on Apr 20, 2011 10:52 AM

with Table1 as (
select 1234 Id , 6 value, null Parentid from dual union all
select 1235, 7, 1234 from dual union all
select 1239, 8, 1235 from dual union all
select 1338, 9, 1337 from dual union all
select 1237, 8, 1239 from dual
)
select value
from table1
start with id = 1237
connect by prior parentid = id
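To return the values in root-to-leaf order (6,7,8,8) rather than the leaf-to-root order the walk naturally produces, one option is to sort on LEVEL; a minimal sketch against the same sample data:

with Table1 as (
select 1234 Id , 6 value, null Parentid from dual union all
select 1235, 7, 1234 from dual union all
select 1239, 8, 1235 from dual union all
select 1338, 9, 1337 from dual union all
select 1237, 8, 1239 from dual
)
select value
from table1
start with id = 1237
connect by prior parentid = id
order by level desc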

Similar Messages

  • Query tuning for query using connect by prior

    I have written the following query to fetch the data. It is written this way because multiple rows make up one record, and we need to bring that record onto one row.
    For one CAT (commented out below), this query takes around 4 minutes and fetches 6,900 records, but when it runs for 3 CATs it takes 17 minutes.
    I want to tune this as it has to run for 350 CAT values.
    It is doing a full table scan (FTS) on the main table. I tried different hints like PARALLEL and APPEND (in the insert) but nothing worked.
    The cost of the query is 51.
    Any help/suggestions will be appreciated.
    SELECT DISTINCT MIN(SEQ) SEQ,
           PT, APP, IT, LR, QT, CD, M_A_FLAG,
           STRAGG(REM) REM, -- aggregates the data from different rows onto the parent row
           CAT
    FROM  (WITH R AS (SELECT CAT, SEQ, PT, M_A_FLAG, IT, LR, QT, CD, REM, APP
                      FROM   table1
                      WHERE  REC = '36' AND M_A_FLAG = '1'
                      --AND CAT = '11113'
                     )
           SELECT CAT, SEQ,
                  CONNECT_BY_ROOT PT  AS PT,
                  CONNECT_BY_ROOT APP AS APPL,
                  M_A_FLAG,
                  CONNECT_BY_ROOT IT  AS IT,
                  CONNECT_BY_ROOT LR  AS LR,
                  CONNECT_BY_ROOT QT  AS QT,
                  CONNECT_BY_ROOT CD  AS CD,
                  REM
           FROM   R A
           START WITH PT IS NOT NULL
           CONNECT BY PRIOR SEQ + 1 = SEQ
                  AND PRIOR CAT = CAT
                  AND PT IS NULL)
    GROUP BY PT, APP, IT, LR, QT, CD, M_A_FLAG, CAT
    ORDER BY SEQ;
    Thanks.
    Edited by: user2544469 on Feb 11, 2011 1:12 AM

    The following threads detail the approach and information required.
    Please gather relevant info and post back.
    How to post a SQL tuning request - HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long - When your query takes too long ...
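    While you gather that information, one alternative worth testing (a sketch only; it assumes each logical record is a run of consecutive SEQ values within a CAT that starts at a row where PT IS NOT NULL, which is what the CONNECT BY expresses) is to replace the tree walk with analytic functions:
    WITH r AS (
      SELECT cat, seq, pt, app, it, lr, qt, cd, m_a_flag, rem
      FROM   table1
      WHERE  rec = '36'
      AND    m_a_flag = '1'
    ),
    g AS (
      SELECT r.*,
             SUM(CASE WHEN pt IS NOT NULL THEN 1 ELSE 0 END)
               OVER (PARTITION BY cat ORDER BY seq) AS grp  -- running count of record-start rows = group number
      FROM   r
    )
    SELECT MIN(seq)    AS seq,
           MAX(pt)     AS pt,
           MAX(app)    AS app,
           MAX(it)     AS it,
           MAX(lr)     AS lr,
           MAX(qt)     AS qt,
           MAX(cd)     AS cd,
           m_a_flag,
           STRAGG(rem) AS rem,   -- same custom aggregate as in the original query
           cat
    FROM   g
    GROUP  BY cat, grp, m_a_flag
    ORDER  BY seq;
    Whether this is faster depends on your data and indexes, so compare the execution plans and timings before drawing conclusions.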

  • Hierarchical query without using CONNECT BY PRIOR

    Hi Guys,
    I am supporting a product that is enterprise based and we are only allowed to write queries that are ANSI standard.
    I have a requirement: given a child, I need to know all of its parents up to the highest level.
    My table structure is like below
    Table_name :  Org_unit
    Columns are
    Org_unit_id name desc  parent_org_unit_id
    I will pass the org_unit_id and want to list all the parents of the child org_unit_id, and it has to be accomplished without using CONNECT BY PRIOR.
    Please suggest some ideas and approaches.
    I am using Oracle 11g.

    Hi,
    960593 wrote:
    Hi Guys,
    I am supporting a product that is enterprise based and we are only allowed to write queries that are ANSI standard.
    I have a requirement: given a child, I need to know all of its parents up to the highest level.
    My table structure is like below
    Table_name :  Org_unit
    Columns are
    Org_unit_id name desc  parent_org_unit_id
    I will pass the org_unit_id and want to list all the parents of the child org_unit_id, and it has to be accomplished without using CONNECT BY PRIOR.
    Please suggest some ideas and approaches.
    I am using Oracle 11g.
    The data model you posted (org_unit_id as primary key, parent_org_unit_id as foreign key to the same table for the parent, when there is a parent) is called the Adjacency Model because it keeps track of which nodes are adjacent (or next to) each other.
    I'm familiar with 2 other ways to model hierarchies: the Nested Sets Model, and what I call the Lineage Model.  I'll show how to find a given node's ancestors (in hierarchical order) in each model.  Neither the Nested Sets nor the Lineage Model requires CONNECT BY or recursive WITH clauses to work.
    The following table contains all the columns necessary for using each of these 3 models:
         EMPNO        MGR ENAME      LINEAGE                    NS_LOW NS_HIGH
          7839            KING       /7839/                          1      28
          7698       7839 BLAKE      /7839/7698/                     2      13
          7499       7698 ALLEN      /7839/7698/7499/                3       4
          7900       7698 JAMES      /7839/7698/7900/                5       6
          7654       7698 MARTIN     /7839/7698/7654/                7       8
          7844       7698 TURNER     /7839/7698/7844/                9      10
          7521       7698 WARD       /7839/7698/7521/               11      12
          7782       7839 CLARK      /7839/7782/                    14      17
          7934       7782 MILLER     /7839/7782/7934/               15      16
          7566       7839 JONES      /7839/7566/                    18      27
          7902       7566 FORD       /7839/7566/7902/               19      22
          7369       7902 SMITH      /7839/7566/7902/7369/          20      21
          7788       7566 SCOTT      /7839/7566/7788/               23      26
          7876       7788 ADAMS      /7839/7566/7788/7876/          24      25
    The Lineage Model keeps track of all of a given node's ancestors, so if all you need are the primary keys of a given node's ancestors, it's really trivial: they're all in the lineage column.  If you want to find more information about those ancestors, then you can do a self-join, like this:
    SELECT    a.empno, a.ename, a.lineage
    FROM      emp  a
    JOIN      emp  d   ON  d.lineage  LIKE '%/' || a.empno || '/%'
    WHERE     d.ename  IN ('ADAMS')
    ORDER BY  d.ename
    ,         a.lineage
    Output:
         EMPNO ENAME      LINEAGE
          7839 KING       /7839/
          7566 JONES      /7839/7566/
          7788 SCOTT      /7839/7566/7788/
          7876 ADAMS      /7839/7566/7788/7876/
    The Nested Sets model is harder to understand.
    Imagine everyone in the hierarchy standing on a wide staircase, as if for a group picture; everyone on the same level standing on the same step.  Everyone is holding up an umbrella that is wide enough to cover himself and all the people who are under him in the hierarchy.  The people with no underlings have small umbrellas, denoted like this "<-SMITH->", and peole that manage others have bigger umbrellas, like this: <-------- JONES -------->.  So the group picture might look like this:
    <-------------------------------------------- KING --------------------------------------------->
      <---------------------- BLAKE --------------------->  <-- CLARK -->  <-------- JONES -------->
       <-ALLEN-> <-JAMES-> <-MARTIN-> <-TURNER-> <-WARD->    <-MILLER->     <-- FORD--> <--SCOTT-->
                                                                             <-SMITH->   <-ADAMS->
    Each parent's umbrella covers all of his descendants (children, grandchildren, etc.), and nobody else.
    Now draw vertical lines from the edges of each umbrella downwards, and number those lines from left to right:
    <-------------------------------------------- KING ------------------------------------------------>
    |                                                                                                  |
    | <---------------------- BLAKE --------------------->  <-- CLARK ->   <-------- JONES ----------> |
    | |                                                  |  |          |   |                         | |
    | |<-ALLEN-> <-JAMES-> <-MARTIN-> <-TURNER-> <-WARD->|  |<-MILLER->|   |<-- FORD--> <--SCOTT---> | |
    | ||       | |       | |        | |        | |      ||  ||        ||   ||         | |          | | |
    | ||       | |       | |        | |        | |      ||  ||        ||   ||<-SMITH->| |<-ADAMS-> | | |
    | ||       | |       | |        | |        | |      ||  ||        ||   |||       || ||       | | | |
                                               1 1      11  11        11   112       22 22       2 2 2 2
    1 23       4 5       6 7        8 9        0 1      23  45        67   890       12 34       5 6 7 8
    The numbers corresponding to the left and right edges of each umbrella are what I called ns_low and ns_high in the table.  Each employee's ns_low and ns_high numbers will be inside the range of each of his ancestors' ns_low and ns_high.
    To find the ancestors of a given node in the nested set model you can do this:
    SELECT   a.empno, a.ename, a.ns_low, a.ns_high
    FROM   emp  a
    JOIN      emp  d  ON  d.ns_low  BETWEEN  a.ns_low
                                    AND      a.ns_high
    WHERE     d.ename  IN ('ADAMS')
    ORDER BY  d.ename
    ,         a.ns_low
    Both the Lineage and Nested Sets models are good for tree structures only, whereas the Adjacency Model can handle other kinds of graphs, including graphs with loops.
    Both the Lineage and Nested Sets models can be very difficult to maintain if the hierarchy is re-organized.
    I'd like to repeat some of the warnings that others have made.  You could write separate code for each system (Oracle, SQL Server, ...) that you want to run in, and the code for each system will be more or less different.  You're looking for some code that will get the same results in all systems.  That code will be more complicated than the most complicated of the single-system versions, and it will be slower than the slowest of the single-system versions.  You're giving up a lot of functionality, and probably also ease of maintenance, by writing code that has to work on multiple systems without changes.
    Here's how I created the emp table shown above from scott.emp:
    CREATE TABLE    emp
    AS
    WITH    connect_by_results  AS
    (
        SELECT  empno, mgr, ename
        ,       LEVEL   AS lvl
        ,       ROWNUM  AS r_num
        ,       SYS_CONNECT_BY_PATH (empno, '/') || '/'   AS lineage
        FROM    scott.emp
        START WITH mgr IS NULL
        CONNECT BY mgr = PRIOR empno
        ORDER SIBLINGS BY ename
    )
    SELECT  empno, mgr, ename, lineage
    ,       (2 * r_num) - lvl            AS ns_low
    ,       (2 * r_num) + ( 2 * (
                                    SELECT  COUNT (*)
                                    FROM    connect_by_results
                                    WHERE   lineage  LIKE '%/' || cbr.empno || '/%'
                                )
                          )
                        - (lvl + 1)      AS ns_high
    FROM    connect_by_results   cbr
    ;
    This relies on the fact that the hierarchy in scott.emp has only one root (that is, a node with no parent).  Computing the Nested Sets numbers is a little more complicated if you can have multiple roots.
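    Finally, since you mentioned Oracle 11g: if your version is 11.2 or later, the ANSI-standard recursive WITH clause is also available and works directly on your adjacency-model table. A minimal sketch (column names taken from your post; :p_org_unit_id is a bind variable I made up for the child you pass in):
    WITH ancestors (org_unit_id, name, parent_org_unit_id, lvl) AS (
      SELECT org_unit_id, name, parent_org_unit_id, 1
      FROM   org_unit
      WHERE  org_unit_id = :p_org_unit_id        -- start at the child
      UNION ALL
      SELECT o.org_unit_id, o.name, o.parent_org_unit_id, a.lvl + 1
      FROM   org_unit o
      JOIN   ancestors a ON o.org_unit_id = a.parent_org_unit_id   -- step up to the parent
    )
    SELECT org_unit_id, name, parent_org_unit_id
    FROM   ancestors
    ORDER BY lvl;                                -- child first, root last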

  • Crosstab query using pure SQL only

    Hi all,
    Found a lot of threads on crosstab, but none seems to address what I need. I need to perform a crosstab query using pure SQL only, and the number of columns is dynamic. From a query, I obtained the table below:
    Name Date Amount
    Alex 2005-06-10 1000
    Alex 2005-06-20 1000
    Alex 2005-07-10 1000
    Alex 2005-07-20 1000
    Alex 2005-08-10 1000
    Alex 2005-08-20 1000
    John 2005-06-10 2000
    John 2005-06-20 2000
    John 2005-07-10 2000
    John 2005-07-20 2000
    John 2005-08-10 2000
    John 2005-08-20 2000
    And I need to transform it into:
    Name 06-2005 07-2005 08-2005
    Alex 2000 2000 2000
    John 4000 4000 4000
    The reason the columns are dynamic is that there will be a limit on the date range to select the data from. I'd have a lower and upper bound date, say June-2005 to August-2005, which explains how I got the data in the table above.
    Please advise.
    Thanks!

    Hi,
    I couldn't resist the intellectual challenge of a pure SQL solution for a pivot table with a dynamic number of columns. As Laurent pointed out, a SQL query can only have a fixed number of columns. You can fake a dynamic number of columns, though, by selecting a single column containing data at fixed positions.
    If it were me, I'd use a PL/SQL solution, but if you must have a pure SQL solution, here is an admittedly gruesome one. It shows the sum of all EMP salaries per department over a date range defined by start and end date parameters (which I've hardcoded for simplicity). Perhaps some of the techniques demonstrated may help you in your situation.
    set echo off
    set heading on
    set linesize 100
    select version from v$instance ;
    set heading off
    column sort_order noprint
    column sal_sums format a80
    select -- header row
      1        as sort_order,
      'DEPTNO' as DEPTNO ,
      sys_connect_by_path
        ( rpad
            ( to_char(month_column),
              10
            ),
          ' | '
        ) as sal_sums
    from
      ( select
          add_months( first_month, level - 1 ) as month_column
        from
          ( select
              date '1981-01-01' as first_month,
              date '1981-03-01' as last_month,
              months_between( date '1981-03-01', date '1981-01-01' ) + 1 total_months
            from dual
          )
        connect by level < total_months + 1
      ) months
    where
      connect_by_isleaf = 1
    connect by
      month_column = add_months( prior month_column, 1 )
    start with
      month_column = date '1981-01-01'
    union all
    select -- data rows
      2 as sort_order,
      deptno,
      sys_connect_by_path( sum_sal, ' | ' ) sal_sums
    from
      ( select
          dept_months.deptno,
          dept_months.month_column,
          rpad( to_char( nvl( sum( emp.sal ), 0 ) ), 10 ) sum_sal
        from
          ( select
              dept.deptno,
              reporting_months.month_column
            from
              dept,
              ( select
                  add_months( first_month, level - 1 ) as month_column
                from
                  ( select
                      date '1981-01-01' as first_month,
                      date '1981-03-01' as last_month,
                      months_between( date '1981-03-01', date '1981-01-01' ) + 1 total_months
                    from
                      dual
                  )
                connect by level < total_months + 1
              ) reporting_months
          ) dept_months,
          emp
        where
          dept_months.deptno = emp.deptno (+) and
          dept_months.month_column = trunc( emp.hiredate (+), 'MONTH' )
        group by
          dept_months.deptno,
          dept_months.month_column
      ) dept_months_sal
    where
      month_column = date '1981-03-01'
    connect by
      deptno = prior deptno and
      month_column = add_months( prior month_column, 1 )
    start with
      month_column = date '1981-01-01'
    order by
      1, 2
    ;
    VERSION
    10.1.0.3.0
    DEPTNO      | 81-01-01   | 81-02-01   | 81-03-01
    10          | 0          | 0          | 0
    20          | 0          | 0          | 0
    30          | 0          | 2850       | 0
    40          | 0          | 0          | 0
    Now, if we substitute '1981-03-01' with '1981-06-01', we see 7 columns instead of 4
    DEPTNO      | 81-01-01   | 81-02-01   | 81-03-01   | 81-04-01   | 81-05-01   | 81-06-01
    10          | 0          | 0          | 0          | 0          | 0          | 2450
    20          | 0          | 0          | 0          | 2975       | 0          | 0
    30          | 0          | 2850       | 0          | 0          | 2850       | 0
    40          | 0          | 0          | 0          | 0          | 0          | 0
    To understand the solution, start by running the innermost subquery by itself and then work your way outward.
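    For what it's worth, on 11g and later the fixed-range case can also be written with the PIVOT clause; the column list still has to be static, so it does not solve the truly dynamic case either. A hypothetical sketch against the poster's Name/Date/Amount data (the table is assumed to be called t and the date column dt, since DATE is a reserved word):
    SELECT *
    FROM  ( SELECT name,
                   TO_CHAR(dt, 'MM-YYYY') AS mon,
                   amount
            FROM   t
            WHERE  dt >= DATE '2005-06-01'
            AND    dt <  DATE '2005-09-01' )
    PIVOT ( SUM(amount)
            FOR mon IN ('06-2005' AS "06-2005",
                        '07-2005' AS "07-2005",
                        '08-2005' AS "08-2005") )
    ORDER BY name;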

  • ABAP query using logical database KDF is not populating custom fields

    Hi Experts ,
    I created the following two queries
    1.       VENDORCATKDF – uses KDF logical database
    2.       VENDORCATLFA1 – uses table = LFA1
    I’m pulling the same information in both queries:
    ·         Vendor Number
    ·         Country
    ·         Vendor Name
    ·         Vendor Category  (custom fields added to LFA1)
    The results for the query that uses the logical database KDF are incorrect.  It doesn’t pull in the flag on the custom field LFA1-ZMRO, even though the logical database KDF is made up of the table LFA1 and has these fields.
    Is there something that can be done – so that all of these “custom” category fields under LFA1 (such as LFA1-ZZMRO) – get pulled into queries – when we use the logical database KDF ?

    Hi,
    I have got the error removed by ensuring that fields from one table are part of one line only (with the help of the ruler). But the underlying problem remains: the output is not ALV but list output.
    I do not think having additional fields in the query is the reason for this.
    Is it because I am adjusting the output length of columns to ensure no hierarchical error?
    Can we not have a query using an LDB that is shown as an SAP list?
    Regards,
    Garima.

  • MODEL clause using CONNECT BY PRIOR

    Hello.
    I have the following table, CONTEXT_MAPPING:
    Name Null? Type
    ID NOT NULL NUMBER(38)
    CONTEXT_ITEM NOT NULL VARCHAR2(30)
    ID_1 NUMBER(38)
    ID_2 NUMBER(38)
    ID_3 NUMBER(38)
    ID_4 NUMBER(38)
    ID_5 NUMBER(38)
    It's a self-referencing table, i.e. ID_1 will refer to an ID from another row in CONTEXT_MAPPING, and the same for ID_2/3/4/5.
    Easily illustrated through the following hierarchical data:
    ID CONTEXT_ITEM ID1 ID2 ID3 ID4 ID5
    1 P_DMA_NDA_ID
    2 P_DS_NDA_ID 1
    3 P_AST_NDA_ID 2 1
    4 P_AGI_ID 3
    5 P_ASG_NDA_ID 3
    6 P_NTS_FACTS 5 5
    7 P_IDE_VALUE 2 1
    8 P_EIT_VALUE 2 1
    9 P_TRI_TABLE 4 6 7 8
    10 P_TRI 9
    11 P_PRICE1 6 10 6
    12 P_PRICE2 6 10 6
    What I want to do, is for any context item, to identify ALL of its dependencies throughout the tree.
    For example:
    P_PRICE2 has a link to P_NTS_FACTS (ID1 & ID3 = 6) and P_TRI (ID2 = 10)
    P_NTS_FACTS has a link to P_ASG_NDA_ID (ID1 & ID2 = 5), which in turn links to P_AST_NDA_ID (ID1 = 3)...
    P_TRI has a link to P_TRI_TABLE (ID1 = 9), which in turn links to multiple context items (ID1 = 4, ID2 = 6 etc.) ....
    ....and so on, until we get to the "root" record, P_DMA_NDA_ID.
    So, to see the complete dependency tree for P_PRICE2, I would expect to see a hierarchical result-set like this:
    ID CONTEXT_ITEM ID1 ID2 ID3 ID4 ID5
    12 P_PRICE2 6 10 6
    10 P_TRI 9
    9 P_TRI_TABLE 4 6 7 8
    4 P_AGI_ID 3
    6 P_NTS_FACTS 5 5
    5 P_ASG_NDA_ID 3
    3 P_AST_NDA_ID 2 1
    7 P_IDE_VALUE 2 1
    8 P_EIT_VALUE 2 1
    2 P_DS_NDA_ID 1
    1 P_DMA_NDA_ID
    Ideally I want to do this in a single SQL statement - I've tried using CONNECT BY PRIOR in conjunction with LEVEL to do this, but it only performs a hierarchical join for a single child-parent relationship, and I need this to work for up to five children.
    Was starting to wonder if I could use the MODEL clause in conjunction with CONNECT BY PRIOR to achieve this - does anyone have any idea whether this type of recursion is possible?
    Thanks,
    Ray

    it only performs a hierarchical join for a single child-parent relationship
    Beg to differ.
    Oracle Database 10g Release 10.2.0.2.0 - Production
    SQL> CREATE TABLE context_mapping (
      2     id NUMBER(38),
      3     context_item VARCHAR2(30),
      4     id_1 NUMBER(38),
      5     id_2 NUMBER(38),
      6     id_3 NUMBER(38),
      7     id_4 NUMBER(38),
      8     id_5 NUMBER(38));
    Table created.
    SQL> INSERT INTO context_mapping VALUES (1, 'P_DMA_NDA_ID', NULL, NULL, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (2, 'P_DS_NDA_ID', 1, NULL, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (3, 'P_AST_NDA_ID', 2, 1, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (4, 'P_AGI_ID', 3, NULL, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (5, 'P_ASG_NDA_ID', 3, NULL, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (6, 'P_NTS_FACTS', 5, 5, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (7, 'P_IDE_VALUE', 2, 1, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (8, 'P_EIT_VALUE', 2, 1, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (9, 'P_TRI_TABLE', 4, 6, 7, 8, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (10, 'P_TRI', 9, NULL, NULL, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (11, 'P_PRICE1', 6, 10, 6, NULL, NULL);
    1 row created.
    SQL> INSERT INTO context_mapping VALUES (12, 'P_PRICE2', 6, 10, 6, NULL, NULL);
    1 row created.
    SQL> SELECT DISTINCT id, context_item, id_1, id_2, id_3, id_4, id_5
      2  FROM   context_mapping
      3  START WITH id = 12
      4  CONNECT BY id IN (PRIOR id_1, PRIOR id_2, PRIOR id_3, PRIOR id_4, PRIOR id_5)
      5  ORDER BY id DESC;
            ID CONTEXT_ITEM                         ID_1       ID_2       ID_3       ID_4       ID_5
            12 P_PRICE2                                6         10          6
            10 P_TRI                                   9
             9 P_TRI_TABLE                             4          6          7          8
             8 P_EIT_VALUE                             2          1
             7 P_IDE_VALUE                             2          1
             6 P_NTS_FACTS                             5          5
             5 P_ASG_NDA_ID                            3
             4 P_AGI_ID                                3
             3 P_AST_NDA_ID                            2          1
             2 P_DS_NDA_ID                             1
             1 P_DMA_NDA_ID
    11 rows selected.
    SQL>
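    If the mapping data could ever contain a loop (two context items referring to each other), adding NOCYCLE keeps the same walk from raising ORA-01436; a hedged variant of the query above, not needed for this particular data set:
    SELECT DISTINCT id, context_item, id_1, id_2, id_3, id_4, id_5
    FROM   context_mapping
    START WITH id = 12
    CONNECT BY NOCYCLE id IN (PRIOR id_1, PRIOR id_2, PRIOR id_3, PRIOR id_4, PRIOR id_5)
    ORDER BY id DESC;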

  • Getting an error when executing a BI query using ABAP

    Hi Expert,
    I am getting an error when I execute a BI query using ABAP. It gives me this error: "The Info Provider properties for GHRGPDM12 are not the same as the system default", and the error analysis says the following.
    Property Data Integrity has been set differently to the system default.
    Current setting: 0 for GHRGPDM12
    System default: ' 7 '
    As I am very new to BI and have very limited knowledge, I am not able to understand this problem. Can anyone help me resolve this issue? Previously it was working fine; I have been getting this error for the last 2 days.
    When I debug, I get the error from
    create instance of cl_rsr_request
    CREATE OBJECT r_request
    EXPORTING
    i_genuniid = p_genuniid.
    this FM. It's not able to create the object. Can anyone please help me out.
    Thanks in advance.
    Regards
    Satrajit

    Hi,
    I was able to solve this problem.
    Regards
    Satrajit

  • Error while accessing Query using Query Analyzer

    Dear experts,
    while accessing the query using Query Analyzer,
    we are getting the error below:
    What has happened?
    URL http://xxx.xxx.xxx.xx:XXXX/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex call was terminated because the corresponding service is not available.
    Note
    The termination occurred in system BI1 with error code 404 and for the reason Not found.
    The selected virtual host was 0 .
    What can I do?
    Please select a valid URL.
    If you do not yet have a user ID, contact your system administrator.
    ErrorCode:ICF-NF-http-c:001-u:ANAND-l:E-i:PSRCCPRDA003_BI1_00-v:0-s:404-r:Notfound
    HTTP 404 - Not found
    Your SAP Internet Communication Framework Team
    thanks for helping me...
    anand

    Hi friends,
    instead of getting a link like
    http://128.222.125.57:9000/sap/bw/bex?cmd=ldoc&infocube=ZMC_SRH1&query=AGINGV21A&sap-language=EN (working link)
    I am getting the link below, which returns an error:
    http://128.222.125.57:9000/sap/bw/://:/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex?QUERY=AGINGV21A
    The marked part above is the unwanted piece, so how can I change my link in Query Designer?
    Please suggest, friends.

  • Unable to create new query using query manager

    Hi friends,
    I have been trying to create a query using Query Manager for a couple of hours but am still not able to. I am following the instructions given in Oracle PeopleTools 8.52: PeopleSoft Query,
    Chapter - creating new queries.
    Below are the steps I am going through to create new query:
    Step 1 - I open the component Reporting Tools -> Query -> Query Manager; I don't get the tabbed pages, one to search for an existing query and another to create a new query.
    Please follow this link to see how the page is displayed in first step:
    http://uploadpic.org/v.php?img=EvMvVAXX1E
    Step 2 - When I click on the create new query link, I am redirected to a page that asks for a record name to add to the query. But even this page is not displayed as it is supposed to be:
    http://uploadpic.org/v.php?img=GzHh3f6krU
    Step 3 - Following the above step, when I click on the Person record to add it to my query, I am asked to select the fields that I want to display in the output.
    But somehow I do not get the proper tabbed pages with individual pages for adding the attributes that complete the query, such as adding multiple records, fields, query, expressions, prompts, etc.
    http://uploadpic.org/v.php?img=Wbbla3Q3jE
    I am neither able to select multiple records in my query nor able to customize my query to get the desired results.
    Below is the query that I want to create using query manager:
    SELECT P.EMPLID,P.BIRTHDATE, N.NAME, A.ADDRESS1, A.ADDRESS2, A.CITY
    FROM PS_PERSON P, PS_NAMES N, PS_ADDRESSES A
    WHERE P.EMPLID = N.EMPLID AND
    N.EMPLID = A.EMPLID AND
    P.BIRTHDATE BETWEEN to_date('1990/1/1','yyyy/mm/dd') and to_date('1991/1/1','yyyy/mm/dd');

    Hi,
    As I cannot access your screenshots because of company firewall rules,
    I'm guessing you are currently using PT 8.52.00, correct?
    This should be a bug; you need to apply an 8.52.0x patch. I'm not sure which patch fixes it, but you can apply the latest one (PT 8.52.06) to solve the missing tabbed pages in Query Manager.
    Hope this helps.
    Thanks,
    Saxon SI

  • In DBI, how to find out the source query used for the report

    Hi All,
    How do I find out the source query used to display the data in the DBI reports or dashboards? We can get it in the Apps front end by going to Help and Record History. But DBI runs in Internet Explorer, so I don't know how to get the source query (SELECT query) used.
    In IE we have View --> Source, but that does not help since it gives the HTML code and not the SELECT query used.
    If anyone has ever worked on it, please help me find it.
    Thanks,
    Neeraj Shrivastava

    Hi neeraj,
    You can see the query used to display reports. Follow these steps to get the query:
    1) Log in to Oracle Apps
    2) Select the "Daily Business Intelligence Administrator" responsibility.
    3) Now click on "Enable/Disable Debugging" (now debugging is enabled)
    4) Now open the report whose query you want to see
    5) In View Source, it displays the query along with the bind variables.
    Feel free to ping me if you have any doubts
    thanks
    kittu

  • How do I know if my query is using SORT_AREA_SIZE or the temporary tablespace?

    Good Morning  Everyone !
    My DB version is 10.2.0.1
    I have a large table, exactly 3 million records.
    SQL> select count(*) from tab1;
    COUNT(*)
       300000
    SQL> select * from tab1 order by no DESC;
    sorting  ... in process
    300000 rows selected.
    In Terminal 2, I tried to find the sorting details (no rows selected; why?):
    SQL> select USERNAME , USER , TABLESPACE , SQL_ID from v$tempseg_usage  ;
    no rows selected
    SQL> /
    no rows selected
    When i google i have seen this ;
    If  Oracle cannot do the sort in memory  (SORT_AREA_SIZE initialisation parameter), space will be allocated in a temporary tablespace for doing the sort operation.
    REF_LINK : TEMPORARY Tablespaces and TEMPFILES | Oracle FAQ
    MY QUESTION: How do I know if my query is using SORT_AREA_SIZE (an in-memory sort) or the temporary tablespace?
    Thanks in advance.

    @ JohnWatson
    I have seen some articles from ORA - FAQ. Good.
    SQL> select USERNAME , USER , TABLESPACE , SQL_ID from v$tempseg_usage;
    USERNAME                       USER   TABLESPACE                      SQL_ID
       SCOTT                               SYS          TEMP                            fh9vqgyd6m0d1
    PGA management means that sorting only 300000 rows  may well occur in memory
                Is this (3 million rows) -  standard  value for 10g version ?
    Thanks JohnWatson
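    One more way to check (a sketch; run it from a second session and substitute the SID of the sorting session for :sid) is to compare the session's sort statistics before and after the query. If 'sorts (disk)' increases, the sort spilled to the temporary tablespace; if only 'sorts (memory)' increases, it stayed in the PGA:
    SELECT sn.name, ss.value
    FROM   v$sesstat  ss
    JOIN   v$statname sn ON sn.statistic# = ss.statistic#
    WHERE  ss.sid  = :sid
    AND    sn.name IN ('sorts (memory)', 'sorts (disk)');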

  • How can I perform this kind of range join query using DPL?

    How can I perform this kind of range join query using DPL?
    SELECT * from t where 1<=t.a<=2 and 3<=t.b<=5
    In this pdf : http://www.oracle.com/technology/products/berkeley-db/pdf/performing%20queries%20in%20oracle%20berkeley%20db%20java%20edition.pdf,
    It shows how to perform a "two equality-conditions query on a single primary database", just like SELECT * FROM tab WHERE col1 = A AND col2 = B, using the entity join class, but it does not give a solution for the range join query.

    I'm sorry, I think I've misled you. I suggested that you perform two queries and then take the intersection of the results. You could do this, but the solution to your query is much simpler. I'll correct my previous message.
    Your query is very simple to implement. You should perform the first part of query to get a cursor on the index for 'a' for the "1<=t.a<=2" part. Then simply iterate over that cursor, and process the entities where the "3<=t.b<=5" expression is true. You don't need a second index (on 'b') or another cursor.
    This is called "filtering" because you're iterating through entities that you obtain from one index, and selecting some entities for processing and discarding others. The white paper you mentioned has an example of filtering in combination with the use of an index.
    An alternative is to reverse the procedure above: use the index for 'b' to get a cursor for the "3<=t.b<=5" part of the query, then iterate and filter the results based on the "1<=t.a<=2" expression.
    If you're concerned about efficiency, you can choose the index (i.e., choose which of these two alternatives to implement) based on which part of the query you believe will return the smallest number of results. The fewer entities read, the faster the query.
    Contrary to what I said earlier, taking the intersection of two queries that are ANDed doesn't make sense -- filtering is the better solution. However, taking the union of two queries does make sense, when the queries are ORed. Sorry for the confusion.
    --mark

  • SAP / ABAP Query - using logical database

    Hi ,
    We have a mandate to implement SAP Query using only Logical Databases (LDB ) .
    We understand that there are several issues using this approach .
    1) Parallel tables in MM need to be displayed on separate lines.
    2) Statistics based on fields from two different tables cannot be produced, e.g. EKPO (PO number) and EKKN (cost center).
    Please share your experiences .
    Thank you in advance .
    Kishore

    Adeel,
    I do appreciate your experience and respect your knowledge of SAP Query.
    Joining tables seems simple to IT experts but not so for end users.  SAP Query is an end-user tool, and users seem to need a simple, user-friendly, drag-and-drop solution to extract information using SAP Query based on a pre-defined infoset.  Various SAP conferences have been advocating the use of SAP-delivered Logical Databases to create infosets in order to harness the various advantages of LDBs.
    In MM there are several LDBs (e.g. ELM, EBM, etc.) to create queries on EKPO and EKKN.  The problem arises when you use the same LDB to extract information from more than the above two tables, e.g. EKPO, EKET, EKKN and EKBE.  SAP expects you to display fields on multiple lines and also does not allow producing statistics based on fields from two parallel tables, say EKKN and EKBE.  Moreover, multi-line reports cannot be produced in the ALV/SLV format.
    We are also looking for best-practice solutions in providing SAP Query to MM or FI users based on SAP-delivered Logical Databases.
    Pascal

  • Please help to re-write this query using exists or with

    Hi, please help me rewrite this query using EXISTS or WITH. I need to write the same code for 45 days, 90 days, and so on, but the subquery condition is the same for all.
    SELECT SUM (DECODE (t_one_mon_c_paid_us, 0, 0, 1)) t_two_y_m_mul_ca_
    FROM (SELECT SUM (one_mon_c_paid_us) t_one_mon_c_paid_us
    FROM (
    SELECT a.individual_id individual_id,
    CASE
    WHEN NVL
    (b.ship_dt,
    TO_DATE ('05-MAY-1955')
    ) >= SYSDATE - 45
    AND a.country_cd = 'US'
    AND b.individual_id in (
    SELECT UNIQUE c.individual_id
    FROM order c
    WHERE c.prod_cd = 'A'
    AND NVL (c.last_payment_dt,
    TO_DATE ('05-MAY-1955')
    ) >= SYSDATE - 745)
    THEN 1
    ELSE 0
    END AS one_mon_c_paid_us
    FROM items b, addr a, product d
    WHERE b.prod_id = d.prod_id
    AND d.affinity_1_cd = 'ADH'
    AND b.individual_id = a.individual_id)
    GROUP BY individual_id)
    Edited by: user4522368 on Aug 23, 2010 9:11 AM

    Please try and place \ before and after your code.
    Could you not replace the inline column select with the following?
    SELECT a.individual_id individual_id
         ,CASE
            when b.Ship_dt is null then
              3
        WHEN b.ship_dt >= SYSDATE - 90 THEN
          3
        WHEN b.ship_dt >= SYSDATE - 45 THEN
          2
        WHEN b.ship_dt >= SYSDATE - 30 THEN
          1
          END AS one_mon_c_paid_us
    FROM  items           b
         ,addr            a
         ,product         d
         ,order           o
    WHERE b.prod_id       = d.prod_id
    AND   d.affinity_1_cd = 'ADH'
    AND   b.individual_id = a.individual_id
    AND   b.Individual_ID = o.Individual_ID
    and   o.Prod_CD       = 'A'             
    and   NVL (o.last_payment_dt,TO_DATE ('05-MAY-1955') ) >= SYSDATE - 745
    and   a.Country_CD    = 'US'
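    And for the EXISTS form the poster asked about, here is a hedged sketch of just the subquery change, keeping the rest of the original structure (untested; ORDER is kept as the table name only because that is what was posted):
    SELECT SUM (DECODE (t_one_mon_c_paid_us, 0, 0, 1)) t_two_y_m_mul_ca_
    FROM  (SELECT SUM (one_mon_c_paid_us) t_one_mon_c_paid_us
           FROM  (SELECT a.individual_id individual_id,
                         CASE
                           WHEN NVL (b.ship_dt, TO_DATE ('05-MAY-1955')) >= SYSDATE - 45
                            AND a.country_cd = 'US'
                            AND EXISTS (SELECT 1
                                        FROM   order c
                                        WHERE  c.individual_id = b.individual_id
                                        AND    c.prod_cd       = 'A'
                                        AND    NVL (c.last_payment_dt, TO_DATE ('05-MAY-1955')) >= SYSDATE - 745)
                           THEN 1
                           ELSE 0
                         END AS one_mon_c_paid_us
                  FROM   items b, addr a, product d
                  WHERE  b.prod_id       = d.prod_id
                  AND    d.affinity_1_cd = 'ADH'
                  AND    b.individual_id = a.individual_id)
           GROUP BY individual_id);
    The 45 in SYSDATE - 45 is the only thing that changes for the 90-day and other variants, so this could also be parameterised by the cutoff, for example with a CASE like the one above.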

  • Execute a query using ABAP  (XSLT transformation issue)

    Hello,
    I followed the steps from this blog (parts I, II and III).
    /people/durairaj.athavanraja/blog/2005/12/05/execute-bw-query-using-abap-part-iii
    When trying to run the XSLT transformation, I got the message: "XML invalid source file".
    I am not sure what the steps are for running a transformation, or for running it in this case; maybe something is not OK. I just ran it and did not provide any information.
    Any suggestions ? Did anyone use the function module described in this blog ?
    Thank you very much in advance.

    try giving
    CALL TRANSFORMATION (`ID`)
    SOURCE meta = meta_data[]
    output = <ltable>[]
    RESULT XML xml_out
    OPTIONS xml_header = 'NO'.
    and check - sometimes the codepages configured in the BW system tend to cause an issue... I am not sure if the syntax is right though - but you are basically trying to bypass any encoding that is happening in the query transformation....
    http://www.sapetabap.com/ovidentia/index.php?tg=fileman&sAction=getFile&inl=1&id=4&gr=Y&path=ABAP%2FABAPENANGLAIS&file=ABAP-XML+Mapping.pdf&idf=41
    Edited by: Arun Varadarajan on May 18, 2009 11:28 PM
