Need help with cartesian join

Hi,
I have data like this
create table dd1
(col1 varchar2(20));
insert into dd1
values('P1');
insert into dd1
values('P2');
insert into dd1
values('P3');
insert into dd1
values('P4');
Col1
P1
P2
P3
P4
I need result like this:
Col1 col2
P1 P2
P1 P3
P1 P4
P2 P3
P2 P4
P3 P4
Is there any way to do this in SQL?
Thanks

dd_ram wrote:
Hi,
I have data like this
create table dd1
(col1 varchar2(20));
insert into dd1
values('P1');
insert into dd1
values('P2');
insert into dd1
values('P3');
insert into dd1
values('P4');
Col1
P1
P2
P3
P4
I need result like this:
Col1 col2
P1 P2
P1 P3
P1 P4
P2 P3
P2 P4
P3 P4
Is there any way to do this in SQL?
Thanks
Hi,
I have something simpler for you than the previous example (probably the simplest answer on this forum):
HR: XE > select * from dd1 a, dd1 b
  2  where a.col1<b.col1;
COL1                 COL1
P1                   P2
P1                   P3
P1                   P4
P2                   P3
P2                   P4
P3                   P4
6 rows selected.
HR: XE >
Regards,
Ion
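For readers who prefer ANSI join syntax, the same idea can be written with an explicit JOIN and a column alias so the second column really comes back as COL2; a minimal sketch against the dd1 table above:
select a.col1,
       b.col1 as col2
from   dd1 a
join   dd1 b
on     a.col1 < b.col1
order by a.col1, b.col1;
The a.col1 < b.col1 condition is what keeps each unordered pair exactly once, so reversed pairs such as P2/P1 and self-pairs such as P1/P1 never appear.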

Similar Messages

  • Need help with self join query

    Hello,
    I have table A with the following data
    oid parent_oid
    10 4
    4 2
    2 2
    12 6
    6 6
    parent_oid is the parent of oid. I'd like a query that shows the final parent of the oid. The result should show the following
    oid final parent
    10 2
    4 2
    2 2
    12 6
    6 6
    I'm using Oracle 10g. I'm familiar with self joins, but that alone will not do the job. Thanks!

    Hi,
    arizona9952 wrote:
... I'm familiar with self joins, but that alone will not do the job.
You're absolutely right!
A 2-way self join would work for rows that have no parent, or rows that are directly connected to their final ancestor (such as oid=4), but not for anything farther away.
A 3-way self-join would work for one more level away from the final row, but no more. That would be enough for the small set of sample data that you posted, but it would not work if you added a new row with parent_oid=10.
    An N-way self-join would work for up to N+1 levels, but no more.
    You need something that can go any number of levels, such as CONNECT BY:
    SELECT     CONNECT_BY_ROOT oid     AS oid
    ,     parent_oid          AS final_parent
    FROM     a
    WHERE     CONNECT_BY_ISLEAF     = 1
    CONNECT BY     oid     = PRIOR parent_oid
         AND     oid     != parent_oid
;
    Upon sober reflection, I think that a Top-Down query, like the one below, would be more efficient than a Bottom-Up query, like the one above:
    SELECT     oid
    ,     CONNECT_BY_ROOT     parent_oid     AS final_parent
    FROM     a
    START WITH     parent_oid     = oid
    CONNECT BY     parent_oid     = PRIOR oid
         AND     oid          != PRIOR oid
    ;
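If that database is ever upgraded to 11.2 or later, a recursive WITH clause is another way to walk any number of levels; a minimal sketch, assuming the same table a (oid, parent_oid) and that final parents are the rows where parent_oid = oid, untested here:
WITH  chain (oid, final_parent)  AS
(
    SELECT  oid, parent_oid
    FROM    a
    WHERE   parent_oid = oid                -- rows that are their own final parent
    UNION ALL
    SELECT  a.oid, chain.final_parent
    FROM    a
    JOIN    chain  ON  a.parent_oid = chain.oid
    WHERE   a.parent_oid != a.oid           -- keep the self-referencing roots from recursing forever
)
SELECT  oid, final_parent
FROM    chain;
On 10g, which the poster is using, CONNECT BY remains the way to go.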

  • Need help with query joining several tables into a single return line

    what i have:
    tableA:
    puid, task
    id0, task0
    id1, task1
    id2, task2
    tableB:
    puid, seq, state
    id0, 0, foo
    id0, 1, bar
    id0, 2, me
    id1, 0, foo
    id2, 0, foo
    id2, 1, bar
    tableC:
    puid, seq, date
    id0, 0, 12/21
    id0, 1, 12/22
    id0, 2, 12/22
    id1, 0, 12/23
    id2, 0, 12/22
    id2, 1, 12/23
    what i'd like to return:
    id0, task0, 12/21, 12/22, 12/22
    id1, task1, 12/23, N/A, N/A
    id2, task2, 12/22, 12/23, N/A
    N/A doesn't mean return the string "N/A"... it just means there was no value, so we don't need anything in this column (null?)
    i can get output like below through several joins, however i was hoping to condense each "id" into a single line...
    id0, task0, 12/21
    id0, task0, 12/22
    id0, task0, 12/23
    id1, task1, 12/23
    is this possible fairly easily?

    Hi,
    Welcome to the forum!
    user9979830 wrote:
what i have: ...
Thanks for posting that so clearly!
    Whenever you have a question, it's even better if you post CREATE TABLE and INSERT statements for your sample data, like this:
CREATE TABLE     tablea
(       puid     VARCHAR2 (5)
,       task     VARCHAR2 (5)
);
    INSERT INTO tablea (puid, task) VALUES ('id0',  'task0');
    INSERT INTO tablea (puid, task) VALUES ('id1',  'task1');
    INSERT INTO tablea (puid, task) VALUES ('id2',  'task2');
CREATE TABLE     tablec
(       puid     VARCHAR2 (5)
,       seq      NUMBER (3)
,       dt       DATE          -- DATE is not a good column name
);
    INSERT INTO tablec (puid, seq, dt) VALUES ('id0',  0,  DATE '2010-12-21');
    INSERT INTO tablec (puid, seq, dt) VALUES ('id0',  1,  DATE '2010-12-22');
    INSERT INTO tablec (puid, seq, dt) VALUES ('id0',  2,  DATE '2010-12-22');
    INSERT INTO tablec (puid, seq, dt) VALUES ('id1',  0,  DATE '2010-12-23');
    INSERT INTO tablec (puid, seq, dt) VALUES ('id2',  0,  DATE '2010-12-22');
INSERT INTO tablec (puid, seq, dt) VALUES ('id2',  1,  DATE '2010-12-23');
This way, people can re-create the problem and test their ideas.
    It doesn't look like tableb plays any role in this problem, so I didn't post it.
    Explain how you get the results from that data. For example, why do you want this row in the results:
    PUID  TASK  DT1        DT2        DT3
id0   task0 12/21/2010 12/22/2010 12/22/2010
rather than, say
    PUID  TASK  DT1        DT2        DT3
id0   task0 12/22/2010 12/21/2010 12/22/2010 ?
Does 12/21 have to go in the first column because it is the earliest date, or is it because 12/21 is related to the lowest seq value? Or do you even care about the order, just as long as all 3 dates are shown?
Always say what version of Oracle you're using. The query below will work in Oracle 9 (and up), but starting in Oracle 11, the SELECT ... PIVOT feature could help you (see the sketch at the end of this reply).
i can get output like below through several joins, however i was hoping to condense each "id" into a single line...
Condensing the output, so that there's only one line for each puid, sounds like a job for "GROUP BY puid":
WITH     got_r_num     AS
(
     SELECT     puid
     ,          dt
     ,          ROW_NUMBER () OVER ( PARTITION BY  puid
                                     ORDER BY      seq          -- and/or dt
                                   )         AS r_num
     FROM    tablec
--   WHERE   ...     -- If you need any filtering, put it here
)
SELECT       a.puid
,       a.task
,       MIN (CASE WHEN r.r_num = 1 THEN r.dt END)     AS dt1
,       MIN (CASE WHEN r.r_num = 2 THEN r.dt END)     AS dt2
,       MIN (CASE WHEN r.r_num = 3 THEN r.dt END)     AS dt3
,       MIN (CASE WHEN r.r_num = 4 THEN r.dt END)     AS dt4
FROM       tablea    a
JOIN       got_r_num r  ON   a.puid  = r.puid
GROUP BY  a.puid
,         a.task
ORDER BY  a.puid
;
I'm guessing that you want the dates arranged by seq; that is, for each puid, the date related to the lowest seq comes first, regardless of whether that date is the earliest date for that puid or not. If that's not what you need, then change the analytic ORDER BY clause.
    This does not assume that the seq values are always consecutive integers (0, 1, 2, ...) for each puid. You can skip, or even duplicate values. However, if the values are always consecutive integers, starting from 0, then you could simplify this. You won't need a sub-query at all; just use seq instead of r_num in the main query.
    Here's the output I got from the query above:
    PUID  TASK  DT1        DT2        DT3        DT4
    id0   task0 12/21/2010 12/22/2010 12/22/2010
    id1   task1 12/23/2010
id2   task2 12/22/2010 12/23/2010
As posted, the query will display the first 4 dts for each puid.
    If there are fewer than 4 dts for a puid, the query will still work. It will leave some columns NULL at the end.
    If there are more than 4 dts for a puid, the query will still work. It will display the first 4, and ignore the others.
    There's nothing special about the number 4; you could make it 3, or 5, or 35, but whatever number you choose, you have to hard-code that many columns into the query, and always get that many columns of output.
For various ways to deal with a variable number of pivoted columns, see the following thread:
    PL/SQL
    This question actually doesn't have anything to do with SQL*Plus; it's strictly a SQL question, and SQL questions are best posted on the "SQL and PL/SQL" forum:
    PL/SQL
    If you're not sure whether a question is more of a SQL question or a SQL*Plus question, then post it on the SQL forum. Many more people pay attention to that forum than to this one.
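For completeness, here is roughly what the Oracle 11 PIVOT version mentioned above might look like; a minimal sketch built on the same r_num idea, so treat it as a starting point rather than tested code:
WITH  got_r_num  AS
(
     SELECT  a.puid
     ,       a.task
     ,       c.dt
     ,       ROW_NUMBER () OVER ( PARTITION BY c.puid
                                  ORDER BY     c.seq
                                )  AS r_num
     FROM    tablea a
     JOIN    tablec c  ON  c.puid = a.puid
)
SELECT    *
FROM      got_r_num
PIVOT     ( MIN (dt)
            FOR r_num IN (1 AS dt1, 2 AS dt2, 3 AS dt3, 4 AS dt4)
          )
ORDER BY  puid;
The values you still have to hard-code are the ones in the IN list, just as with the CASE version.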

  • Need help with outer joins

    I have the following table structure,
    Table - 1_
    ID | Information
    1 | abcadskasasa
    2 | asdasdasdasd
    3 | saeqdfdvsfcsc
    Table - 2_
    ID | PID
    1 | 12
    1 | 13
    2 | 14
    1 | 15
    1 | 16
    2 | 12
    Table - 3_
    ID | PARID
    1 | 12
    2 | 14
    1 | 15
Now I want to select, for each ID in table 1, the count of PIDs from table 2 and the count of PARIDs from table 3.
    Desired output:_
    ID | COUNT_PID | COUNT_PARID
    1 | 4 | 2
    2 | 2 | 1
    3 | 0 | 0
Could anyone please help me out with this? I am trying to make use of outer joins, but as I work mostly on the front end, I am not able to come up with a proper solution for the above.
    Thanks in advance,
    Tejas

    Hi, Tejas,
    You might have been doing the outer join correctly.
    There's another problem here: joining table_1 to two other tables with which it has a one-to-many relationship.
    If you were joining table_1 to just one other table, you could say:
    SELECT       t1.id
    ,       COUNT (t2.pid)     AS count_pid
    FROM              table_1     t1
    LEFT OUTER JOIN  table_2     t2     ON     t1.id     = t2.id
    GROUP BY  t1.id
ORDER BY  t1.id;
You could have done the exact same thing with table_3 instead of table_2.
But you can't do the same thing with both table_2 and table_3 at the same time: that would be like cross-joining table_2 and table_3. Instead of showing id=1 having count_pid=4 and count_parid=2, you would get count_pid=8 and count_parid=8 (since 8 = 4 * 2).
    You can do a separate GROUP BY on (at least) one of the tables.
    This gets the right results. In the main query, there is only one one-to-many relationship.
WITH  t3_summary     AS
(
     SELECT    id
     ,         COUNT (parid)     AS count_parid
     FROM      table_3
     GROUP BY  id
)
SELECT       t1.id
,       COUNT (t2.pid)     AS count_pid
,       MAX (t3.count_parid)     AS count_parid
FROM              table_1     t1
LEFT OUTER JOIN  table_2     t2     ON     t1.id     = t2.id
LEFT OUTER JOIN  t3_summary  t3     ON     t1.id     = t3.id
GROUP BY  t1.id
ORDER BY  t1.id;
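Another option, a minimal sketch using the same tables, is to pre-aggregate both child tables so the main query has no one-to-many joins left at all (and the zero counts come out as 0 rather than NULL):
WITH  t2_summary  AS
(
     SELECT    id
     ,         COUNT (pid)    AS count_pid
     FROM      table_2
     GROUP BY  id
)
,     t3_summary  AS
(
     SELECT    id
     ,         COUNT (parid)  AS count_parid
     FROM      table_3
     GROUP BY  id
)
SELECT    t1.id
,         NVL (t2.count_pid,   0)  AS count_pid
,         NVL (t3.count_parid, 0)  AS count_parid
FROM              table_1     t1
LEFT OUTER JOIN   t2_summary  t2  ON  t1.id = t2.id
LEFT OUTER JOIN   t3_summary  t3  ON  t1.id = t3.id
ORDER BY  t1.id;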

  • Need help with Update Join Query

Hello, I am trying to update the PID of the #child table with the PID of the #parent table where lastname and firstname match in both tables, but my update query is giving an error. Please help me correct the update query. I am also trying to remove any
blank space from the start and end of the values.
    drop table #parent,#child
    create table #parent (PID varchar(10), lastname varchar(50), firstname varchar(50))
    insert into #parent values ('100','Josheph','Sumali')
    insert into #parent values ('400','Karen','Hunsa')
    insert into #parent values ('600','Mursan  ','  Terry')
    create table #child (PID varchar(10), lastname varchar(50), firstname varchar(50))
    insert into #child values ('2','Josheph ','Sumali   ')
    insert into #child values ('5','Karen','Kunsi')
    insert into #child values ('6','Mursan ','Terry ')
    Update #child
    set PID = p.PID
    from #child C Join
    #parent p ON c.LTRIM(RTRIM(lastname) = p.LTRIM(RTRIM(lastname)
    AND c.LTRIM(RTRIM(firstname) = p.LTRIM(RTRIM(firstname)
    /* Requested Output */
    PID        lastname      firstname
    100        Josheph       Sumali
    600        Mursan        Terry

    create table #parent (PID varchar(10), lastname varchar(50), firstname varchar(50))
    insert into #parent values ('100','Josheph','Sumali')
    insert into #parent values ('400','Karen','Hunsa')
    insert into #parent values ('600','Mursan ',' Terry')
    create table #child (PID varchar(10), lastname varchar(50), firstname varchar(50))
    insert into #child values ('2','Josheph ','Sumali ')
    insert into #child values ('5','Karen','Kunsi')
    insert into #child values ('6','Mursan ','Terry ')
    Merge #child as t
    Using #parent as p ON (LTRIM(RTRIM(t.lastname)) = LTRIM(RTRIM(p.lastname))
    AND LTRIM(RTRIM(t.firstname)) = LTRIM(RTRIM(p.firstname)) )
    When Matched Then
    Update
    set PID = p.PID;
    update #child
    Set lastname=LTRIM(RTRIM(lastname)), firstname= LTRIM(RTRIM(firstname));
    update #parent
    Set lastname=LTRIM(RTRIM(lastname)), firstname = LTRIM(RTRIM(firstname));
    select * from #child
    select * from #parent
    drop table #parent,#child
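If you prefer to keep the original UPDATE ... FROM form instead of MERGE, the fix is to wrap LTRIM/RTRIM around the column references instead of prefixing the functions with the table alias; a minimal sketch:
UPDATE c
SET    c.PID = p.PID
FROM   #child  AS c
JOIN   #parent AS p
       ON  LTRIM(RTRIM(c.lastname))  = LTRIM(RTRIM(p.lastname))
       AND LTRIM(RTRIM(c.firstname)) = LTRIM(RTRIM(p.firstname));
Either way, the leading/trailing spaces stored in the rows can then be cleaned up with the two trimming UPDATE statements shown above.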

  • Need help with a JOIN query

    Hi,
    I am looking for the following result:
PROVIDER_ID  SPECIALTY  BUCKET  CODE  RATING  FEE
          1  FP         EM       100       9   20
          1  FP         SP       300      15    0
          1  INFUS      EM       100       3   20
          1  INFUS      EM       200       6   15
The base tables are provided below in the with clause. What I am trying to do is: where "code" matches, show the fee from t1, else show 0.
I tried a few variations but just can't seem to get it right.
with t1 as
(
select 1 as provider_id, 100 as code, 20 as fee, 10 as default_multiplier from dual union all
select 1 as provider_id, 200 as code, 15 as fee, 30 as default_multiplier from dual
)
, t2 as
(
select 100 as code, 'INFUS' AS specialty, 'EM' AS bucket, 3 as rating from dual union all
select 200 as code, 'INFUS' AS specialty, 'EM' AS bucket, 6 as rating from dual union all
select 100 as code, 'FP' AS specialty, 'EM' AS bucket, 9 as rating from dual union all
select 300 as code, 'FP' AS specialty, 'SP' AS bucket, 15 as rating from dual
)
SELECT   t1.provider_id
       , t2.specialty
       , t2.bucket
       , t2.code
       , t2.rating
       , t1.fee
FROM     t1, t2
ORDER BY 1, 2, 3
Any help will be appreciated.
    Thanks.

Could I possibly add one more twist to this?
    The current result:
    PROVIDER_ID SPECI BU       CODE     RATING        FEE
              1 FP    EM        100          9         20
              1 FP    SP        300         15          0
              1 INFUS EM        100          3         20
              1 INFUS EM        200          6         75
              2 FP    EM        100          9         40
              2 FP    SP        300         15          0
              2 INFUS EM        100          3         40
              2 INFUS EM        200          6          0
              3 FP    EM        100          9          0
              3 FP    SP        300         15          0
              3 INFUS EM        100          3          0
          3 INFUS EM        200          6         60
The current code:
with t1 as
(
select 1 as provider_id, 100 as code, 20 as fee, 10 as default_multiplier from dual union all
select 1 as provider_id, 200 as code, 75 as fee, 10 as default_multiplier from dual union all
select 1 as provider_id, 500 as code, 75 as fee, 10 as default_multiplier from dual union all
select 2 as provider_id, 100 as code, 40 as fee, 20 as default_multiplier from dual union all
select 3 as provider_id, 200 as code, 60 as fee, 30 as default_multiplier from dual
)
, t2 as
(
select 100 as code, 'INFUS' AS specialty, 'EM' AS bucket, 3 as rating from dual union all
select 200 as code, 'INFUS' AS specialty, 'EM' AS bucket, 6 as rating from dual union all
select 100 as code, 'FP' AS specialty, 'EM' AS bucket, 9 as rating from dual union all
select 300 as code, 'FP' AS specialty, 'SP' AS bucket, 15 as rating from dual
)
, t3 as
(
SELECT   t1.provider_id
       , t2.specialty
       , t2.bucket
       , t2.code
       , t2.rating
       , CASE t1.code
            WHEN t2.code
               THEN t1.fee
            ELSE 0
         END AS fee
       , RANK () OVER (PARTITION BY t1.provider_id, t2.specialty, t2.bucket, t2.code ORDER BY CASE t1.code
             WHEN t2.code
                THEN t1.fee
             ELSE 0
          END DESC) AS the_rank
    FROM t1
       , t2
)
SELECT DISTINCT provider_id
              , specialty
              , bucket
              , code
              , rating
              , fee
           FROM t3
          WHERE the_rank = 1
       ORDER BY 1
              , 2
              , 3
              , 4
I added the code 500 to t1. However, I don't want this code to show up in the result because it does not exist in t2.
    I also added code 200 to t1, to see if that shows properly.
I had to do the rank, because I need the proper count of rows.
    In reality the row counts are:
    t1: 20 Million
    t2: 10 Thousand.
So I would like to avoid doing DISTINCT and analytic functions on this large set; is there any better way of achieving the same result with a better statement?
    Thanks.
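One approach that avoids both the DISTINCT and the analytic RANK is a partitioned outer join (available since Oracle 10g); a minimal sketch against the same t1/t2 data, not tested on the 20-million-row table:
SELECT   t1.provider_id
       , t2.specialty
       , t2.bucket
       , t2.code
       , t2.rating
       , NVL (t1.fee, 0)  AS fee
FROM     t1  PARTITION BY (provider_id)
         RIGHT OUTER JOIN  t2  ON  t1.code = t2.code
ORDER BY 1, 2, 3, 4;
Codes such as 500 that exist only in t1 drop out automatically because only t2 rows are preserved. If a provider can carry the same code more than once in t1 you would still need some de-duplication (for example MAX(fee) with GROUP BY), so please check whether that case occurs in the real data.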

  • Need help with inner join and distinct rows

    Hey Guys,
    i have
    1) BaseEnv Table 
    2) Link Table
    3) BaseData Table
The Link table has three columns: Id, BaseEnvId, BaseDataId.
BaseEnvId is unique in that table, whereas BaseDataId can be repeated, i.e. multiple rows of the BaseEnv table can point to the same BaseData table row.
Now I want to do BaseEnv Table inner join Link Table inner join BaseData Table and select 5 columns (Name, SyncName, Version, PPO, DOM) from the BaseData table. The problem is that after I do the inner join I get duplicate records.
I want to eliminate the duplicate records; can anyone help me here?

    Please post DDL, so that people do not have to guess what the keys, constraints, Declarative Referential Integrity, data types, etc. in your schema are. Learn how to follow ISO-11179 data element naming conventions and formatting rules. Temporal data should
    use ISO-8601 formats. Code should be in Standard SQL as much as possible and not local dialect. 
    This is minimal polite behavior on SQL forums. Now we have to guess and type, guess and type, etc. because of your bad manners. 
    CREATE TABLE Base_Env
    (base_env_id CHAR(10) NOT NULL PRIMARY KEY,
    Think about the name Base_Data; do you have lots of tables without data? Silly, unh? 
    CREATE TABLE Base_Data
    (base_data_id CHAR(10) NOT NULL PRIMARY KEY,
    Your Links table is wrong in concept and implementation. The term “link” refers to a pointer chain structure used in network databases and makes no sense in RDBMS. There is no generic, magic, universal “id” in RDBMS! People that do this are called “id-iots”
    in SQL slang. 
    We can model a particular relationship in a table by referencing the keys in other tables. But we need to know if the relationship is 1:1, 1:m, or n:m. This is the membership of the relationship. Your narrative implies this: 
    CREATE TABLE Links
    (base_env_id CHAR(10) NOT NULL UNIQUE
       REFERENCES Base_Env (base_env_id),
     base_data_id CHAR(10) NOT NULL
       REFERENCES Base_Data (base_data_id));
    >> The base_env_id is unique in the table where as base_data_id can be repeated I.e multiple rows of Base_Env Table can point [sic] to same Base_Data table row. <<
Again, RDBMS has no pointers! We have referenced and referencing tables. This is a fundamental concept. 
    That narrative you posted has no ON clauses! And the narrative is also wrong. There is no generic “name”, etc. What tables were used in your non-query? Replace the ?? in this skeleton: 
    SELECT ??.something_name, ??.sync_name, ??.something_version, 
           ??.ppo, ??.dom
    FROM Base_Env AS E, Links AS L, Base_Data AS D
    WHERE ?????????;
    >> I want to eliminate the duplicate records [sic], can any one help me here?<<
Where is the sample data? Where are the results? Please read a book on RDBMS so you can post correct SQL and try again. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
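To answer the original question concretely: once the ON clauses are in place, the duplicates caused by several Links rows pointing at the same Base_Data row are normally removed with SELECT DISTINCT (or by aggregating the link table first). A minimal sketch, using the skeleton tables above and hypothetical column names for the five requested values:
SELECT DISTINCT
       D.name,          -- hypothetical column names; use the real ones from Base_Data
       D.sync_name,
       D.version,
       D.ppo,
       D.dom
FROM   Base_Env  AS E
JOIN   Links     AS L  ON  L.base_env_id  = E.base_env_id
JOIN   Base_Data AS D  ON  D.base_data_id = L.base_data_id;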

  • Need help with conditional join....

    Table 1 - Student
    ID# Name
    1 # A
    2 # B
    3 # C
    4 # D
    Table2 - Marks
    ID Marks Display
    1 # 10 # Y
    1 # 20 # Y
    1 # 14 # N
    2 # 12 # N
    2 # 13 # N
    3 # 12 # Y
Result... I need a query to do this:
I want to join the two tables above and display marks as X when there is no ID in the marks table, or when the ID exists but all its rows are marked with display as 'N'. If there are one or more rows
marked with Y then display those marks.
I am using Oracle 11i.
    ID NAme      Marks
    1 # A     #     10
    1 # A     #     20
    2 # B # X
    3 # C # 12
    4 # D # X

    Or, using ANSI join syntax:
with Table1 as (
                select 1 id,'A' name from dual union all
                select 2,'B' from dual union all
                select 3,'C' from dual union all
                select 4,'D' from dual
               ),
     Table2 as (
                select 1 id,10 marks,'Y' display from dual union all
                select 1,20,'Y' from dual union all
                select 1,14,'N' from dual union all
                select 2,12,'N' from dual union all
                select 2,13,'N' from dual union all
                select 3,12,'Y' from dual
               )
-- end of on-the-fly data sample
select  t1.id,
        name,
        nvl(to_char(marks),'X') marks
  from      Table1 t1
        left join
            Table2 t2
          on (
                  t2.id = t1.id
              and
                  display = 'Y'
             )
  order by id;
            ID NAME                                                    MARKS
             1 A                                                       10
             1 A                                                       20
             2 B                                                       X
             3 C                                                       12
             4 D                                                       X
SQL>
SY.

  • Need help with outer join filter.

Need a little help filtering a result set; I can't seem to find a proper way to do this.
    /*table*/
    create table invoice( farinvc_invh_code varchar2(100),
                                  farinvc_item varchar2(100),
                              farinvc_po varchar2(100) );
   create table po(
            supplier_number varchar2(60),
            supplier_invoice_no varchar2(60),
            po_number varchar2(60),
            run_date varchar2(60),
            PO_LINE_NUMBER varchar2(60) );
    /*data*/
    INSERT INTO "INVOICE" (FARINVC_INVH_CODE, FARINVC_ITEM, FARINVC_PO_ITEM) VALUES ('I0554164', '1', 'P0142245');
    INSERT INTO "INVOICE" (FARINVC_INVH_CODE, FARINVC_ITEM, FARINVC_PO_ITEM) VALUES ('I0554164', '3', 'P0142245');
    INSERT INTO "INVOICE" (FARINVC_INVH_CODE, FARINVC_ITEM, FARINVC_PO) VALUES ('I0554165', '1', 'P0142246');
    INSERT INTO "INVOICE" (FARINVC_INVH_CODE, FARINVC_ITEM, FARINVC_PO) VALUES ('I0554165', '2', 'P0142246');
    INSERT INTO "PO" (SUPPLIER_NUMBER, SUPPLIER_INVOICE_NO, PO_NUMBER, RUN_DATE, PO_LINE_NUMBER) VALUES ('914100121', '529132260', 'P0142245', '21-NOV-12', '1');
    INSERT INTO "PO" (SUPPLIER_NUMBER, SUPPLIER_INVOICE_NO, PO_NUMBER, RUN_DATE, PO_LINE_NUMBER) VALUES ('914100121', '529137831', 'P0142245', '21-NOV-12', '3');
    INSERT INTO "PO" (SUPPLIER_NUMBER, SUPPLIER_INVOICE_NO, PO_NUMBER, RUN_DATE, PO_LINE_NUMBER) VALUES ('914100121', '529137831', 'P0142245', '21-NOV-12', '2');
    INSERT INTO "PO" (SUPPLIER_NUMBER, SUPPLIER_INVOICE_NO, PO_NUMBER, RUN_DATE, PO_LINE_NUMBER) VALUES ('914100122', '145678', 'P0142246', '22-NOV-12', '1');
    INSERT INTO "PO" (SUPPLIER_NUMBER, SUPPLIER_INVOICE_NO, PO_NUMBER, RUN_DATE, PO_LINE_NUMBER) VALUES ('914100122', '145679', 'P0142246', '22-NOV-12', '2');query im executing.
    SELECT  farinvc_invh_code,
                    supplier_number,
                    supplier_invoice_no,
                    farinvc_item,
                    farinvc_po ,
                    po_number,
                    run_date,
                    PO_LINE_NUMBER
            FROM INVOICE, PO
            WHERE PO_NUMBER = FARINVC_PO(+)
            AND FARINVC_ITEM(+) = PO_LINE_NUMBER
            result
    "FARINVC_INVH_CODE"           "SUPPLIER_NUMBER"             "SUPPLIER_INVOICE_NO"         "FARINVC_ITEM"                "FARINVC_PO"                  "PO_NUMBER"                   "RUN_DATE"                    "PO_LINE_NUMBER"             
    "I0554165"                    "914100122"                   "145678"                      "1"                           "P0142246"                    "P0142246"                    "22-NOV-12"                   "1"                          
    "I0554165"                    "914100122"                   "145679"                      "2"                           "P0142246"                    "P0142246"                    "22-NOV-12"                   "2"                          
    "I0554164"                    "914100121"                   "529132260"                   "1"                           "P0142245"                    "P0142245"                    "21-NOV-12"                   "1"                          
    "I0554164"                    "914100121"                   "529137831"                   "3"                           "P0142245"                    "P0142245"                    "21-NOV-12"                   "3"                          
    ""                            "914100121"                   "529137831"                   ""                            ""                            "P0142245"                    "21-NOV-12"                   "2"                           this is a much larger table and I took an excerpt in order to keep things clear and understanding. I would like to filter this result set to only show the lines that have have the po numbers are the same and line are the same but there is an extra item. in other words like such.
    "FARINVC_INVH_CODE"           "SUPPLIER_NUMBER"             "SUPPLIER_INVOICE_NO"         "FARINVC_ITEM"                "FARINVC_PO"                  "PO_NUMBER"                   "RUN_DATE"                    "PO_LINE_NUMBER"             
    "I0554164"                    "914100121"                   "529132260"                   "1"                           "P0142245"                    "P0142245"                    "21-NOV-12"                   "1"                          
    "I0554164"                    "914100121"                   "529137831"                   "3"                           "P0142245"                    "P0142245"                    "21-NOV-12"                   "3"                          
    ""                            "914100121"                   "529137831"                   ""                            ""                            "P0142245"                    "21-NOV-12"                   "2"                          

Fair enough, Frank. Let's add some extra data to the tables.
For example:
    for example.
    INSERT INTO "INVOICE" (FARINVC_INVH_CODE, FARINVC_ITEM, FARINVC_PO) VALUES ('I0554167', '1', 'P0142447')
    INSERT INTO "INVOICE" (FARINVC_INVH_CODE, FARINVC_ITEM, FARINVC_PO) VALUES ('I0554167', '2', 'P0142447')
    INSERT INTO "PO" (SUPPLIER_NUMBER, SUPPLIER_INVOICE_NO, PO_NUMBER, RUN_DATE, PO_LINE_NUMBER) VALUES ('914100123', 'INV1', 'P0142247', '25-NOV-12', '1')
    INSERT INTO "PO" (SUPPLIER_NUMBER, SUPPLIER_INVOICE_NO, PO_NUMBER, RUN_DATE, PO_LINE_NUMBER) VALUES ('914100123', 'INV2', 'P0142247', '25-NOV-12', '2')
    INSERT INTO "PO" (SUPPLIER_NUMBER, SUPPLIER_INVOICE_NO, PO_NUMBER, RUN_DATE, PO_LINE_NUMBER) VALUES ('914100123', 'INV3', 'P0142247', '25-NOV-12', '3')if we run the query above we get
    "FARINVC_INVH_CODE"           "SUPPLIER_NUMBER"             "SUPPLIER_INVOICE_NO"         "FARINVC_ITEM"                "FARINVC_PO"                  "PO_NUMBER"                   "RUN_DATE"                    "PO_LINE_NUMBER"             
    "I0554164"                    "914100121"                   "529132260"                   "1"                           "P0142245"                    "P0142245"                    "21-NOV-12"                   "1"                          
    "I0554164"                    "914100121"                   "529137831"                   "3"                           "P0142245"                    "P0142245"                    "21-NOV-12"                   "3"                          
    ""                            "914100121"                   "529137831"                   ""                            ""                            "P0142245"                    "21-NOV-12"                   "2"                          
    ""                            "914100123"                   "INV2"                        ""                            ""                            "P0142247"                    "25-NOV-12"                   "2"                          
    ""                            "914100123"                   "INV1"                        ""                            ""                            "P0142247"                    "25-NOV-12"                   "1"                          
    ""                            "914100123"                   "INV3"                        ""                            ""                            "P0142247"                    "25-NOV-12"                   "3"                           where really what im trying to target is pos and lines that have a match in both tables and there is additional items from the po table that do not have have a match from the invoice table. Furthermore i would only like to see the items that are repeating invoice numbers, in other words my result here should still only be, because "SUPPLIER_INVOICE_NO" is repeating and it does find a match for the po in the invoice table but yet there is an item with the same invoice and po in the po table that does not have a match in the invoice table.
    "FARINVC_INVH_CODE"           "SUPPLIER_NUMBER"             "SUPPLIER_INVOICE_NO"         "FARINVC_ITEM"                "FARINVC_PO"                  "PO_NUMBER"                   "RUN_DATE"                    "PO_LINE_NUMBER"             
    "I0554164"                    "914100121"                   "529132260"                   "1"                           "P0142245"                    "P0142245"                    "21-NOV-12"                   "1"                          
    "I0554164"                    "914100121"                   "529137831"                   "3"                           "P0142245"                    "P0142245"                    "21-NOV-12"                   "3"                          
    ""                            "914100121"                   "529137831"                   ""                            ""                            "P0142245"                    "21-NOV-12"                   "2"                           hope that makes sense, and thanks for working with me on this.
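One way to express that filter is to count matched and total PO lines with analytic functions and keep only the partly matched POs; a minimal sketch against the tables above, assuming the rule is judged per PO_NUMBER (switch the PARTITION BY to SUPPLIER_INVOICE_NO if that is the real driver):
SELECT farinvc_invh_code, supplier_number, supplier_invoice_no,
       farinvc_item, farinvc_po, po_number, run_date, po_line_number
FROM  (
        SELECT i.farinvc_invh_code, p.supplier_number, p.supplier_invoice_no,
               i.farinvc_item, i.farinvc_po, p.po_number, p.run_date, p.po_line_number,
               COUNT(i.farinvc_item) OVER (PARTITION BY p.po_number) AS matched_lines,
               COUNT(*)              OVER (PARTITION BY p.po_number) AS total_lines
        FROM   po p
        LEFT OUTER JOIN invoice i
               ON  i.farinvc_po   = p.po_number
               AND i.farinvc_item = p.po_line_number
      )
WHERE matched_lines > 0
  AND matched_lines < total_lines;
With the sample data above this keeps the three P0142245 rows and drops both the fully matched P0142246 and the completely unmatched P0142247.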

  • Need help with query join...

Hi all, I have the query below; it has too many conditions in the WHERE clause and I want to separate it into joins. How can I do this? Please suggest some syntax.
    Thanks,
    Alex
    CODE:
select surname,award.name as award_name,module.name as module_name,attemptno as attempt_no from student,award,studentaward,module,studentenrollment where studentenrollment.attemptno = 2 and studentenrollment.studentid = student.studentid and studentenrollment.moduleid = module.moduleid and studentaward.studentid = student.studentid and studentaward.awardid = award.awardid

    select surname,award.name as award_name,module.name as module_name,attemptno as attempt_no
    from student
    join studentaward on studentaward.studentid = student.studentid
    join award on studentaward.awardid = award.awardid
    join studentenrollment on studentenrollment.studentid = student.studentid
    join module on studentenrollment.moduleid = module.moduleid
    where studentenrollment.attemptno = 2

  • Need help with Berkeley XML DB Performance

We need help with maximizing performance of our use of Berkeley XML DB. I am filling in most of the 29-part questionnaire as listed by Oracle's BDB team.
    Berkeley DB XML Performance Questionnaire
    1. Describe the Performance area that you are measuring? What is the
    current performance? What are your performance goals you hope to
    achieve?
    We are measuring the performance while loading a document during
    web application startup. It is currently taking 10-12 seconds when
    only one user is on the system. We are trying to do some testing to
    get the load time when several users are on the system.
    We would like the load time to be 5 seconds or less.
    2. What Berkeley DB XML Version? Any optional configuration flags
    specified? Are you running with any special patches? Please specify?
    dbxml 2.4.13. No special patches.
    3. What Berkeley DB Version? Any optional configuration flags
    specified? Are you running with any special patches? Please Specify.
    bdb 4.6.21. No special patches.
    4. Processor name, speed and chipset?
    Intel Xeon CPU 5150 2.66GHz
    5. Operating System and Version?
Red Hat Enterprise Linux Release 4 Update 6
    6. Disk Drive Type and speed?
    Don't have that information
    7. File System Type? (such as EXT2, NTFS, Reiser)
    EXT3
    8. Physical Memory Available?
    4GB
    9. Are you using Replication (HA) with Berkeley DB XML? If so, please
describe the network you are using, and the number of Replicas.
    No
    10. Are you using a Remote Filesystem (NFS) ? If so, for which
    Berkeley DB XML/DB files?
    No
    11. What type of mutexes do you have configured? Did you specify
–with-mutex=? Specify what you find in your config.log, search
    for db_cv_mutex?
    None. Did not specify -with-mutex during bdb compilation
    12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
    Which compiler and version?
    Java 1.5
    13. If you are using an Application Server or Web Server, please
    provide the name and version?
Oracle Application Server 10.1.3.4.0
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in you DB_CONFIG file)
    Default.
    15. Please provide your Container Configuration Flags?
    final EnvironmentConfig envConf = new EnvironmentConfig();
    envConf.setAllowCreate(true); // If the environment does not
    // exist, create it.
    envConf.setInitializeCache(true); // Turn on the shared memory
    // region.
    envConf.setInitializeLocking(true); // Turn on the locking subsystem.
    envConf.setInitializeLogging(true); // Turn on the logging subsystem.
    envConf.setTransactional(true); // Turn on the transactional
    // subsystem.
    envConf.setLockDetectMode(LockDetectMode.MINWRITE);
    envConf.setThreaded(true);
    envConf.setErrorStream(System.err);
    envConf.setCacheSize(1024*1024*64);
    envConf.setMaxLockers(2000);
    envConf.setMaxLocks(2000);
    envConf.setMaxLockObjects(2000);
    envConf.setTxnMaxActive(200);
    envConf.setTxnWriteNoSync(true);
    envConf.setMaxMutexes(40000);
    16. How many XML Containers do you have? For each one please specify:
    One.
    1. The Container Configuration Flags
              XmlContainerConfig xmlContainerConfig = new XmlContainerConfig();
              xmlContainerConfig.setTransactional(true);
    xmlContainerConfig.setIndexNodes(true);
    xmlContainerConfig.setReadUncommitted(true);
    2. How many documents?
Every time the user logs in, the current XML document is loaded from
an Oracle database table and put into the Berkeley XML DB.
    The documents get deleted from XML DB when the Oracle application
    server container is stopped.
    The number of documents should start with zero initially and it
    will grow with every login.
    3. What type (node or wholedoc)?
    Node
    4. Please indicate the minimum, maximum and average size of
    documents?
The minimum is about 2MB and the maximum could be 20MB. The average
is mostly about 5MB.
    5. Are you using document data? If so please describe how?
    We are using document data only to save changes made
    to the application data in a web application. The final save goes
    to the relational database. Berkeley XML DB is just used to store
    temporary data since going to the relational database for each change
    will cause severe performance issues.
    17. Please describe the shape of one of your typical documents? Please
    do this by sending us a skeleton XML document.
    Due to the sensitive nature of the data, I can provide XML schema instead.
    18. What is the rate of document insertion/update required or
    expected? Are you doing partial node updates (via XmlModify) or
    replacing the document?
    The document is inserted during user login. Any change made to the application
    data grid or other data components gets saved in Berkeley DB. We also have
    an automatic save every two minutes. The final save from the application
    gets saved in a relational database.
    19. What is the query rate required/expected?
    Users will not be entering data rapidly. There will be lot of think time
    before the users enter/modify data in the web application. This is a pilot
    project but when we go live with this application, we will expect 25 users
    at the same time.
    20. XQuery -- supply some sample queries
    1. Please provide the Query Plan
    2. Are you using DBXML_INDEX_NODES?
    Yes.
    3. Display the indices you have defined for the specific query.
         XmlIndexSpecification spec = container.getIndexSpecification();
         // ids
         spec.addIndex("", "id", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         spec.addIndex("", "idref", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // index to cover AttributeValue/Description
         spec.addIndex("", "Description", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_SUBSTRING, XmlValue.STRING);
         // cover AttributeValue/@value
         spec.addIndex("", "value", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // item attribute values
         spec.addIndex("", "type", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // default index
         spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
         // save the spec to the container
         XmlUpdateContext uc = xmlManager.createUpdateContext();
         container.setIndexSpecification(spec, uc);
    4. If this is a large query, please consider sending a smaller
    query (and query plan) that demonstrates the problem.
    21. Are you running with Transactions? If so please provide any
    transactions flags you specify with any API calls.
Yes. READ_UNCOMMITTED in some and READ_COMMITTED in other transactions.
    22. If your application is transactional, are your log files stored on
    the same disk as your containers/databases?
    Yes.
    23. Do you use AUTO_COMMIT?
         No.
    24. Please list any non-transactional operations performed?
    No.
    25. How many threads of control are running? How many threads in read
    only mode? How many threads are updating?
    We use Berkeley XML DB within the context of a struts web application.
Each user logged into the web application will be running a BDB transaction
    within the context of a struts action thread.
    26. Please include a paragraph describing the performance measurements
    you have made. Please specifically list any Berkeley DB operations
    where the performance is currently insufficient.
We are clocking 10-12 seconds to load a document from BDB when
    five users are on the system.
    getContainer().getDocument(documentName);
    27. What performance level do you hope to achieve?
    We would like to get less than 5 seconds when 25 users are on the system.
    28. Please send us the output of the following db_stat utility commands
    after your application has been running under "normal" load for some
    period of time:
    % db_stat -h database environment -c
    % db_stat -h database environment -l
    % db_stat -h database environment -m
    % db_stat -h database environment -r
    % db_stat -h database environment -t
    (These commands require the db_stat utility access a shared database
    environment. If your application has a private environment, please
    remove the DB_PRIVATE flag used when the environment is created, so
    you can obtain these measurements. If removing the DB_PRIVATE flag
    is not possible, let us know and we can discuss alternatives with
    you.)
    If your application has periods of "good" and "bad" performance,
    please run the above list of commands several times, during both
    good and bad periods, and additionally specify the -Z flags (so
    the output of each command isn't cumulative).
    When possible, please run basic system performance reporting tools
    during the time you are measuring the application's performance.
    For example, on UNIX systems, the vmstat and iostat utilities are
    good choices.
    Will give this information soon.
    29. Are there any other significant applications running on this
    system? Are you using Berkeley DB outside of Berkeley DB XML?
    Please describe the application?
    No to the first two questions.
    The web application is an online review of test questions. The users
    login and then review the items one by one. The relational database
    holds the data in xml. During application load, the application
    retrieves the xml and then saves it to bdb. While the user
    is making changes to the data in the application, it writes those
    changes to bdb. Finally when the user hits the SAVE button, the data
    gets saved to the relational database. We also have an automatic save
every two minutes, which takes the BDB XML data and saves it to the relational
    database.
    Thanks,
    Madhav

Could it be that you simply have not set up indexes to support your query? If so, you could do some basic testing using the dbxml shell:
    milu@colinux:~/xpg > dbxml -h ~/dbenv
    Joined existing environment
    dbxml> setverbose 7 2
    dbxml> open tv.dbxml
    dbxml> listIndexes
    dbxml> query     { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
dbxml> queryplan { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
Verbosity will make the engine display some (rather cryptic) information on index usage. I can't remember where the output is explained; my feeling is that "V(...)" means the index is being used (which is good), but that observation may not be accurate. Note that some details in the setVerbose command could differ, as I'm using 2.4.16 while you're using 2.4.13.
    Also, take a look at the query plan. You can post it here and some people will be able to diagnose it.
    Michael Ludwig

  • Need help with almost completed plugin engine project

    Hi all,
    For a while now I have been working on a plugin engine. After a few iterations, the engine is similar to the Eclipse engine, in that plugins use extension points and extensions to allow contributions. Unlike the eclipse engine I have added the ability for plugins to fire events through the engine and other plugins can add listeners, all through the plugin.xml manifest. Dependencies are mostly handled automatically at plugin load time (when extensions get resolved to extension points, and listeners get resolved to events). For the case where a plugin needs to use classes from another plugin, dependencies are also allowed to be declared. Like the eclipse engine, activation of plugins occurs the first time a class is used within the plugin's classpath, OR a plugin can be activated after it is loaded.
What I need help with is testing, working on examples to provide with the engine project, and feedback/suggestions before we release the M1 build. I am asking for those that are interested in this type of work to volunteer to help where applicable and possible. I want to provide a solid plugin engine to the Java community, one that is easy to use, works well, and is pretty efficient in terms of resource usage and performance.
    Of particular interest to me right at the moment is dealing with multiple versions. As I see it, the engine will be used within an application and as such plugins would be distributed with a specific application version. The plugin version itself is more of a notification as to what version a plugin is, although I imagine it will help when updating at runtime as well.
    Just a few other details of the engine. It handles (or will soon) dynamic load, unload and reload of plugins at runtime. Plugins can be distributed in an archive file format, we call .par (Plugin ARchive), with additional plugin filename extensions configurable at runtime. The plugins can be developed and deployed in an expanded directory format as they are in Eclipse as well, or in the archive format. In the archive format they do not need to be unzipped when deployed, and they can contain embeded jar/zip libraries. The engine handles finding and creating classes directly out of the .par file at runtime.
    Multiple locations to find plugins are configurable before the engine starts, and even after it starts more could be added to allow additional locations to find plugins. URLs are supported, and soon the HTTP protocol will be supported so that plugins can be downloaded and installed at runtime.
    The project can be found at www.sourceforge.net/projects/genpluginengine. If you would like to get involved and help out, please sign up on the dev mail list and send an email to introduce yourself to the rest of the members on the list.
I'll also add that I am working on a Swing UI Framework built entirely from plugins. It provides a ready-to-launch UI application that developers can simply add their plugins to, extending various extension points of the framework to have menu items, toolbar buttons, status bar access, help and preferences dialog additions, file i/o choosers, tons of open-source components ready to use (or extend to add on to), and like Eclipse, hopefully... draggable window frames that can be dropped on any other frame to form a tabbed frame of windows. Some of this is a ways off, some is getting there now. Presently you can add menu items that do allow plugin activation when first clicked, so plugins can be loaded but not activated until needed. The Preference dialog works but is not completed, and a plugin that adds a plugin control panel to view all loaded plugins, activate them, load/unload/reload, view extension points, extensions, dependencies, etc is partially completed. The point is to allow a ready to run UI framework in Swing with an easy path for developers to quickly build applications with. If you are interested in this, when you join the mail list and introduce yourself, indicate that you are interested in this as well, as we need help with plugin development for it and would appreciate more help here too.
    Look forward to some replies.

Might I suggest setting up a project at a known project-site? I've seen your progress and questions posted here from time to time, but one of the drawbacks is that you have to fill each post with the entirety of your vision to explain what you're doing. That's a lot of text to read - and most folks will skip right over it.
On the other hand, a well-crafted, good-looking project web-site, with appropriate links and docs and vision statements, diagrams, etc. will have more likelihood of attracting volunteers. java.net and sourceforge.net are likely spots to set up shop. In addition, you get CVS and bug-tracking systems, which can be quite valuable in such a large-scale project where there are lots of pieces.

  • Need help with a query

    Hi,
    I need help with the following query. I want the balance (bal) with the latest exchange rate available.
    Sample table & data
with
FX_RATE as (
select 11 as id_date, 1 as id_curr, 47 as EXCH_rate from dual union
select 12, 1, 48 from dual union
select 13, 2, 54 from dual union
select 14, 2, 55 from dual union
select 15, 3, 56 from dual union
select 15, 2, 49 from dual),
TBL_NM as (
select 13 as p_date, 2 as p_curr, 200 as bal from dual union
select 14, 2, 200 from dual union
select 15, 2, 200 from dual union
select 16, 2, 200 from dual union
select 17, 2, 200 from dual union
select 11, 5, 100 from dual)
select p_date, p_curr, bal * nvl(exch_rate,1) from TBL_NM T LEFT OUTER JOIN FX_RATE F1 on (id_curr = p_curr and F1.id_date = T.p_Date)
In the above query, for p_date 16 & 17 and p_curr 2 it returns just the balance multiplied by the "default" exchange rate of 1. But I want the balance to use the latest available exchange rate, which is the rate at id_date 15.
I tried this but it returns the error ORA-01799: a column may not be outer joined to a subquery.
with
FX_RATE as (
select 11 as id_date, 1 as id_curr, 47 as EXCH_rate from dual union
select 12, 1, 48 from dual union
select 13, 2, 54 from dual union
select 14, 2, 55 from dual union
select 15, 3, 56 from dual union
select 15, 2, 49 from dual),
TBL_NM as (
select 13 as p_date, 2 as p_curr, 200 as bal from dual union
select 14, 2, 200 from dual union
select 15, 2, 200 from dual union
select 16, 2, 200 from dual union
select 17, 2, 200 from dual union
select 11, 5, 100 from dual)
select p_date, p_curr, bal * nvl(exch_rate,1) from TBL_NM T LEFT OUTER JOIN FX_RATE F1
on (id_curr = p_curr and F1.id_date = (select max(F2.id_date) from FX_RATE F2 where F2.id_curr = T.p_curr and F2.id_Date <=  T.p_date))
Please advise on how I can achieve this.
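One way around ORA-01799 is to move the rate lookup out of the join and into a scalar subquery in the select list; a minimal sketch against the same FX_RATE / TBL_NM data:
select t.p_date,
       t.p_curr,
       t.bal * nvl( (select f1.exch_rate
                     from   FX_RATE f1
                     where  f1.id_curr = t.p_curr
                     and    f1.id_date = (select max(f2.id_date)
                                          from   FX_RATE f2
                                          where  f2.id_curr = t.p_curr
                                          and    f2.id_date <= t.p_date)
                    ), 1) as converted_bal
from   TBL_NM t;
For p_date 16 and 17 with p_curr 2 this picks the rate dated 15 (49), and currencies with no rate at all still fall back to 1 through the NVL.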

The entire query would be like this... I have to incorporate it in here:
    CREATE MATERIALIZED VIEW MV_DUMMY
    BUILD IMMEDIATE
    REFRESH FORCE ON DEMAND
    AS
        SELECT T.ID_TSACTION_RELEASED                                                                                
        BAL.ID_CONTRACT_BALANCE                                                                                                                                                                                                                                                                           AS ID_CONTRACT_BALANCE,   
        T.N_REFERENCE_NUMBER                                                                                          
        T.INSTRUMENT_N_REFERENCE                                                                                      
        T.ITEM_NUMBER                                                                                                   
        T.EXTERNAL_SYSTEM_ID                                                                                            
        T.SEQUENCE_NUMBER                                                                                               
        T.ID_RELEASED_DATE                                                                                              
       ROUND(BAL.LC_AVAILABLE_BALANCE * NVL(FX1.EXCHANGE_RATE,1) / NVL(FX2.EXCHANGE_RATE,1) , 4)                           
        BAL.LIABILITY_BALANCE                                                                                              
        BAL.LIABILITY_BALANCE * NVL(FX3.EXCHANGE_RATE,1)                                                                   
        BAL.LIABILITY_CHANGE_USD                                                                                           
        BAL.MEMO_LIABILITY_BALANCE                                                                                         
        BAL.MEMO_LIABILITY_BALANCE * NVL(FX3.EXCHANGE_RATE,1)                                                              
        BAL.MEMO_LIABILITY_CHANGE_USD                                                                                      
        BAL.ORIGINAL_FACE_AMOUNT                                                                                           
        decode(T.TENOR_CODE,'Time','T','Sight','S','Split Sight Time','SST','Split Multiple Time','SMT',T.TENOR_CODE)
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN T.ID_LIABILITY_CIF
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN T.Id_Beneficiary
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN T.ID_Applicant
        END PRIMARY_CUSTOMER_ID,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN plbcif.EXTERNAL_SYSTEM_ID
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN PBCIF.EXTERNAL_SYSTEM_ID
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN pappcif.EXTERNAL_SYSTEM_ID
        END PRIMARY_CUSTOMER_EXT_SYS_ID,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN plbcif.CIF_NAME
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN PBCIF.CIF_NAME
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN pappcif.CIF_NAME
        END PRIMARY_CUSTOMER_NAME,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN plbbac.BAC
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN pbbac.BAC
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN pappbac.BAC
        END PRIMARY_CUST_BAC_CODE,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN nvl(plbmg.MARKET,'NOT APPLICABLE')
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN nvl(pbmg.MARKET,'NOT APPLICABLE')
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN nvl(pappmg.MARKET,'NOT APPLICABLE')
        END PRIMARY_CUST_MARKET,
        CASE
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ILC','IB','AIR','STG','NLC','NP')
          THEN nvl(plbmg.SUB_MARKET,'NOT APPLICABLE')
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('ELC','EB','XLC','XP','EC','LN','SCF-AR')
          THEN nvl(pbmg.SUB_MARKET,'NOT APPLICABLE')
          WHEN GTSPROD.PRODUCT_CATEGORY IN ('TLC','IC','OA','SCF-AP')
          THEN nvl(pappmg.SUB_MARKET,'NOT APPLICABLE')
        END PRIMARY_CUST_SUB_MARKET
      FROM F_TSACTION_RELEASED T
      LEFT OUTER JOIN D_BAC_CODE BAC
      ON (T.BAC_CODE_LIABILITY = BAC.BAC_CODE)
      LEFT OUTER JOIN REF_BAC_SORT_CODE REF_BAC
      ON (T.BAC_CODE_LIABILITY = REF_BAC.BAC)
      LEFT OUTER JOIN F_CONTRACT_BALANCE BAL
      ON (T.ID_TSACTION_RELEASED = BAL.ID_TSACTION_RELEASED)
      LEFT OUTER JOIN D_MARKET_SEGMENT MG
      ON (T.ID_MARKET_SEGMENT = MG.ID_MARKET_SEGMENT)
      LEFT OUTER JOIN D_DATE DT
      ON (DT.ID_DATE = T.ID_RELEASED_DATE)
      LEFT OUTER JOIN D_DATE DB
      ON (DB.ID_DATE = BAL.ID_RELEASED_DATE)
      LEFT OUTER JOIN D_PROCESSING_UNIT PU
      ON (PU.ID_PROCESSING_UNIT = T.ID_PROCESSING_UNIT)
      LEFT OUTER JOIN D_BIR_PRODUCT BIRPROD
      ON (BIRPROD.ID_BIR_PRODUCT=T.ID_BIR_PRODUCT)
      LEFT OUTER JOIN D_GTS_PRODUCT_TYPE GTSPROD
      ON (GTSPROD.ID_GTS_PRODUCT_TYPE= T.ID_GTS_PRODUCT_TYPE)
      LEFT OUTER JOIN D_GTS_TSACTION_TYPE GTST
      ON (GTST.ID_GTS_TSACTION_TYPE = T.ID_GTS_TSACTION_TYPE)
      LEFT OUTER JOIN D_CURRENCY CCYT
      ON (CCYT.ID_CURRENCY = T.ID_TSACTION_CURRENCY)
      LEFT OUTER JOIN d_cif lcif
      ON (lcif.id_cif = T.id_liability_cif)
      LEFT OUTER JOIN d_cif lbcif
      ON (lbcif.id_cif = bal.id_liability_cif)
      LEFT OUTER JOIN d_cif bcif
      ON (bcif.id_cif = T.id_BENEFICIARY)
      LEFT OUTER JOIN d_cif icif
      ON (icif.id_cif = T.id_ISSUING_BANK)
      LEFT OUTER JOIN d_cif acif
      ON (acif.id_cif = T.id_ADVISING_BANK)
      LEFT OUTER JOIN d_cif appcif
      ON (appcif.id_cif = T.id_applicant)
      LEFT OUTER JOIN d_state astate
      ON (astate.id_state = acif.id_state)
      LEFT OUTER JOIN d_state bstate
      ON (bstate.id_state = bcif.id_state)
      LEFT OUTER JOIN d_state lstate
      ON (lstate.id_state = lcif.id_state)
      LEFT OUTER JOIN d_state lbstate
      ON (lbstate.id_state = lbcif.id_state)
      LEFT OUTER JOIN d_state istate
      ON (istate.id_state = icif.id_state)
      LEFT OUTER JOIN d_state appstate
      ON (appstate.id_state = appcif.id_state)
      LEFT OUTER JOIN D_TSACTION_SOURCE TSrc
      ON (T.ID_TSACTION_SOURCE = TSrc.ID_TSACTION_SOURCE)
      LEFT OUTER JOIN D_COUNTRY LCTRY
      ON (LCTRY.ID_COUNTRY = lcif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY LBCTRY
      ON (LBCTRY.ID_COUNTRY = lbcif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY BCTRY
      ON (BCTRY.ID_COUNTRY = bcif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY ICTRY
      ON (ICTRY.ID_COUNTRY = icif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY ACTRY
      ON (ACTRY.ID_COUNTRY = acif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY APPCTRY
      ON (APPCTRY.ID_COUNTRY = appcif.ID_COUNTRY)
      LEFT OUTER JOIN D_COUNTRY PCTRY
      ON (PCTRY.ID_COUNTRY = T.ID_PRESENTER_COUNTRY)
      LEFT OUTER JOIN D_LOCATION LOC
      ON (LOC.ID_LOCATION = T.ID_PROCESSING_LOCATION)
      LEFT OUTER JOIN D_CURRENCY BCCYT
      ON (BCCYT.ID_CURRENCY = BAL.ID_LIABILITY_CURRENCY)
      LEFT OUTER JOIN D_CURRENCY BALCYT
      ON (BALCYT.ID_CURRENCY = BAL.ID_BALANCE_CURRENCY)
      LEFT OUTER JOIN d_liability_type li
      ON (li.id_liability_type = BAL.id_liability_type)
      LEFT OUTER JOIN d_cif plbcif
      ON (plbcif.id_cif = T.id_liability_cif)
      LEFT OUTER JOIN REF_BAC_SORT_CODE plbbac
      ON (plbcif.bac_code=plbbac.bac)
      LEFT OUTER JOIN D_MARKET_SEGMENT plbmg
      ON (plbbac.SORT_CODE=plbmg.MARKET_SEGMENT)
      LEFT OUTER JOIN d_cif pbcif
      ON (pbcif.id_cif = T.id_BENEFICIARY)
      LEFT OUTER JOIN REF_BAC_SORT_CODE pbbac
      ON (pbcif.bac_code=pbbac.bac)
      LEFT OUTER JOIN D_MARKET_SEGMENT pbmg
      ON (pbbac.SORT_CODE=pbmg.MARKET_SEGMENT)
      LEFT OUTER JOIN d_cif pappcif
      ON (pappcif.id_cif = T.id_applicant)
      LEFT OUTER JOIN REF_BAC_SORT_CODE pappbac
      ON (pappcif.bac_code=pappbac.bac)
      LEFT OUTER JOIN D_MARKET_SEGMENT pappmg
      ON (pappbac.SORT_CODE=pappmg.MARKET_SEGMENT)
      LEFT OUTER JOIN D_CURRENCY LOCALCYT
      ON (LOCALCYT.alpha_code = PU.local_ccy)
      LEFT OUTER JOIN D_BRANCH Branch              
      ON (T.ID_BRANCH  = Branch.ID_BRANCH )
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX1
      ON (BAL.ID_BALANCE_CURRENCY = FX1.ID_CURRENCY and FX1.ID_DATE = (select max(FX11.ID_DATE) from F_USD_FX_RATE_HISTORY FX11 where BAL.ID_BALANCE_CURRENCY = FX11.ID_CURRENCY and FX11.ID_DATE <= BAL.id_released_date))
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX2
      ON (LOCALCYT.ID_CURRENCY = FX2.ID_CURRENCY and FX2.ID_DATE = (select max(FX22.ID_DATE) from F_USD_FX_RATE_HISTORY FX22 where LOCALCYT.ID_CURRENCY = FX22.ID_CURRENCY and FX22.ID_DATE <= BAL.id_released_date))
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX3
      ON (BAL.ID_LIABILITY_CURRENCY = FX3.ID_CURRENCY and FX3.ID_DATE = (select max(FX33.ID_DATE) from F_USD_FX_RATE_HISTORY FX33 where BAL.ID_LIABILITY_CURRENCY = FX33.ID_CURRENCY and FX33.ID_DATE <= BAL.id_released_date))

Note the lines

    ROUND(BAL.MN_AVAILABLE_BALANCE * NVL(FX1.EXCHANGE_RATE,1) / NVL(FX2.EXCHANGE_RATE,1) , 4)
    BAL.LIABILITY_BALANCE * NVL(FX3.EXCHANGE_RATE,1)
    BAL.MEMO_LIABILITY_BALANCE * NVL(FX3.EXCHANGE_RATE,1)

And
    LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX1
      ON (BAL.ID_BALANCE_CURRENCY = FX1.ID_CURRENCY and FX1.ID_DATE = (select max(FX11.ID_DATE) from F_USD_FX_RATE_HISTORY FX11 where BAL.ID_BALANCE_CURRENCY = FX11.ID_CURRENCY and FX11.ID_DATE <= BAL.id_released_date))
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX2
      ON (LOCALCYT.ID_CURRENCY = FX2.ID_CURRENCY and FX2.ID_DATE = (select max(FX22.ID_DATE) from F_USD_FX_RATE_HISTORY FX22 where LOCALCYT.ID_CURRENCY = FX22.ID_CURRENCY and FX22.ID_DATE <= BAL.id_released_date))
      LEFT OUTER JOIN F_USD_FX_RATE_HISTORY FX3
      ON (BAL.ID_LIABILITY_CURRENCY = FX3.ID_CURRENCY and FX3.ID_DATE = (select max(FX33.ID_DATE) from F_USD_FX_RATE_HISTORY FX33 where BAL.ID_LIABILITY_CURRENCY = FX33.ID_CURRENCY and FX33.ID_DATE <= BAL.id_released_date))

This is where I need to incorporate the change.
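For reference, the pattern those FX joins implement is "take the latest USD rate on or before the balance's released date, and fall back to 1 when no rate row exists". Reduced to a stand-alone sketch, using only F_CONTRACT_BALANCE and F_USD_FX_RATE_HISTORY columns that already appear above (the *_USD output aliases are only illustrative, not part of the original query):

    SELECT bal.liability_balance,
           bal.memo_liability_balance,
           -- convert with the most recent rate on or before the released date;
           -- NVL(..., 1) leaves the amount unconverted when no rate row exists
           bal.liability_balance      * NVL(fx.exchange_rate, 1) AS liability_balance_usd,
           bal.memo_liability_balance * NVL(fx.exchange_rate, 1) AS memo_liability_balance_usd
    FROM   f_contract_balance bal
    LEFT OUTER JOIN f_usd_fx_rate_history fx
           ON  fx.id_currency = bal.id_liability_currency
           AND fx.id_date     = (SELECT MAX(h.id_date)
                                 FROM   f_usd_fx_rate_history h
                                 WHERE  h.id_currency = bal.id_liability_currency
                                 AND    h.id_date    <= bal.id_released_date);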

  • Hi.. need help with a query

    hello guys :)
    I need help with this exercise:
    Find the most common cooking method in recipes that contain tomatoes.
    the 4 tables:
    t_recipes
    --recipe_no
    --recipe_name
    t_products
    --product_no
    --product_name [tomatoes, cucumbers, onions]
    t_integration
    --product_no
    --recipe_no
    t_cooking_mode
    --recipe_no
    --mode [Frying, baking]
    I am really lost :S
    thank you

    Hi,
    851072 wrote:
    Thank you for your comment :)
    But... I didn't understand the first part
    Sorry, what part is that? You probably understood "Welcome to the forum!" It looks like you understood most of what I said, perhaps before I said it. You seem to be on the right track.
    Unlike talking to a co-worker in the next cube, exchanging messages with someone on this forum takes a fair amount of time, so it's worth the time it takes to explain things very clearly, more than you would in conversation. Post all the relevant information, and say exactly what the problem is.
    amm i tried to do this:
    select t_recipes.recipe_name, count(t_cooking_mode.cooking_mode)
    from (t_cooking_mode INNER JOIN t_integration ON t_cooking_mode.recipe_no = t_integration.recipe_no )
    INNER JOIN t_products ON t_integration.product_no = t_products.product_no)
    WHERE t_products.product_name = 'tomatoes'
    GROUP BY t_recipes.recipe_name
    help?
    Please help by posting whatever you know about the problem. For example, if there's an error message, post the complete error message, including the line number.
    Help the people who want to help you by formatting your code and making it easy to read and understand. This will help you, too.
    For example, here's exactly what you posted, with only the white-space changed to make parallel items, such as parentheses, line up nicely:
    select        t_recipes.recipe_name
    ,       count(t_cooking_mode.cooking_mode)
    from             (          t_cooking_mode
                INNER JOIN      t_integration      ON t_cooking_mode.recipe_no = t_integration.recipe_no
    INNER JOIN      t_products      ON t_integration.product_no = t_products.product_no
    WHERE       t_products.product_name      = 'tomatoes'
    GROUP BY  t_recipes.recipe_name
    When you format your code like this, it can be very easy to spot errors like unbalanced parentheses. In this case, the ')' right before the WHERE clause has no matching '('. You don't need to use any parentheses at all in the FROM clause. You can simply say:
    FROM     t_cooking_mode
    JOIN      t_integration      ON t_cooking_mode.recipe_no = t_integration.recipe_no
    JOIN      t_products      ON t_integration.product_no = t_products.product_no
    WHERE     ...
    INNER JOIN is the default kind of join, which makes sense, since the majority of all joins are inner joins. I usually say JOIN (instead of INNER JOIN) because it makes the code more compact and easier to read (at least for me), but I won't be maintaining your code, so do what's best for you.
    What are you trying to find in this problem? Is it a recipe name or a cooking mode? If it's a cooking mode, then why do you have t_recipes.recipe_name in the SELECT (and GROUP BY) clause? Shouldn't you be using some other column, from some other table?
    If I understand the problem correctly, the t_recipes table is not needed in this problem. You won't necessarily use every table in every query.
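    For illustration, here is a sketch of one way to put it together, assuming the cooking-mode column is called cooking_mode (as in your attempt; your table listing calls it mode) and that you want the single most common mode. The inline view counts how often each mode occurs among tomato recipes and orders the modes by that count; ROWNUM then keeps only the top one:
    SELECT cooking_mode
    FROM   ( SELECT   t_cooking_mode.cooking_mode
             FROM     t_cooking_mode
             JOIN     t_integration ON t_cooking_mode.recipe_no = t_integration.recipe_no
             JOIN     t_products    ON t_integration.product_no = t_products.product_no
             WHERE    t_products.product_name = 'tomatoes'
             GROUP BY t_cooking_mode.cooking_mode
             ORDER BY COUNT(*) DESC        -- most frequent mode first
           )
    WHERE  ROWNUM = 1;                     -- keep only the top row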
    When you post formatted text on this site, type these 6 characters:
    {code}
    (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.

  • Need help with a currently "in-use" form we want to switch to Adobe's hosting service

    Hi, I am in desperate need of help with some issues concerning several forms which we currently use a paid third party (not Adobe) to host and "re-distribute through email"...Somehow I got charged $14.95 for YOUR service, (signed up for a trial, but never used it)..and now I am paying for a year of use of the similar service which Adobe is in control of.  I might want to port my form distribution through Adobe in the hopes of reducing the errors, problems and hassles my customers are experiencing when some of them push our  "submit button". (and I guess I am familiar with these somewhat from reading what IS available in here, and I also know that, Adobe is working to alleviate some of these " submit"  issues, so let's don't start by going backwards, here) I need solutions now for my issues or I can leave it as is, If Adobe's solution will be no better for my end users...
    We used FormsCentral to code these forms and it works for the most part (if the end-user can co-operate, and thats iffy, sometimes), but I need help with how to make it go through your servers (and not the third party folks we use now), Not being cruel or racist here, but your over the phone "support techs" are about horrible & I cannot understand them or work with any of them, so I would definitely need someone who speaks English and can understand the nuances of programming these forms, to please contact me back. (Sorry, but both those attributes will be required to be able to help me, so, no "newbie-interns" or first week trainees are gonna cut it).... If you have anyone who fits the bill on those items and would be willing to help us, please contact me back at your earliest convenience. If we have to communicate here, I will do that & I can submit whatever we need to & to whoever we need to.
    I need to get this right and working for the majority of my users and on any platform and OS.
    You may certainly call me to talk about this, and I have given my number numerous times to your (expletive deleted) time wasting - recording message thingy. So, If it's not available look it up under [email protected]
    (and you will probably get right to me, unlike my and I'm sure most other folks',  "Adobe phone-in experiences")
    Thank You,
    Michael Corman
    VinylCouture
    Phenix City, Alabama  36869

    Well, thanks for writing back...just so you know...I started using Adobe products in 1987, ...yeah...back then...like Illustrator 1 & 9" B&W Macs ...John Warnock's Helvetica's....stuff like that...8.5 x 11 LaserWriters...all that good stuff...I still have some of it working on a mac...much of it was stuff I bought. some stuff I did not...I'm not a big fan of this "cloud" thing Adobe has foisted upon the creatives of the world...which I'm sure you can tell...but the functionality and usefulness of your software can not be disputed, so feel free to do whatever we will continue to pay for, ...I am very impressed with CC PS on the 64 bit PC and perhaps I will end up paying you the stipend that you demand for the other services.
    So  I guess that brings us to our problem.. a few years back and at the height of the recession and near bankruptcy myself,  I was damn lucky and hit on something and began a small arts and crafts supply service to sell my products online to a very "niche market" ...I had a unique product and still sell that product (plus others) online...My website is www.vinylcouture.com...Strange? Yes...but there is a market it seems, for everything now, and this is the market I service...Catagorically, these are 99%+ women that use these "adhesive, sticky backed vinyl products"  to make different "craft items" that are just way too various and numerous to go into... generally older women, women who are computer illiterate for the most part...and all this is irrelevant to my problem, but I want you to have every bit of background on this and especially the demographic we are dealing with, so we can get right to the meat of the problem.
    OK...So about two years ago, I decided to offer a "plain sheet" product of a plain colored "stick back" vinyl... it is available in multiple quantities of packs ( like 5 pieces, 10 pieces, 15 pieces, in a packi  & so on)...and if you are still on my site.. go to any  "GO RIGHT TO OUR ORDER PAGE"  button, scroll down a little...and then to the "PLAIN VINYL" section...you will see the Weebly website order process.) You can back out from here, I think,..but, anyway this product is available in 63 colors + or - a few. So then the problem is,  how do they select their individual colors within that (whatever) pack?... .
    So my initial idea was to enable a "selection form" for these "colors" that would be transmitted to me via email as 'part" of the "order process".. We tried getting our customers to submit a  " a list" ( something my competitiors still do, lol, poor bastards)......but that..is just unbelievable..I can't even begin to tell you what a freakin' nightmare that was...these people cannot even count to 10, much less any higher... figuring out what colors to list and send me... well, lets just say, it wasn't working......I had to figure out a better way...Something had to be done.
    So after thinking this all out,  and yeah...due to my total ignorance, i figured that we could make a form with Live Cycle Designer (Now Forms Central)...(back then something that was bundled with Adobe Acrobat Pro), I believe, and thats what this thing was authored in... and it would be all good...LOL!
    Well not so simple...as you well know, Adobe Acrobat would NOT LET YOU EMAIL anything from itself.....it just wouldn't work (and I know why, and all that hooey), but not being one to take NO for answer,.I started looking for a way to make my little gizmo work.. So I found this company that said they can "hijack" (re-direct actually) the request to email, bypass the wah-wah, and re-transmit it to the proper parties.....for less than $100 a year,  I think...its called http://pdf-fillableforms.com/.
    A nice gentleman named Joseph Silva helped us program the thing to go to his servers and back out. Please dont hassle them...I need them...for now..it basically does work...try it...you should get back a copy of the form that you filled out...good luck however,  if you're on MAC OSX or similar...
    I have included a copy of both of our forms (and feel free to fill it out and play with it)...just put test somewhere on it...(and you must include YOUR email or it will balk)..they are supposed to be mostly identical, except one seems to be twice as large....generating a 1.7 meg file upon submission, while the other one only generates a 600K file or so...thats another issue for another day or maybe you can advise on that also...
    OK so far so good......In our shop, once Grandma buys a 10 pack (or whatever), Only then she gets to the link on her receipt page ro the relevant "selection form" ,(this prevents "Filling and Sending"  with "no order" and "no payment", another early problem we had)... which they can click on and it will usually download and open up on their device if all goes well...Then our little form is supposed to be fillable and is supposed to ADD UP all the quantities, so grandma knows how many she is buying and so forth right on the fly,  and even while she changes her mind..., and IT'S LARGE so grandma can see it, and then it TOTALS it all up for them, ( cause remember, they can NOT add)..,  except there is a programming bug (mouse-click should be a mouse-up probably or something..) which makes you click in the blank spaces to get to a correct TOTAL...about 70-80% of our customers can enable all these features and usually the process completes without problems for them especially on PC's running Windows OS and Acrobat Reader X or XI...at least for most... Unfortunately it is still not the "seamless process" I would like or had envisioned for the other folks out there that do have trouble using our form....  Many folks report to us the following issues that we know of.  First of all it takes too much time to load up...We know its HUGE...is there anyway that you can see, to streamline this thing? I would love for it to be more compact...this really helps on the phones and pads as I'm sure you well know.
    Some just tell us,"it WON'T work"....I believe this is because they are totally out of it and dont even have Adobe Reader on their machine, & don't know how to get it ( yes, we provide the links).....or it's some ancient version....no one can stop this one...
    It almost always generates some kind ( at least one time)  of "error message" which we do warn them about..., telling one,  basically that "Acrobat doesnt even like this happening at all, and it could be detrimental to ones computer files", blah-blah...(this freaks grandma out really bad)...& usually they end up not even trying to send it...  and then I get calls that even you wouldn't believe...& If they DO nut up and push the Red "Submit Form" button, it will usually send the thing to us (and also back to them at the "required email address" they furnished on the form, thats what the folks at the "fillable forms place" do) so, if it's performing it's functions, why it is having to complain?. What are we doing wrong?....and how can I fix it?...Will re-compiling it or saving it as a newer version of "FormsCentral" correct any of these problems ?
    Ok, so that should keep you busy for a minute and we can start out with those problems...but the next thing is, how can I take advantage of YOUR re-direct & hosting services?, And will it get rid of the error messages, and the slowness, and the iOS incompatibilities ? (amazingly,  the last iOS Reader version worked almost OK.. but the newest version doesnt seem to work with my form on my iphone4)  If it will enable any version of the iOS to send my form correctly and more transparently, then it might be worth the money...$14.95 a MONTH you say. hmmmmm...Better be good.
    Another problem is, that I really don't need 5000 forms a month submitted. I think its like 70-100 or less....Got any plans for that?  Maybe I'm just not BIG ENOUGH to use Adobe's services, however in this case, I really don't care whose I do use as long as the product works most correctly for my customers as well as us. Like I said, If I'm doing the best I can, I won't change anything, and still use the other third party, If Adobe has a better solution, then i'm all for that as well. In the meantime, Thanks for any help you can provide on this...
    Michael Corman
    VinylCouture.com
    (706) 326-7911
