Rows between 101 and 150

Hi,
I want rows 101 through 150 of the result set, for all columns.
Select * from MQ where rownum between 101 and 150
The above query is not working.
I also tried this query, but it is not working either:
Select * from MQ where (select rownum from MQ where rownum between 101 and 150)
Here I am getting only the ROWNUM values.
Please help

http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/pseudocolumns009.htm#SQLRF00255
Select  *  from
    (select   rownum   my_rownum
             , MQ.*
       from MQ)
where my_rownum between 101 and 150
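(Why the first attempt returns no rows: ROWNUM is assigned to a row as it passes the WHERE clause, so the first candidate row is always ROWNUM 1, and a predicate like "rownum between 101 and 150" can never become true. Hence the inline view above, which materializes ROWNUM before filtering.)
If the 50 rows also need a deterministic order, a variation using ROW_NUMBER() may help. This is only a sketch; the sort column some_col is an assumption, not part of the original table:
SELECT *
FROM  (SELECT mq.*,
              ROW_NUMBER() OVER (ORDER BY mq.some_col) AS rn  -- some_col is hypothetical
       FROM   MQ mq)
WHERE rn BETWEEN 101 AND 150;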

Similar Messages

  • Select row between x and y

    Hi,
;WITH CTE AS
( SELECT tblFuelCostDetails.TripId, tblFuelCostDetails.PurchaseDate, tblPetrolPumps.PumpName as Pump
  FROM tblPetrolPumps INNER JOIN
       tblFuelCostDetails ON tblPetrolPumps.PumpId = tblFuelCostDetails.PumpId
)
SELECT distinct TripID, PurchaseDate, DENSE_RANK() OVER (ORDER BY TripID) AS RowNumber,
    STUFF((SELECT ', '+Pump FROM CTE P2 WHERE P1.TripID = P2.TripID AND P1.PurchaseDate = P2.PurchaseDate
    FOR XML PATH('')),1,1,'') AS Pump
    FROM CTE P1
This query's output is:
TripID  PurchaseDate  RowNumber  Pump
73      1/15/2014     1          N.R, Masranga
74      1/16/2014     2          N.R
75      1/12/2014     3          JK
76      1/13/2014     4          YUJ
77      1/14/2014     5          UYI
How can I get the rows with RowNumber between 2 and 4?

Please post DDL, so that people do not have to guess what the keys, constraints, Declarative Referential Integrity, data types, etc. in your schema are. Learn how to follow ISO-11179 data element naming conventions and formatting rules (you did not). Temporal data should use ISO-8601 formats (you failed this basic IT standard! Shame on you!). Code should be in Standard SQL as much as possible and not local dialect.
Did you know that putting “tbl-” is such a bad coding practice it has a name? You actually Tibbled!
>> I am trying to load time dimension in a specific order <<
A table has no ordering by definition. Let me say that again, so you will hear this: A table has no ordering by definition. You have completely missed the foundations of RDBMS.
Your code is also bad. Why did you rename “pump_name” (a valid data element name) to “pump” (vague and invalid)? Why do you think naming a CTE “CTE” is informative?
Why are you destroying First Normal Form with XML??? Are you smarter than Dr. Codd? I would fire you for this kind of sloppy coding.
Do you know how rare SELECT DISTINCT is in a good schema? We usually use a key in the queries so there are no redundant rows. But since we have no DDL and no sample data, we cannot help you clean up this mess. Your query is simply this:
    SELECT F.trip_nbr, F.purchase_date, P.pump_name 
      FROM Petrol_Pumps AS P, Fuel_Cost_Details AS F
     WHERE P.pump_nbr = F.pump_nbr;
    Now pass the result set to a presentation layer, as per any tiered architecture. Since we have no logical rule for the subset you want, adding a DENSE_RANK() may not do what you want. Why order by trip_nbr? This is a tag number, not part of a sequence. 
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL
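To answer the original question directly (rows with RowNumber between 2 and 4): a window function cannot be referenced in the WHERE clause of the same query block, so one approach is to put the ranked result into a second CTE and filter on it. This is only a sketch built on the poster's own CTE (table and column names as in the original post):
;WITH CTE AS
( SELECT f.TripId, f.PurchaseDate, p.PumpName AS Pump
  FROM tblPetrolPumps p
  INNER JOIN tblFuelCostDetails f ON p.PumpId = f.PumpId
),
Ranked AS
( SELECT DISTINCT TripID, PurchaseDate,
         DENSE_RANK() OVER (ORDER BY TripID) AS RowNumber,
         STUFF((SELECT ', ' + Pump FROM CTE P2
                WHERE P1.TripID = P2.TripID AND P1.PurchaseDate = P2.PurchaseDate
                FOR XML PATH('')), 1, 1, '') AS Pump
  FROM CTE P1
)
SELECT TripID, PurchaseDate, RowNumber, Pump
FROM Ranked
WHERE RowNumber BETWEEN 2 AND 4;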

  • Difference between 101 and 102

    hi gurus,
    I have created an Import Purchasing Order .
Then I did a Goods Receipt (MIGO-101) for this PO.
Then I did the cancellation of this Goods Receipt (MIGO-102).
I am finding differences in the Amount posted in foreign currency between MIGO-101 and MIGO-102.
While displaying in local currency, the Amount posted for both transactions is exactly the same, but in external currency there is a big difference.
For example:
101 stock amount 91.000 EUR
102 stock amount 183.000 EUR
In USD the amount is the same.
I know that the exchange rate from EUR to USD is different from the rate from USD to EUR, but the difference that I saw is very large.
    Regards,
    vr

    Hi,
    what I found out is that the notebook itself is the same. But with the G20-102 you get some more cables:
    D-Connector - Scart Cable
    Video Monitor Cable
    Small Antenna Cable
    Antenna Cable
    Bye

  • Drag and Drop rows between tables in JSP

I have to convert a Swing application into a web project. One of the screens in my Swing application uses drag and drop of records between two tables. The key is that when a row is picked from one table and dropped in the other, values get allocated to each column accordingly. I need to bring this functionality into a JSP. There is no constraint against using JavaScript. Please help me asap.
    Regards,
    KK

    Yes it is possible, but not so easy. If you'll be at OOW this year, you can come to my session (S301752, Monday 13:00-14:00) - I will show something similar there. If you're not so lucky, you can read the excellent blogpost of my colleague at [http://rutgerderuiter.blogspot.com/2008/07/drag-planboard.html]
    Cheers
    Roel

  • ROWS BETWEEN 12 PRECEDING AND CURRENT ROW

    I have a report with the last 12 months and for each month, I have to show the sales sum of the last 12 months. For example, for 01/2001, I have to show the sales sum from 01/2000 to 12/2000.
    I have tried this:
    SUM(sales)
    OVER (PARTITION BY product, channel
    ORDER BY month ASC
    ROWS BETWEEN 12 PRECEDING AND CURRENT ROW)
    The problem is: this calculation only considers the data that are in the report.
    For example, if my report shows the data from jan/2001 to dec/2001, then for the first month the calculation result only returns the result of jan/2001, for feb/2001, the result is feb/2001 + jan/2001.
    How can I include the data of the last year in my calculation???

    Hi,
    I couldn't solve my problem using Discoverer Plus functions yet...
    I used this function to return the amount sold last year:
    SUM("Amount Sold SUM 1") OVER(PARTITION BY Products.Prod Name ORDER BY TO_DATE(Times."Calendar Year",'YYYY') RANGE BETWEEN INTERVAL '1' YEAR PRECEDING AND INTERVAL '1' YEAR PRECEDING )
The result was: it worked perfectly well when I had no condition; it showed three years (1998, 1999, 2000) and two data points (Amount Sold, Amount Sold Last Year). The "Amount Sold Last Year" was null for 1998, as there is no data for 1997.
Then I created a condition to filter the year (Times."Calendar Year" = 1999), because I must show only one year in my report. Then I got the "Amount Sold" with the correct result and the "Amount Sold Last Year" with null values. As I do have data for 1998, this was not the result I expected.
    Am I doing something wrong??
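A common workaround (a sketch, not from the thread; sales_fact and its columns product, channel, month_start, sales are assumed names): compute the moving sum over the full table in an inline view, and apply the report's date filter only in the outer query, so the window can still see the previous year's rows.
SELECT product, channel, month_start, last_12m_sales
FROM  (SELECT product, channel, month_start,
              SUM(sales) OVER (PARTITION BY product, channel
                               ORDER BY month_start
                               RANGE BETWEEN INTERVAL '12' MONTH PRECEDING
                                     AND     INTERVAL '1'  MONTH PRECEDING) AS last_12m_sales
       FROM   sales_fact)                                -- window sees all months
WHERE month_start BETWEEN DATE '2001-01-01' AND DATE '2001-12-01';  -- filter applied outside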

  • Error ORA-01841: (full) year must be between -4713 and +9999, and not be 0

    Hi Experts,
    I seem to be getting the error "Error ORA-01841: (full) year must be between -4713 and +9999, and not be 0" when my dates are in the year 2000. Here's my SQL:
    DROP TABLE PER_ALL_ASSIGNMENTS_M_XTERN;
    create table PER_ALL_ASSIGNMENTS_M_XTERN(
    PERSON_NUMBER                    VARCHAR2(30 CHAR),
    ASSIGNMENT_NUMBER VARCHAR2(30 CHAR),
    EFFECTIVE_START_DATE                DATE,
    EFFECTIVE_END_DATE           DATE,
    EFFECTIVE_SEQUENCE           NUMBER(4),
    ASS_ATTRIBUTE_CATEGORY           VARCHAR2(30 CHAR),
    ASS_ATTRIBUTE1 VARCHAR2(150 CHAR),
    ASS_ATTRIBUTE_NUMBER20 NUMBER,
    ASS_ATTRIBUTE_DATE1 DATE,
    ASS_ATTRIBUTE_DATE2 DATE,
    ASS_ATTRIBUTE_DATE3 DATE,
    ASS_ATTRIBUTE_DATE4 DATE,
    ASS_ATTRIBUTE_DATE5 DATE,
    ASS_ATTRIBUTE_DATE6 DATE,
    ASS_ATTRIBUTE_DATE7 DATE,
    ASS_ATTRIBUTE_DATE8 DATE,
    ASS_ATTRIBUTE_DATE9 DATE,
    ASS_ATTRIBUTE_DATE10 DATE,
    ASS_ATTRIBUTE_DATE11 DATE,
    ASS_ATTRIBUTE_DATE12 DATE,
    ASS_ATTRIBUTE_DATE13 DATE,
    ASS_ATTRIBUTE_DATE14 DATE,
    ASS_ATTRIBUTE_DATE15 DATE,
    ASG_INFORMATION_CATEGORY           VARCHAR2(30 CHAR),
    ASG_INFORMATION1 VARCHAR2(150 CHAR),
    ASG_INFORMATION_NUMBER20 NUMBER,
    ASG_INFORMATION_DATE1 DATE,
    ASG_INFORMATION_DATE2 DATE,
    ASG_INFORMATION_DATE3 DATE,
    ASG_INFORMATION_DATE4 DATE,
    ASG_INFORMATION_DATE5 DATE,
    ASG_INFORMATION_DATE6 DATE,
    ASG_INFORMATION_DATE7 DATE,
    ASG_INFORMATION_DATE8 DATE,
    ASG_INFORMATION_DATE9 DATE,
    ASG_INFORMATION_DATE10 DATE,
    ASG_INFORMATION_DATE11 DATE,
    ASG_INFORMATION_DATE12 DATE,
    ASG_INFORMATION_DATE13 DATE,
    ASG_INFORMATION_DATE14 DATE,
ASG_INFORMATION_DATE15 DATE
)
organization external
    ( default directory APPLCP_FILE_DIR
    access parameters
    ( records delimited by newline skip 1
         badfile APPLCP_FILE_DIR:'PER_ALL_ASSIGNMENTS_M_XTERN.bad'
    logfile APPLCP_FILE_DIR:'PER_ALL_ASSIGNMENTS_M_XTERN.log'
    fields terminated by ',' OPTIONALLY ENCLOSED BY '"'
    (PERSON_NUMBER,     
    ASSIGNMENT_NUMBER,
    EFFECTIVE_START_DATE CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    EFFECTIVE_END_DATE CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    EFFECTIVE_SEQUENCE,
    ASS_ATTRIBUTE_CATEGORY,
    ASS_ATTRIBUTE1,
    ASS_ATTRIBUTE_NUMBER20,
    ASS_ATTRIBUTE_DATE1 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE2 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE3 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE4 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE5 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE6 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE7 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE8 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE9 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE10 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE11 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE12 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE13 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE14 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASS_ATTRIBUTE_DATE15 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_CATEGORY,
    ASG_INFORMATION1,
    ASG_INFORMATION_NUMBER20,
    ASG_INFORMATION_DATE1 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE2 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE3 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE4 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE5 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE6 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE7 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE8 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE9 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE10 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE11 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE12 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE13 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
    ASG_INFORMATION_DATE14 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
ASG_INFORMATION_DATE15 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY"
)
)
location ('PER_ALL_ASSIGNMENTS_M.csv')
)
REJECT LIMIT UNLIMITED;
    ...and getting errors when data looks like the following:
    E040101,EE040101,*1-Aug-2000*,31-Dec-4712,1,,NDVC,YES,DE,SFC,N,STIP Plan - Pressure,,,,,,,E040101,,,,2080,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,5,,,,,,,,,,,,31113,31113,31113,31113,31113,31113,,,1-Jan-2012,31-Dec-2012,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
    ...error message:
    error processing column EFFECTIVE_START_DATE in row 19 for datafile /u05/dbadir/jmf/incident_logs/PER_ALL_ASSIGNMENTS_M.csv
    ORA-01841: (full) year must be between -4713 and +9999, and not be 0
    Thanks,
    Thai

    Here is a snippet of the bad file data:
    E040110,EE040110,01-Aug-00,31-Dec-12,1,,NDVC,YES,DE,SFC,N,STIP Plan - Pressure,,,,,,,E040110,,,,2080,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,5,,,,,,,,,,,,27667.2,27667.2,27667.2,27667.2,27667.2,27667.2,,,01-Jan-12,31-Dec-12,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
    E040100,EE040100,01-May-00,31-Dec-12,1,,NDVC,YES,DE,SFC,N,STIP Plan - Pressure,,,,,,,E040100,,,,2080,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,5,,,,,,,,,,,,31113,31113,31113,31113,31113,31113,,,01-Jan-12,31-Dec-12,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
    E040101,EE040101,01-Aug-00,31-Dec-12,1,,NDVC,YES,DE,SFC,N,STIP Plan - Pressure,,,,,,,E040101,,,,2080,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,5,,,,,,,,,,,,31113,31113,31113,31113,31113,31113,,,01-Jan-12,31-Dec-12,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
    E000916,EE000916,01-Oct-00,31-Dec-12,1,,NDVC,YES,NL,SFC-Commercial,E,SIP Plan - Control Technologies,,,,,,,E000916,21000000,555000,99000,2080,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,10,,,,,,,,,,,,34905.65,34905.65,34905.65,34905.65,34905.65,34905.65,,,01-Jan-12,31-Dec-12,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
    E000807,EE000807,03-Jan-00,31-Dec-12,1,,NDVC,YES,FR,SFC-Commercial,E,STIP Plan - Cross BU,,,,,,,E000807,,,,2080,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,3,,,,,,,,,,,,35448.56,35448.56,35448.56,35448.56,35448.56,35448.56,,,01-Jan-12,31-Dec-12,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
    E000851,EE000851,25-Apr-00,31-Dec-12,1,,NDVC,YES,FR,SFC-Commercial,E,SIP Plan - Control Technologies,,,,,,,E000851,21000000,555000,99000,2080,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,13,,,,,,,,,,,,43283.76,43283.76,43283.76,43283.76,43283.76,43283.76,,,01-Jan-12,31-Dec-12,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
So, data that loads successfully has its year truncated: year 2003 in the CSV is stored as year 0003. Even though those rows load without error, the data is not correct; for some reason the external table ends up truncating the year.
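The thread does not spell out the root cause, but note that the rejected rows carry two-digit years ('01-Aug-00') while the field masks say DD-MON-YYYY: under a YYYY mask, '00' becomes year 0 (invalid, hence ORA-01841) and '03' becomes year 0003 (loads, but wrong, as described above). A quick demonstration, plus the RR mask usually used for two-digit years (a sketch, not the poster's confirmed fix):
-- Year '00' under a YYYY mask is year 0, which raises ORA-01841:
SELECT TO_DATE('01-Aug-00', 'DD-MON-YYYY') FROM dual;
-- The RR mask maps two-digit years into a sliding century window:
SELECT TO_DATE('01-Aug-00', 'DD-MON-RR') FROM dual;   -- 01-AUG-2000
If the feed really uses two-digit years, the date fields in the access parameters would use MASK "DD-MON-RR" instead.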

• Difference between cpu and elapsed time in tkprof

    Hi All
I found a huge difference between cpu and elapsed time in tkprof. Can you please advise me on this issue?
call     count       cpu    elapsed       disk      query    current        rows
================================================================================
Parse        1      0.12       1.36          2         11          0           0
Execute      1     14.30     720.20      46548     190520        205         100
Fetch        0      0.00       0.00          0          0          0           0
================================================================================
total        2     14.42     721.56      46550     190531        205         100
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on                      Times waited   Max. Wait  Total Waited
===========================================================================
db file sequential read                     46544        0.49        632.12
db file scattered read                          1        0.00          0.00
My select statement:
SELECT cst.customer_id
      ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
      ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
      ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0) newlate
      ,NVL(SUM( DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
FROM ar_receivable_applications_all ra
    ,ar_cash_receipts_all cr
    ,ar_payment_schedules_all ps
    ,zz_ar_customer_summary_all cst
WHERE ra.cash_receipt_id = cr.cash_receipt_id
AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
AND ra.status = 'APP'
AND ra.display = 'Y'
AND ra.applied_payment_schedule_id = ps.payment_schedule_id
AND ps.customer_id = cst.customer_id
AND NVL(ps.receipt_confirmed_flag,'Y') = 'Y'
group by cst.customer_id ;
    Thanks,
    Anu

    user653066 wrote:
    Hi All
    i found huge diffrence between cpu and elapsed time in tkprof. can you please advice me on this issue.
    call     count       cpu    elapsed       disk      query    current        rows
    ================================================================================
    Parse        1      0.12       1.36          2         11          0           0
    Execute      1     14.30     720.20      46548     190520        205         100
    Fetch        0      0.00       0.00          0          0          0           0
    ================================================================================
    total        2     14.42     721.56      46550     190531        205         100
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 173     (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on                      Times waited   Max. Wait  Total Waited
    ===========================================================================
    db file sequential read                     46544        0.49        632.12
    db file scattered read                          1        0.00          0.00
    SELECT  cst.customer_id
             ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
             ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
             ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0)  newlate
             ,NVL(SUM( DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
              FROM ar_receivable_applications_all ra
                  ,ar_cash_receipts_all           cr
                  ,ar_payment_schedules_all       ps
                  ,zz_ar_customer_summary_all cst
              WHERE ra.cash_receipt_id                 = cr.cash_receipt_id
              AND   ra.apply_date                BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
              AND   ra.status                          = 'APP'
              AND   ra.display                         = 'Y'
              AND   ra.applied_payment_schedule_id     = ps.payment_schedule_id
              AND   ps.customer_id                     = cst.customer_id          
              AND   NVL(ps.receipt_confirmed_flag,'Y') = 'Y'
          group by cst.customer_id;
Toon Koppelaars seems to have pinpointed the problem. Where are the 74 seconds of unaccounted-for time? (I might have calculated it incorrectly, but I arrived at 88.08 seconds of unaccounted-for time: 721.56 total - 1.36 parse - 632.12 db file sequential reads.)
    It is interesting that the maximum wait for a single block read reported by TKPROF is 0.49 seconds - this might be an indication of excessive competition for the server's CPU - processes are waiting in the CPU run queue, and therefore not on the CPU. As Toon indicated, 632.12 of the 721.56 seconds were spent waiting for single block reads to complete with 46,544 blocks read. Note also that the query executed at dep=1, and TKPROF may be providing misleading information about what actually happened during those executions. An example of misleading information:
    CREATE TABLE T11 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T12 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T13 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE TABLE T14 (
      C1 NUMBER,
      C2 VARCHAR2(30));
    CREATE OR REPLACE TRIGGER HPM_T11 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T11
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T12
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    CREATE OR REPLACE TRIGGER HPM_T12 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T12
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T13
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    CREATE OR REPLACE TRIGGER HPM_T13 AFTER
    INSERT OR DELETE OR UPDATE OF C1 ON T13
    REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO T14
        SELECT
          ROWNUM,
          DBMS_RANDOM.STRING('A',25)
        FROM
          DUAL
        CONNECT BY
          LEVEL <= 100;
      END IF;
    END;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_FIND_ME2';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SET TIMING ON
    INSERT INTO T11 VALUES (1,'MY LITTLE TEST CASE');
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
The partial TKPROF output:
    INSERT INTO T11
    VALUES
    (1,'MY LITTLE TEST CASE')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          8          0           0
    Execute      1      0.00       0.00          0       9788         29           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0       9796         29           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=9788 pr=7 pw=0 time=0 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID : 6asaf110fgaqg
    INSERT INTO T12 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.04       0.09          0          2        130         100
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.04       0.09          0          2        130         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=9754 pr=7 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    SQL ID : db46bkvy509w4
    INSERT INTO T13 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute    100      1.31       1.27          0         93      10634       10000
    Fetch        0      0.00       0.00          0          0          0           0
    total      101      1.31       1.27          0         93      10634       10000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 2)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=164 pr=0 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    SQL ID : 6542yyk084rpu
    INSERT INTO T14 SELECT ROWNUM, DBMS_RANDOM.STRING('A',25) FROM DUAL CONNECT
      BY LEVEL <= 100
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute  10001     41.60      41.84          0       8961      52859     1000000
    Fetch        0      0.00       0.00          0          0          0           0
    total    10003     41.60      41.84          0       8961      52859     1000000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 3)
    Rows     Row Source Operation
          0  LOAD TABLE CONVENTIONAL  (cr=2 pr=0 pw=0 time=0 us)
        100   COUNT  (cr=0 pr=0 pw=0 time=0 us)
        100    CONNECT BY WITHOUT FILTERING (cr=0 pr=0 pw=0 time=0 us)
          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      log file switch completion                      2        0.07          0.07
********************************************************************************
In the above note that the "INSERT INTO T11" is reported as completing in 0 seconds, but it actually required roughly 42 seconds - and that would be visible by manually reviewing the resulting trace file. Also note that the log file switch completion wait was not reported for the "INSERT INTO T11" even though it impacted the execution time.
    Back to the possibility of CPU starvation causing lost time. Another test with an otherwise idle server, followed by a second test with the same server having 240 other processes fighting for CPU resources (a simulated load).
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_NO_LOAD';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SET TIMING ON
    SELECT
      COUNT(*)
    FROM
      T14;
    SELECT
      SYSDATE
    FROM
      DUAL;
    SQL> SELECT
      2    COUNT(*)
      3  FROM
      4    T14;
      COUNT(*)
       1000000
Elapsed: 00:00:01.37
With no load the COUNT(*) completed in 1.37 seconds. The TKPROF output looks like this:
    SQL ID : gy8nw9xzyg3bj
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
      NVL(SUM(C2),:"SYS_B_1")
    FROM
    (SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
      :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
      :"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.01       0.84        523        172          1           1
    total        3      0.01       0.84        523        172          1           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=172 pr=523 pw=0 time=0 us)
       8733   TABLE ACCESS SAMPLE T14 (cr=172 pr=523 pw=0 time=0 us cost=2 size=12 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         3        0.02          0.04
      db file parallel read                           1        0.31          0.31
      db file scattered read                         52        0.03          0.47
    SQL ID : 96g93hntrzjtr
    select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#,
      sample_size, minimum, maximum, distcnt, lowval, hival, density, col#,
      spare1, spare2, avgcln
    from
    hist_head$ where obj#=:1 and intcol#=:2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.06          2          2          0           0
    total        3      0.00       0.06          2          2          0           0
    Misses in library cache during parse: 0
    Optimizer mode: RULE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=2 pr=2 pw=0 time=0 us)
          0   INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=2 pw=0 time=0 us)(object id 413)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         2        0.02          0.04
    SELECT
      COUNT(*)
    FROM
      T14
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          1          1          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.03       0.43       6558       6983          0           1
    total        4      0.03       0.44       6559       6984          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=6983 pr=6558 pw=0 time=0 us)
    1000000   TABLE ACCESS FULL T14 (cr=6983 pr=6558 pw=0 time=0 us cost=1916 size=0 card=976987)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.02          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file scattered read                        111        0.02          0.38
  SQL*Net message from client                     2        0.00          0.00
Note that TKPROF reported that it only required 0.44 seconds for the query to execute, while the SQL*Plus timing indicates that it required 1.37 seconds for the SQL statement to execute. The SQL optimization (parse) with dynamic sampling query accounted for the remaining time, yet TKPROF provided no indication that this was the case.
    Now the query with 240 other processes competing for CPU time:
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_WITH_LOAD';
    SELECT COUNT(*) FROM T14;
    SELECT
      SYSDATE
    FROM
      DUAL;
    SQL> SELECT COUNT(*) FROM T14;
      COUNT(*)
       1000000
Elapsed: 00:00:59.03
The query this time required just over 59 seconds. The TKPROF output:
    SQL ID : gy8nw9xzyg3bj
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
      NVL(SUM(C2),:"SYS_B_1")
    FROM
    (SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
      :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
      :"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.28        423         69          0           1
    total        3      0.00       0.28        423         69          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=69 pr=423 pw=0 time=0 us)
       8733   TABLE ACCESS SAMPLE T14 (cr=69 pr=423 pw=0 time=0 us cost=2 size=12 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                         54        0.01          0.27
      db file sequential read                         2        0.00          0.00
    SQL ID : 7h04kxpa13w1x
    SELECT COUNT(*)
    FROM
    T14
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.03          1          1          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.06      58.71       6551       6983          0           1
    total        4      0.06      58.74       6552       6984          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=6983 pr=6551 pw=0 time=0 us)
    1000000   TABLE ACCESS FULL T14 (cr=6983 pr=6551 pw=0 time=0 us cost=1916 size=0 card=976987)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.02          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file scattered read                        110        1.54         58.59
  SQL*Net message from client                     1        0.00          0.00
Note in the above that the max wait for the db file scattered read is 1.54 seconds due to the extra CPU competition - about 3 times longer than your max wait for a single block read. On your database platform with single block reads, it might be possible that the time in the CPU run queue is not always counted in the db file sequential read wait time or the CPU wait time. What if your operating system is slow at returning timing information to the database instance due to CPU saturation? This might explain the 74 (or 88) lost seconds.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 28, 2009 10:26 AM
    Fixing formatting problems
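As a cross-check on the CPU-starvation theory (an addition, not part of the original reply): the session time model separates 'DB time' from 'DB CPU', so a large gap between DB time on one side and DB CPU plus recorded wait time on the other points at time lost in the run queue. A sketch for the current session:
-- Time model values are reported in microseconds.
SELECT stat_name,
       ROUND(value / 1000000, 2) AS seconds
FROM   v$sess_time_model
WHERE  sid = SYS_CONTEXT('USERENV', 'SID')
AND    stat_name IN ('DB time', 'DB CPU');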

• Query to get rows between x and y + table total row count

    Hi,
I am trying to get rows between e.g. 100 and 150 when I sort by some column.
My query is now like this:
SELECT  *
FROM
  (SELECT a.*,
    row_number() OVER (ORDER BY OWNER ASC) rn,
    COUNT(1) over() mrn
  FROM (SELECT * FROM all_objects) a
  )
WHERE rn BETWEEN least(100,mrn) AND 150
It is not very fast if I run that kind of select on a big table.
    Any tips to optimize this query?
    Regards,
    Jari

    Hi,
    I'm so bad with SQL :(
Here is a sample where I need that kind of query.
http://actionet.homelinux.net/htmldb/f?p=100:504
It looks like a standard Apex report, but it is not.
It is just a test of using a query to output HTML via the htp package.
I send parameters: start row, max rows, and the column to order by.
Now I build the query for a cursor:
        /* Report query */
        l_sql := '
          SELECT * FROM(
            SELECT a.*, row_number() OVER (ORDER BY '
            || l_sort_col
            || ' '
            || l_sort_ord
            || ') rn, COUNT(1) over() mrn
            FROM(' || l_sql || ') a
      ) WHERE rn BETWEEN LEAST(' || l_start_row || ',mrn) AND ' || l_last_row
    ;
Then I loop over the cursor and output the HTML.
You can order the report by clicking a heading, and there is that pagination.
I did not know that I could find examples by searching for "pagination query" =), thanks to both of you for that tip.
I know Apex has reports, but I need features that Apex templates cannot provide, at least yet.
So I have concluded that generating HTML tables with a custom package instead of an Apex report is the best approach to get the layouts I need.
    Regards,
    Jari
    Edited by: jarola on Jan 21, 2011 10:38 PM
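One standard alternative (a sketch of the classic stop-key pagination pattern, not from this thread): push ROWNUM into the inner query so Oracle can stop fetching at the upper bound. The COUNT(1) OVER () total row count is what forces the whole result set to be materialized; if the total is really needed, it is a separate and unavoidable cost.
SELECT *
FROM  (SELECT a.*, ROWNUM rn
       FROM  (SELECT o.*
              FROM   all_objects o
              ORDER BY o.owner) a
       WHERE ROWNUM <= 150)   -- stop key: Oracle stops after 150 rows
WHERE rn >= 100;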

  • Difference between STATIC and FLOW CONTROL in IKM

    Hello,
Can someone explain the difference between STATIC and FLOW control in an IKM? What is the best situation in which to use each, with some examples?
Can we have more than one ODI Constraint on the target table?
    Thanks
    Edited by: cdmnagaraj on 19-Oct-2008 21:59

    Hi Nagaraj,
Suppose your lookup table's column "VALUE" is 101.
How many rows in your source table have "Balance" = 101? I hope only one.
From your source table all the records will be moved to your I$ table; then from the I$ table it will check the condition,
and all the violating records will be moved to your E$ table.
Please check the "Insert CK Errors" step in the Operator.
It will be like this....
insert into E$_TargetTable
(
     ERR_TYPE,
     CHECK_DATE,
     ORIGIN,
     CONS_NAME,
     CONS_TYPE,
     col1,
     col2,
     balance
)
select
     'F',
     sysdate,
     '(3367100)TESTPROJECTS.TestTable',
     'conditionTest',
     'CK',
     col1,
     col2,
     balance
from     I$_TargetTable CHI
where     not      (
          Balance = (Select value from LOOKUP)
          )
So in your case it will check for balance not equal to 101, and it will push those records into your E$ table.
    Rathish

• Difference between char and varchar, also the difference between varchar and varchar2

    Hi,
Can anyone explain to me the difference between char and varchar, and also the difference between varchar and varchar2...

Varchar2 is a variable-width character data type, so if you define a column with width 20 and insert only one character into it, only one character will be stored in the database. Char is not variable width, so when you define a column with width 20 and insert one character into it, the value will be right-padded with 19 spaces to the desired length, and you will store 20 characters in the database (follow example 1). The varchar data type, from Oracle 9i onwards, is automatically promoted to varchar2 (follow example 2).
    Example 1:
    SQL> create table tchar(text1 char(10), text2 varchar2(10))
    2 /
    Table created.
    SQL> insert into tchar values('krystian','krystian')
    2 /
    1 row created.
    SQL> select text1, length(text1), text2, length(text2)
    2 from tchar
    3 /
TEXT1      LENGTH(TEXT1) TEXT2      LENGTH(TEXT2)
krystian              10 krystian               8
    Example 2:
    create table tvarchar(text varchar(10))
    SQL> select table_name,column_name,data_type
    2 from user_tab_columns
    3 where table_name = 'TVARCHAR'
    4 /
TABLE_NAME  COLUMN_NAME  DATA_TYPE
TVARCHAR    TEXT         VARCHAR2
    Best Regards
    Krystian Zieja / mob

• What's the difference between an ASC and a DESC index

    1 select count(*)
    2* from big_emp e where hiredate >= to_date('1980-01-01', 'YYYY-MM-DD') and hiredate <= to_date('1983-12-31', 'YYYY-MM-DD')
    COUNT(*)
    11971
    SQL> create index i_big_emp_hiredate on big_emp(hiredate);
    Index created.
    SQL> set autot trace
    SQL> select empno, ename, hiredate
    2 from big_emp e where hiredate >= to_date('1980-01-01', 'YYYY-MM-DD') and hiredate <= to_date('1983-12-31', 'YYYY-MM-DD') ;
    11971 rows selected.
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 11766 | 218K| 19 |
    |* 1 | TABLE ACCESS FULL| BIG_EMP | 11766 | 218K| 19 |
    SQL> drop index i_big_emp_hiredate;
    Index dropped.
    SQL> create index i_big_emp_hiredate on big_emp(hiredate desc);
    Index created.
    SQL> select empno, ename, hiredate
    2 from big_emp e where hiredate >= to_date('1980-01-01', 'YYYY-MM-DD') and hiredate <= to_date('1983-12-31', 'YYYY-MM-DD') ;
    11971 rows selected.
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 29 | 551 | 4 |
    | 1 | TABLE ACCESS BY INDEX ROWID| BIG_EMP | 29 | 551 | 4 |
    |* 2 | INDEX RANGE SCAN | I_BIG_EMP_HIREDATE | 53 | | 2 |
    i have 2 questions
    1. In "Expert one-on-one Oracle", Tom said, there is no deference between ASC and DESC index in case of one column because Oracle can just read in reverse order. but my test made me confused. why Oracle did "full table scan" only in ASC index???
    2. using "set autot trace" command. i believed the the "Rows" column mean the rows that Oracle access. Can you explain why the rows are 29(DESC) and 11766(ASC) in spite of the result is 11971. what is the exact meaning of "Rows" column in execution plan

    I think what you're seeing is a bug in the optimizer. If you had printed up the predicate section of the execution plan, this would be more obvious. I have the query:
select *
from   t1
where  d1 between to_date('01-jan-2001')
          and     to_date('31-dec-2003')
;
This returns one row per day for 3 years, and when a normal index is created on it, the optimizer calculates the correct cardinality and uses a sensible set of predicates. But when I use a descending index, this is what I get:
    Execution Plan
    Plan hash value: 1429545322
    | Id  | Operation                   | Name  | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT            |       |  1097 | 21940 |     2 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1    |  1097 | 21940 |     2 |
    |*  2 |   INDEX RANGE SCAN          | T1_I1 |     5 |       |     2 |
    Predicate Information (identified by operation id):
       2 - access(SYS_OP_DESCEND("D1")>=HEXTORAW('8798F3E0FEF8FEFAFF')  AND
                  SYS_OP_DESCEND("D1")<=HEXTORAW('879AFEF8FEF8FEFAFF') )
           filter(SYS_OP_UNDESCEND(SYS_OP_DESCEND("D1"))>=TO_DATE('2001-01-0
                  1 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
                  SYS_OP_UNDESCEND(SYS_OP_DESCEND("D1"))<=TO_DATE('2003-12-31 00:00:00',
               'yyyy-mm-dd hh24:mi:ss'))
Note the introduction of the strange sys_op_descend() function - which is related to the descending index implementation - and the extra FILTER predicates, which introduce a significant extra selectivity effect. The optimizer is double-counting on selectivity effects, and introducing extra factors of 1% and 5% (I haven't checked exact details) due to the functions applied to columns and the range-based predicates.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
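To see the predicate section Jonathan refers to, you can run the poster's query through EXPLAIN PLAN and display it with DBMS_XPLAN (a sketch; autotrace in some older versions does not print the predicates):
EXPLAIN PLAN FOR
SELECT empno, ename, hiredate
FROM   big_emp
WHERE  hiredate >= TO_DATE('1980-01-01', 'YYYY-MM-DD')
AND    hiredate <= TO_DATE('1983-12-31', 'YYYY-MM-DD');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);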

• What is the difference between ASCII and BIN mode

    Hello All,
What is the difference between ASCII and BIN mode?
    Regards,
    Lisa.

    'ASC' :
    ASCII format. The table is transferred as text. The conversion exits are
    carried out. The output format additionally depends on the parameters
    CODEPAGE, TRUNC_TRAILING_BLANKS, and TRUNC_TRAILING_BLANKS_EOL.
    'IBM' :
ASCII format with IBM codepage conversion (DOS). This format corresponds
to the 'ASC' format when using target codepage 1103. This codepage is
often used for data exchange by disk.
    'DAT' :
Column-by-column transfer. With this format, the data is transferred as
with ASC text. However, no conversion exits are carried out and the
columns are separated by tab characters. This format creates files that
can be uploaded again with gui_upload or ws_upload.
    'DBF' :
    The data is downloaded in dBase format. Because in this format the file
    types of the individual columns are included, import problems, for
    example, into Microsoft Excel can be avoided, especially when
    interpreting numeric values.
    'WK1' :
    The data is downloaded in Lotus 1-2-3 format.
    'BIN' :
    Binary format. The data is transferred in binary format. There is no
    formatting and no codepage conversion. The data is interpreted row by
    row and not formatted in columns. Specify the length of the data in
    parameter BIN_FILESIZE. The table should consist of a column of type X,
    because especially in Unicode systems the conversion of structured data
    into binary data leads to errors.

  • What is the difference between ActionEvent and SelectionEvent?

    Technical Environment:
    Oracle jDeveloper 11.1.1.4.0
    Windows XP
I think there is something behind ActionEvent and SelectionEvent that makes my code not work.
    Here is my problem:
I have three tables. One is a master table that contains a summary of something. If I select a row in this table, the second table shows some data based on the selected row in the first table. Yup, this is a ViewLink, and it is working perfectly. Now, I want the same thing to work between the second and third tables. If I select a row in the second table, the third table should show some data based on the selected row in the second table. The third table is a ViewObject with some bind parameters. I need to assign values to these parameters, which I get from the selected row in the second table. How can I do this?
I have tried to use ExecuteWithParams: select the row in the second table, get the values, pass them into the ExecuteWithParams form, execute and.. it works.. I get the result in the third table. Here is the code for how I do that:
    public void updateTableEvent(ActionEvent actionEvent) {
    DCIteratorBinding iter = gen.getIteratorBinding("SearchView3Iterator");
    Row rw = iter.getCurrentRow();
    oracle.jbo.domain.Date strDate = (oracle.jbo.domain.Date) rw.getAttribute("Tgl");
    String strRkno = (String) rw.getAttribute("Rkno");
    oracle.jbo.domain.Number strRkId = (oracle.jbo.domain.Number) rw.getAttribute("Rkid");
    DCIteratorBinding iterParam = gen.getIteratorBinding("variables");
    iterParam.getBindingContainer().getVariableManager().setVariableValue("ExecuteWithParams_pRkId", strRkId);
    iterParam.getBindingContainer().getVariableManager().setVariableValue("ExecuteWithParams_pTgl", strDate);
    iterParam.getBindingContainer().getVariableManager().setVariableValue("ExecuteWithParams_pRkNo", strRkno);
    OperationBinding operationBinding = gen.getOperationBinding("ExecuteWithParams");
Object result = operationBinding.execute();
}
But when I tried the same thing in the SelectionEvent handler, I got nothing.. no error and also no result. What I want is just to select the row in the second table and get the result in the third table, without pressing any button. Here is the code:
    public void onRowSelect(SelectionEvent selectionEvent) {
    DCIteratorBinding iter = gen.getIteratorBinding("SearchView3Iterator");
    Row rw = iter.getCurrentRow();
    oracle.jbo.domain.Date strDate = (oracle.jbo.domain.Date) rw.getAttribute("Tgl");
    String strRkno = (String) rw.getAttribute("Rkno");
    oracle.jbo.domain.Number strRkId = (oracle.jbo.domain.Number) rw.getAttribute("Rkid");
    DCIteratorBinding iterParam = gen.getIteratorBinding("variables");
    iterParam.getBindingContainer().getVariableManager().setVariableValue("ExecuteWithParams_pRkId", strRkId);
    iterParam.getBindingContainer().getVariableManager().setVariableValue("ExecuteWithParams_pTgl", strDate);
    iterParam.getBindingContainer().getVariableManager().setVariableValue("ExecuteWithParams_pRkNo", strRkno);
    OperationBinding operationBinding = gen.getOperationBinding("ExecuteWithParams");
Object result = operationBinding.execute();
}
So.. what is actually happening here? What is the difference between ActionEvent and SelectionEvent? Why do they give me different responses?
    Thanks for any comments.
    Regards,
    Novan Ananda

    Hi,
ActionEvent :
A semantic event which indicates that a component-defined action occurred. This high-level event is generated by a component (such as a commandButton or commandLink) when the component-specific action occurs (such as being pressed).
SelectionEvent :
Event that is generated when the selection of a component changes.
// you can also search the APIs

  • What is the difference between exists and in

    hi all
    if i have these queries
    1- select ename from emp where ename in ( select ename from emp where empno=10)
    and
    2- select ename from emp where exists ( select ename from emp where empno=10)
What is the difference between EXISTS and IN? Is it only that with IN I have to supply the field name, or something more? I mean, in a complex SQL query, will they give the same answer?
    Thanks

You get two entirely different result sets that may be the same. Haah! What do I mean by that?
    SQL> select table_name from user_tables;
    TABLE_NAME
    BAR
    FOO
    2 rows selected.
    SQL> select table_name from user_tables where table_name in (select table_name from user_tables where table_name = 'FOO');
    TABLE_NAME
    FOO
    1 row selected.
    SQL> select table_name from user_tables where exists(select table_name from user_tables where table_name = 'FOO');
    TABLE_NAME
    BAR
    FOO
2 rows selected.
So, why is this? The WHERE EXISTS means 'if the following is true', much like WHERE 1=1 being always true and 1=2 being always false. In this case, WHERE EXISTS could be TRUE or FALSE, depending on the subquery. Here the subquery is not correlated to the outer query, so it is true for every outer row.
    WHERE EXISTS can be useful for something like testing if we have data, without actually having to return columns.
    So, if you want to see if an employee exists you might say
    SELECT 1 FROM DUAL WHERE EXISTS( select * from emp where empid = 10);
    If there is a row in emp for empid=10, then you get back 1 from dual;
This is what I call an 'optimistic' lookup, because the WHERE EXISTS ends as soon as there is a hit. It does not care how many rows match, only that at least one exists. It is optimistic because it will keep processing the table lookup until it either gets a hit or reaches the end of the table, for a non-indexed query.
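Note that the EXISTS subquery in the original question is uncorrelated, which is why every row comes back. To make EXISTS behave like the IN version, correlate it with the outer query (a sketch against the same emp table):
SELECT e.ename
FROM   emp e
WHERE  EXISTS (SELECT 1
               FROM   emp m
               WHERE  m.empno = 10         -- same filter as the IN subquery
               AND    m.ename = e.ename);  -- correlation restores IN semantics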

• SSO between Portal and Nakisa.....problem with SSO... library not found..

Hi SDNers and Nakisa technical experts,
We have a Portal environment 7.02, a Nakisa environment 3.0 (CE) and an HR backend environment 701 (604).
We are busy setting up SSO between Portal and Nakisa via the URL iView for the org chart (http://<host>:<port>OrgChart/default.jsp).
We have done as indicated in the wiki:
http://wiki.sdn.sap.com/wiki/display/ERPHCM/SAPSSOAuthenticationwithverify.pseusingSAPSSOEXT
We are however still having issues with the SSO, and in the cds.log the following is being displayed:
01 Aug 2011 13:11:42 ERROR com.nakisa.Logger  - com.mysap.sso.SSO2Ticket : Could not load library: sapsecu.dll - java.lang.Exception: MySapInitialize failed: rc= 14null
01 Aug 2011 13:11:42 ERROR com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : java.lang.Exception: MySapEvalLogonTicketEx failed: standard error= 9, ssf error= 0
01 Aug 2011 13:11:42 ERROR com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Internal error (9) - No SSF error (0)
Can someone indicate what I am doing wrong?
    Regards Dries

    Hi Luke,
    thanks a lot for your help so far.
I have created a root/XML folder under the directory, and the path is now as follows:
    K:\usr\sap\NKP\J14\j2ee\cluster\apps\Nakisa\OrgChart\servlet_jsp\OrgChart\root\.system\Admin_Config\__000__Sasol_DEV_LIVE\.delta\root\XML
    It seems like it finds the verify.pse, but not the library, sapsecu.dll.
    My credentials.xml file is as follows:
    <credentials>
    <assembly name="SapSso"/>
      <info>
        <item name="PseFilePath">XML\verify.pse</item>
        <item name="SsfLibFilePath">XML\sapsecu.dll</item>
        <item name="PsePassword"></item>
        <item name="WindowsPlatform">64</item>
        <item name="TicketFile"></item>
        <item name="Base64decode">true</item>
       </info>
    </credentials>
I however still get the following in the cds.log:
    15 Aug 2011 13:59:53 INFO  com.nakisa.Logger  - Tenant ID: 000
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - LoginSettingsObject Load: 1719
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Credential provider SapSso
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Using cert: K:\usr\sap\NKP\J14\j2ee\cluster\apps\Nakisa\OrgChart\servlet_jsp\OrgChart\root\XML\verify.pse
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Ticket is: AjExMDAgAA9wb3J0YWw6eXNzZWxhZ2OIABNiYXNpY2F1dGhlbnRpY2F0aW9uAQAIWVNTRUxBR0MCAAMwMDADAANEUDkEAAwyMDExMDgxNTExNDcFAAQAAAAICgAIWVNTRUxBR0P%2FAQQwggEABgkqhkiG9w0BBwKggfIwge8CAQExCzAJBgUrDgMCGgUAMAsGCSqGSIb3DQEHATGBzzCBzAIBATAiMB0xDDAKBgNVBAMTA0RQOTENMAsGA1UECxMESjJFRQIBADAJBgUrDgMCGgUAoF0wGAYJKoZIhvcNAQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMTEwODE1MTE0NzIwWjAjBgkqhkiG9w0BCQQxFgQUK13ubzFiQrY4H%2FLRk2ysyvPSvccwCQYHKoZIzjgEAwQuMCwCFF1W9d!tAjLvP8dnb1bs4XghaHSBAhQ9kd9N!bJubUWITtkzU!za96lxNg%3D%3D
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Version of SAPSSOEXT: SAPSSOEXT 4
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : SCUE LIB base path is:
    15 Aug 2011 13:59:55 ERROR com.nakisa.Logger  - com.mysap.sso.SSO2Ticket : Could not load library: sapsecu.dll - java.lang.Exception: MySapInitialize failed: rc= 14null
    15 Aug 2011 13:59:55 ERROR com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : java.lang.Exception: MySapEvalLogonTicketEx failed: standard error= 9, ssf error= 0
    15 Aug 2011 13:59:55 ERROR com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Internal error (9) - No SSF error (0)
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : User to authenticate null
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Authentication provider SapSso
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : User authenticated null
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Authentication row is {SapSsoTicket=AjExMDAgAA9wb3J0YWw6eXNzZWxhZ2OIABNiYXNpY2F1dGhlbnRpY2F0aW9uAQAIWVNTRUxBR0MCAAMwMDADAANEUDkEAAwyMDExMDgxNTExNDcFAAQAAAAICgAIWVNTRUxBR0P%2FAQQwggEABgkqhkiG9w0BBwKggfIwge8CAQExCzAJBgUrDgMCGgUAMAsGCSqGSIb3DQEHATGBzzCBzAIBATAiMB0xDDAKBgNVBAMTA0RQOTENMAsGA1UECxMESjJFRQIBADAJBgUrDgMCGgUAoF0wGAYJKoZIhvcNAQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMTEwODE1MTE0NzIwWjAjBgkqhkiG9w0BCQQxFgQUK13ubzFiQrY4H%2FLRk2ysyvPSvccwCQYHKoZIzjgEAwQuMCwCFF1W9d!tAjLvP8dnb1bs4XghaHSBAhQ9kd9N!bJubUWITtkzU!za96lxNg%3D%3D}
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : User population provider is Database
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - FunctionRunner : ensurePool : Current pool size:0
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - FunctionRunner : ensurePool : Current pool size:0
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - FunctionRunner.executeFunctionDirect: /NAKISA/RFC_REPORT took: 266ms
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - BAPI_SAP_OTFProcessor_Report :  WhereClause : ( (Userid is null) or (Userid='') ); Table : (SAP_UserPopulation); Dataelement : (UserPopulationInfo)
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : User populated
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Role mapping provider is: SAP
    15 Aug 2011 14:00:00 ERROR com.nakisa.Logger  - SAPRoleMapping_SAP.MapRoles() : while trying to invoke the method java.lang.String.toUpperCase() of an object loaded from local variable 'value'
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Login process finished with errors
    Any ideas? Should I maybe hardcode the location in the credentials.xml?
    Kind regards
    Dries Yssel
