Avg of Count

All,
I am trying to create a metric dashboard. The dashboard contains the count of trades, with the metric names as columns. Trade date is the dashboard prompt.
What I want is to calculate the average of the count over at least 3 trade dates; then, based on that average and the current trade count, I will display a red or green indicator.
The issue is that I am not sure how to calculate the average of the count across the 3 trade dates.
Please help.

You must have a defined criterion for selecting the three dates you mentioned.
If that is the case, you can calculate the average count in the RPD using the AGO function, or in the request criteria if you are on OBIEE 11g.
Once the value is calculated, it is then very easy to apply conditional formatting for the coloring.
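To make the calculation concrete, here is a rough sketch of the logic outside OBIEE (the dates, counts, thresholds, and the 3-date window are all invented for the example; in the RPD you would express the same thing with AGO or a filtered measure):

```python
# Sketch of "average of counts over recent trade dates" plus the
# red/green decision. All names and data here are hypothetical.
from statistics import mean

def indicator(counts_by_date, current_count, window=3):
    """Compare the current count against the average of the most
    recent `window` trade dates: green if at or above average,
    red otherwise."""
    recent = sorted(counts_by_date)[-window:]        # last N trade dates
    avg = mean(counts_by_date[d] for d in recent)    # avg of their counts
    return "green" if current_count >= avg else "red"

counts = {"2024-01-02": 120, "2024-01-03": 90, "2024-01-04": 105}
print(indicator(counts, 110))   # avg is 105, so 110 is "green"
```

The same comparison would then drive the conditional-formatting rule on the dashboard column.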

Similar Messages

  • Count number of rows - question

    Hi,
    I have a quite simple SQL statement, and I need to count the number of rows which are grouped.
    select
    to_char(em_date_time,'DD-MON-YYYY') dt,
    session_id,
    location_name,
    sc_name,
    transaction_name,
    em_status_id,
    AVG(em_result_value) "MID_RESULT_VALUE"
    from
    tabel_test
    group by
    to_char(em_date_time,'DD-MON-YYYY'),
    session_id,
    location_name,
    sc_name,
    transaction_name,
    em_status_id
    Does anyone know how to do this?
    Thanks for help,
    Walter

    Something like that?
    SQL> select deptno,
      2  decode(grouping(deptno),0,avg(sal),1,null) avg,
      3  count(*) from emp group by rollup(deptno)
      4  /
        DEPTNO        AVG   COUNT(*)
            10 1458,33333          3
            20     1107,5          5
            30 830,833333          6
                              14
    Rgds.

  • BPM counter script

    I just needed a manual BPM counter, partly to practice Perl, but also because I don't like the Audacity auto BPM counter. I like to manually click to the beat... but that's just me.
    It works, but the output isn't really pretty. Should I print another line that just displays the instantaneous BPM (between the two most recent clicks), or would that just be too much info? In my opinion the average BPM is all you need, and if you screw your BPM up by missing a few beats, just rerun the app.
    #!/usr/bin/perl
    # Beat Counter Program
    # By William Gur
    # 2010
    use strict;
    use warnings;
    use Time::HiRes;
    use 5.010;

    sub timediff {
        # Return total time elapsed in minutes (ex: .35 mins).
        # state keeps the values between calls.
        state @time;
        state $total_time    = 0;
        state $focus_element = 0;
        my $timeval = $_[0];
        push @time, $timeval;
        if ($focus_element >= 1) {
            $total_time += $time[$focus_element] - $time[$focus_element - 1];
            $focus_element += 1;
            return $total_time / 60;
        }
        else {
            # First click: no interval to measure yet.
            $focus_element += 1;
            return 0;
        }
    }

    print "hit the return button to the beat!\n";
    print "press any other key to exit\n";
    my ($input, $counter, $time_now, $time_elapsed) = ("", 0, 0, 0);
    # Record the time of each click; compare total elapsed time to the
    # number of clicks so far to get the average bpm.
    while ($input eq "") {
        $input = <STDIN>;
        chomp($input);
        $time_now     = Time::HiRes::time();
        $time_elapsed = timediff($time_now);
        if ($time_elapsed != 0 && $counter != 0) {
            print $counter / $time_elapsed, " avg bpm\n";
        }
        $counter += 1;
    }
    print "\ngoodbye\n";
    Comes in handy covering songs

    Not exactly what you are looking to do, but check out the following:
    1) You can see and adjust tempo and time signature from the large transport bar.
    2) You can add another tempo view to your arrange window by choosing VIEW→GLOBAL TRACK COMPONENTS→TEMPO from the pull down menu in the arrange window.
    3) You can also get a window with a list of tempo and time signatures used in a song by choosing OPTIONS→TEMPO→TEMPO LIST from the main menu bar.
    Any of this help?

  • An Average of a Count erroneously returns integers and not decimals

    I have a table which holds sales information showing the invoice number and the branch (e.g. 'New York', 'Boston', etc..)  that sold the items. Each row represents a product on an invoice. I am trying to find out the average number of lines per invoice
    for each branch. So my final result set might tell me that the 'New York' invoices had an average of 2.4 lines per invoice and the 'Boston' invoices had an average of 1.9 lines per invoice. Like this:
    New York, 2.4
    Boston, 1.9
    I've first written a subquery that counts the number of lines for each BRANCH & INVOICE combination for any date after 1/1/2015. Then I've put a query around that subquery which averages this count by Branch. The problem is, the entire query is only
    returning integers and not decimals. Why is this?
    Note that I tried casting the data type to DECIMAL and FLOAT but still get the same results.
    Here is the query:
    Select t1.Branch, cast(avg(t1.Count) as numeric(10,2))
    as Avg_Lines
    from
    (Select Branch, invoice, count(*) as Count
     from linprm
     where invdte>20150101
     group by Branch, invoice) t1
     group by t1.Branch

    To expand Scott's answer:
    Select t1.Branch, avg(cast(t1.Count as numeric(10,2)))
    as Avg_Lines
    from
    (Select Branch, invoice, count(*) as Count
    from linprm
    where invdte>20150101
    group by Branch, invoice) t1
    group by t1.Branch
    -- or
    Select t1.Branch, cast(avg(t1.Count) as numeric(10,2))
    as Avg_Lines
    from
    (Select Branch, invoice, cast(count(*) as numeric(10,2)) as Count
    from linprm
    where invdte>20150101
    group by Branch, invoice) t1
    group by t1.Branch
    Russel Loski, MCT, MCSE Data Platform/Business Intelligence. Twitter: @sqlmovers; blog: www.sqlmovers.com
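The root cause, which both variants above address, is integer arithmetic: AVG over an integer column yields an integer, so the CAST has to happen before the average is taken. The same truncation can be illustrated stand-alone (Python here, purely as a stand-in for the SQL behavior; the line counts are invented):

```python
# AVG of the per-invoice line counts [2, 3, 2] should be 2.33...,
# but integer arithmetic truncates it, mirroring AVG(int) in SQL Server.
line_counts = [2, 3, 2]

int_avg = sum(line_counts) // len(line_counts)   # integer division: 2
dec_avg = sum(line_counts) / len(line_counts)    # "cast" first: 2.33...

print(int_avg)               # → 2
print(round(dec_avg, 2))     # → 2.33
```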

  • Paint performance with JScrollPane very slow in jdk 1.4?

    I've got a simple program that overrides paintComponent on a JPanel and then draws lots of lines, rectangles and some strings. The panel is then added to a scrollpane.
    The scrolling is very smooth in Java 1.3.1, but very slow in 1.4.2:
    paintComponent takes between 16 ms and 30 ms with Java 1.3.1, but 70-200 ms with Java 1.4.2.
    I tried turning off antialiasing etc., but no help. What's the "improvement" they made in JDK 1.4?

    OK, I made a simple example which draws around 5000 elements.
    Source code is here: http://www.mcmadsen.dk/files/ScrollPaneTest.java
    I did several test runs on Java 1.4.2 and Java 1.3.1; here's the average result:
    Java 1.4.2:
         Current: 140ms High: 203ms Avg: 144ms Low: 125ms
    Java 1.3.1:
         Current: 62ms High: 219ms Avg: 68ms Low: 47ms
    The paintComponent() looks like this:
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        long offset = System.currentTimeMillis();
        PaintElement paintElementTmp;
        for (int i = 0; i < paintElements.size(); i++) {
            paintElementTmp = (PaintElement) paintElements.elementAt(i);
            g.setColor(paintElementTmp.getBackground());
            g.fillRect(paintElementTmp.getX(), paintElementTmp.getY(),
                       paintElementTmp.getWidth(), paintElementTmp.getHeight());
            g.setColor(paintElementTmp.getForeground());
            g.drawString(paintElementTmp.getText(), paintElementTmp.getX(), paintElementTmp.getY());
        }
        long done = System.currentTimeMillis();
        long current = done - offset;
        sum += current;
        if (current > high) high = current;
        if (low > current) low = current;
        count++;
        System.out.println("Current: " + current + "ms High: " + high
                           + "ms Avg: " + (sum / count) + "ms Low: " + low + "ms");
    }
    I tried all the rendering hints, but no difference (from the default settings). Also the scrolling is very slow and stops all the time in Java 1.4.
    Any ideas on how to get Java 1.4 to perform like Java 1.3.1?
    Thanks

  • HOW TO IMPROVE PERFORMANCE

    Hi all,
    My SELECT statement is shown below. In SM30 it shows as taking the maximum time. How can I reduce the time spent hitting the DB table and improve the performance?
    IF LT_JCDS[] IS NOT INITIAL.
        SELECT  OBJNR
                WKGBTR
                BELNR
                WRTTP
                BEKNZ
                PERIO
                GJAHR
                VERSN
                KOKRS
                VRGNG
                GKOAR
                BUKRS
                REFBZ_FI
                MBGBTR
                FROM COEP
          INTO CORRESPONDING FIELDS OF TABLE LT_COEP
          FOR ALL ENTRIES IN LT_JCDS
          WHERE KOKRS EQ 'DXES'
          AND  OBJNR EQ LT_JCDS-OBJNR
          AND GJAHR <= SO_GJAHR-LOW
          AND  VERSN eq '000'
          AND ( VRGNG EQ 'COIN'  OR VRGNG EQ 'RKU1' OR  VRGNG EQ 'RKL').
          IF SY-SUBRC <> 0.
           MESSAGE  e000(8i) WITH 'DATA NOT FOUND IN "CO Object: Line Items (by Period)"'.
         ENDIF.
      ENDIF.

    Hi
    see these points
    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    Select Over more than one Internal table
    Selection Criteria
    1.     Restrict the data to the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
    2.     Select with selection list.
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    4.     For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Point # 3
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements           contd..  SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements       contd…           Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
            Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The  above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements    contd…For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered when using FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements    contd…  Select Over more than one Internal table
    1.     It is better to use a view instead of nested Select statements.
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2 ..." accelerates the task of updating a line of an internal table.
    Point # 2
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
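The O( n ) vs. O( log2( n ) ) difference above can be sketched outside ABAP as well; here is a small Python stand-in (the table contents are invented) contrasting a linear scan with a binary search on a sorted table:

```python
# Linear scan vs. binary search on a sorted "internal table".
# The keys below are made up for the illustration.
from bisect import bisect_left

table = sorted(range(0, 1000, 2))   # sorted keys: 0, 2, 4, ..., 998

def read_linear(tab, key):
    for i, k in enumerate(tab):     # O(n): checks entries one by one
        if k == key:
            return i
    return -1

def read_binary(tab, key):
    i = bisect_left(tab, key)       # O(log n): halves the range each step
    return i if i < len(tab) and tab[i] == key else -1

print(read_linear(table, 498), read_binary(table, 498))   # → 249 249
```

Both reads return the same index; only the number of comparisons differs, which is why the sort before BINARY SEARCH pays off on large tables.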
    Point # 3
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    Point # 7
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O(1)).
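The accumulate-or-insert behavior of COLLECT can be sketched with a hash map in any language; here is a rough Python stand-in (the keys and values are invented for the example):

```python
# COLLECT-style semantics: hash lookup on the key, add the numeric
# fields if the key already exists, insert the row otherwise.
rows = [("A", 10, 1), ("B", 5, 2), ("A", 3, 4)]   # (K, VAL1, VAL2)

itab2 = {}
for k, v1, v2 in rows:
    old1, old2 = itab2.get(k, (0, 0))   # O(1) hash lookup, like COLLECT
    itab2[k] = (old1 + v1, old2 + v2)

print(sorted(itab2.items()))   # → [('A', (13, 5)), ('B', (5, 2))]
```

The per-row cost is constant regardless of how many distinct keys have accumulated, which is the point of the O(1) claim above.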
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables using "ITAB2[] = ITAB1[]" is much faster than "LOOP-APPEND-ENDLOOP".
    13.   Specify the sort key as restrictively as possible to run the program faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
    "SORT ITAB BY K." makes the program run faster as compared to "SORT ITAB."
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access, hashed tables are more optimized than sorted tables.
    2.     For partial sequential access, sorted tables are more optimized than hashed tables.
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
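The same hashed-vs-sorted trade-off can be sketched with generic containers; a rough Python stand-in (keys and payloads invented): a hash map gives O(1) single reads, while a sorted table supports partial sequential access over a key range.

```python
# Hashed vs. sorted table access, sketched with Python containers.
# The (key, payload) rows are made up for the illustration.
from bisect import bisect_left, bisect_right

rows = [(k, k * 10) for k in range(1000)]

htab = dict(rows)      # "hashed table": O(1) single-key read
stab = sorted(rows)    # "sorted table": ordered by key

# Single read access: the hash table wins.
assert htab[500] == 5000

# Partial sequential access: the sorted table can slice a key range
# directly, which a hash table cannot do without a full scan.
lo = bisect_left(stab, (100,))                 # first key >= 100
hi = bisect_right(stab, (199, float("inf")))   # last key <= 199
print(len(stab[lo:hi]))                        # → 100
```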

  • PL/SQL Procedure Compilation error

    Hi,
    I have written a PL/SQL stored procedure to read a couple of table values and then output some data to a file. When I create the procedure on the database I get the following compilation error:

    LINE/COL ERROR
    -------- -----------------------------------------------------------------
    25/7     PLS-00103: Encountered the symbol ")" when expecting one of the
             following:
             ( - + case mod new null <an identifier>
             <a double-quoted delimited-identifier> <a bind variable> avg
             count current max min prior sql stddev sum variance execute
             forall merge time timestamp interval date
             <a string literal with character set specification>
             <a number> <a single-quoted SQL string> pipe
    The symbol "null" was substituted for ")" to continue.

    The script is below:

    CREATE OR REPLACE
         PROCEDURE TDF_EXTRACT AS
    v_file UTL_FILE.FILE_TYPE;
    YEAR number(4);
    Q1_VALUE NUMBER(7);

    BEGIN

    SELECT PERSON_VALUE
    INTO   Q1_VALUE
    FROM PERSON
    WHERE ID = 79;

    SELECT EXTRACT(YEAR FROM SYSDATE)
    INTO YEAR
    FROM DUAL;

    v_file := UTL_FILE.FOPEN(location => '/tmp',
    filename => 'extratced_values.txt',
    open_mode => 'W',
    max_linesize => 32767);

    UTL_FILE.PUT_LINE(v_file,
    'Q1'     ||     YEAR     ||     '23'     ||     Q1_VALUE || '\r\n' ||
              );

    UTL_FILE.FCLOSE(v_file);

    END TDF_EXTRACT;

    'Q1' || YEAR || '23' || Q1_VALUE || '\r\n' ||
    );
    Syntax error during concatenation, maybe? Note the trailing || before the closing parenthesis.
    C.
    Message was edited by:
    cd

  • AD and adding group members via CFLDAP

    I posted this over in Advanced techniques with only one
    brave, yet
    unfortunately uninformed taker...
    Anyone here have a clue as to why I'd get the error described
    in the
    text below???
    [Only Response...]
    Thank you for your response... I probably should explain
    better what
    this code does...
    It queries a data source (DB2 database) for a list of about
    2000 names
    (specifically their Employee number).
    Then it queries the MS Active directory for a list of anyone
    who has an
    attribute of employeeNumber that
    is not an empty string.
    Next, it uses a QofQ to join the two record sets together,
    tossing out
    any records that do not match from
    both of the data sources.
    Then I loop over that list of employees adding them into a
    group.
    This operation does nothing to modify a user's password.
    Thanks,
    D.
    Ian Skinner wrote:
    > This came off of another CF related list. Not sure if it
    applies to
    > your situation or not.
    >
    > * You cannot change passwords unless you have a SSL cert
    setup for the
    > CF server and the AD domain controller.
    >
    > I have not first hand experience with this, so all I can
    offer is to
    > pass along the above comment.
    >
    > dnagel wrote:
    >> So, this is the advanced techniques group... and no
    one feels the
    >> least bit challenged?
    >> Theres got to be someone who enjoys delving into
    LDAP out there...
    >>
    >> D.
    I'm having a bit of trouble getting the CFLDAP Modify query
    to execute
    after
    I tied it into the CFLOOPed query... When I ran it with my
    own users DN it
    worked great... it does not work with any other DN. My
    account has Domain
    Adminis on this sandboxed server and is capable of making the
    change by hand
    using the AD tools inside of MMC... Any suggestions? Thanks,
    D.
    <cfset servername = "AD.TESTSITE.com">
    <cfset username = "[email protected]">
    <cfset password = "PASSWORD">
    <cfset domain = "TESTSITE">
    <cfset OU = "ou=Granite">
    <cfoutput>
    <CFSet GroupName="TestDistribution">
    <CFSet GroupDN =
    "cn=#GroupName#,cn=Users,dc=#domain#,dc=com">
    <CFQuery name="Users" datasource="GCI_Workforce">
    Select cast (WBAN8 as varchar(10)) as WBAN8, wbemal from
    WTWDSECPJ1 where WBEXEMPT ='Y'
    </CFQuery>
    <cfldap
    action="query"
    server = "#servername#"
    username = "#username#"
    password = "#password#"
    start = "#OU#,dc=#domain#,dc=com"
    attributes = "dn,employeeNumber"
    filter = "employeeNumber=*"
    name = "adDNLookup"
    scope = "subtree"
    >
    <CFQuery Name="JoinUsers" DBType="Query">
    Select
    adDNLookup.DN, adDNLookup.employeeNumber
    from
    adDNLookup,
    Users
    Where
    adDNLookup.employeeNumber = Users.wban8
    </CFQuery>
    <CFLoop Query="JoinUsers">
    <CFTry>
    <!---<CFSet UserDN = "member=cn=Dennis
    Nagel,CN=Users,DC=TESTSITE,DC=com">--->
    <CFSet UserDN = "member=#DN#">
    <CFSet UserName="#employeeNumber#">
    #UserName# #UserDN#<br>
    <cfldap
    action="modify"
    server = "#servername#"
    username = "#username#"
    password = "#password#"
    modifytype="add"
    attributes = "#UserDN#"
    dn="#GroupDN#"
    separator=";"
    >
    <cfoutput>#UserName# has been added to the group
    (#GroupName#).</cfoutput>
    <cfcatch type="any">
    <cfif FindNoCase( "ENTRY_EXISTS", cfcatch.message )>
    <cfoutput>
    #UserName# is already assigned to the group
    (#GroupName#).
    </cfoutput>
    <cfelse>
    <cfoutput>
    Unknown error : #cfcatch.detail#")
    </cfoutput>
    <cfabort>
    </cfif>
    </cfcatch>
    </CFTry>
    </CFLoop>
    </cfoutput>
    here's the trace info...
    110028 member=CN=Mary Chalfa, OU=PSP_Indio, OU=PSP,
    OU=GC_Branches,
    ou=Granite, dc=TESTSITE, dc=com
    Unknown error : One or more of the required attributes may be
    missing/incorrect or you do not have permissions to execute
    this
    operation on the server")
    Debugging Information ColdFusion Server Enterprise
    6,1,0,63958
    Template /JDE-AD-Sync/JDE-AD-Groups.cfm
    Time Stamp 22-Jun-06 12:02 PM
    Locale English (US)
    User Agent Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2;
    SV1; .NET
    CLR 1.1.4322; .NET CLR 1.0.3705)
    Remote IP 127.0.0.1
    Host Name 127.0.0.1
    Execution Time
    Total Time Avg Time Count Template
    687 ms 687 ms 1
    C:\Inetpub\wwwroot\JDE-AD-Sync\JDE-AD-Groups.cfm
    0 ms 0 ms 1 C:\Inetpub\wwwroot\JDE-AD-Sync\Application.cfm
    0 ms STARTUP, PARSING, COMPILING, LOADING, & SHUTDOWN
    687 ms TOTAL EXECUTION TIME
    red = over 250 ms average execution time
    Exceptions
    12:02:45.045 - Application Exception - in
    C:\Inetpub\wwwroot\JDE-AD-Sync\JDE-AD-Groups.cfm : line 67
    An error has occured while trying to execute modify :[LDAP:
    error code 49 - 80090308: LdapErr: DSID-0C090334, comment:
    AcceptSecurityContext error, data 525, vece].
    SQL Queries
    Users (Datasource=GCI_Workforce, Time=47ms, Records=2203) in
    C:\Inetpub\wwwroot\JDE-AD-Sync\JDE-AD-Groups.cfm @
    12:02:44.044
    Select cast (WBAN8 as varchar(10)) as WBAN8, wbemal from
    WTWDSECPJ1 where WBEXEMPT ='Y'
    JoinUsers (Datasource=, Time=16ms, Records=996) in
    C:\Inetpub\wwwroot\JDE-AD-Sync\JDE-AD-Groups.cfm @
    12:02:45.045
    Select
    adDNLookup.DN, adDNLookup.employeeNumber
    from
    adDNLookup,
    Users
    Where
    adDNLookup.employeeNumber = Users.wban8
    Scope Variables
    Application Variables:
    applicationname=JDE-AD-Sync
    ds=GCI_WFD
    Cookie Variables:
    JSESSIONID=36301107041151000811062
    Server Variables:
    COLDFUSION=Struct (8)
    OS=Struct (5)
    Session Variables:
    cfid=831
    cftoken=54562187
    sessionid=JDE-AD-SYNC_831_54562187
    urltoken=CFID=831&CFTOKEN=54562187
    Debug Rendering Time: 63 ms

    OK, I found it... re-use of the variable username... :-)
    Damn ambiguous error messages.
    Thanks to Ian for taking a look.
    D.

  • How do I tell if a mysql update was successful?

    How do I tell if a mysql update was successful?
    I need to know if an update was run or if the record was not found... Is there some way ColdFusion can trap success/fail responses from MySQL (kinda like myquery.RecordCount)?
    Basically I am trying to update a row; if no row was updated, the record must not exist, so I then need to do an insert...
    -any ideas?
    -sean

    here is the test query:
    <cfquery name="qry" datasource="#application.dsn#">
         update DISC_CUST set DISC_PriceChange = '222222', DISC_TaxablePriceChange = '2222222'
         where DISC_ProdID = '1129'
    </cfquery>
    <cfdump var="#qry#" />
    the error is "Variable  QRY is undefined."
    if you remove the dump the debug results for the query show:
              Debugging Information
    ColdFusion Server Enterprise
    8,0,1,195765
    Template
    /Assets/Import/index.cfm
    Time Stamp
    19-Jul-10 02:19 PM
    Locale
    English (US)
    User Agent
    Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US;  rv:1.9.2.4) Gecko/20100611 Firefox/3.6.4 ( .NET CLR 3.5.30729)
    Remote IP
    192.168.1.100
    Host Name
    192.168.1.100
    Execution Time
    Total Time
    Avg Time
    Count
    Template
    5 ms
    5 ms
    1
    /data/vhome/xxxl/httpdocs/Assets/Import/index.cfm
    3324 ms
    STARTUP,  PARSING, COMPILING, LOADING, & SHUTDOWN
    3329 ms
    TOTAL EXECUTION  TIME
    red =  over 250 ms average execution time
    SQL  Queries qry (Datasource=dsn, Time=1ms,  Records=0) in  /data/vhome/xxx/httpdocs/Assets/Import/index.cfm @  14:19:19.019
         update DISC_CUST set DISC_PriceChange = '222222', DISC_TaxablePriceChange = '2222222'
         where DISC_ProdID = '1129'
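    One approach, sketched here under the assumption of ColdFusion 8's cfquery result attribute (variable names are illustrative, not from the thread): an action query does not create a query variable from the name attribute, which is why dumping qry fails; the result attribute captures metadata instead, and on most drivers its recordCount member reports how many rows the UPDATE touched, so you can branch to an INSERT when it is zero.

    ```cfm
    <!--- A hedged sketch: UPDATE/INSERT queries do not populate a variable
          named by "name", which is why #qry# is undefined above. The CF8
          result attribute captures metadata instead; r.recordCount holds
          the number of affected rows on most drivers. --->
    <cfquery datasource="#application.dsn#" result="r">
        update DISC_CUST
           set DISC_PriceChange = '222222',
               DISC_TaxablePriceChange = '2222222'
         where DISC_ProdID = '1129'
    </cfquery>
    <cfif r.recordCount eq 0>
        <!--- no row matched, so the record must not exist: insert it --->
        <cfquery datasource="#application.dsn#">
            insert into DISC_CUST (DISC_ProdID, DISC_PriceChange, DISC_TaxablePriceChange)
            values ('1129', '222222', '2222222')
        </cfquery>
    </cfif>
    ```

    If the driver does not report affected rows reliably, a SELECT for the key before the UPDATE is the fallback.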

  • Very Strange error and can't find any answer anywhere in the net

    I created a login form (see below).
    If I use the regular HTML form and form fields for the username, password and submit button I don't get any problem viewing my form
    But if I use cfform and cfinput form fields I get this error.
    Here is my cfform codes:
    <cfform name="login" action="/login/checkLogin.cfm" method="post">
      <cfinput type="text" name="username" size="25" required="Yes" message="Please enter your username">
      <cfinput type="password" name="password" size="25" required="Yes" message="Please enter your password">
       <cfinput type="submit" value="Sign In">
    </cfform>
    When I launch this page I get this error and I don't understand what this means and what am I supposed to do, please help!
    Total Time Avg Time Count Template
    1 ms 1 ms 1 /opt/coldfusion10/cfusion/wwwroot/CFIDE/administrator/templates/secure_profile_error.cfm
    0 ms 0 ms 1 top level /home/space/users/www/webdocsec/login/login.cfm
    19 ms   STARTUP, PARSING, COMPILING, LOADING, & SHUTDOWN
    20 ms   TOTAL EXECUTION TIME
    red = over 250 ms average execution time
    Exceptions
    10:08:08.008 - Template Exception - in : line -1
         Attribute validation error for tag CFINPUT.

    Using cfform
    To start with, leave cfform aside. You will find in the documentation that most developers are abandoning Coldfusion's native UI tags, for example, cfform, cfgrid, and so on. They are outdated and occasionally perform erratically. If you wish to validate forms, use a Javascript library such as jQuery.
    Implementing Site-Wide Error Handler
    By default, there is no site-wide error handler configured. It is advisable to create your own. Then register the path of the page in the Coldfusion Administrator.
    As the name implies, it is a CFM page which ColdFusion runs when it encounters an error on your site. Creating your own enables you to present a simple, customized, user-friendly page to your visitors.
    Alternatively, you may choose to implement a Secure Profile. This is available in the Administrator, on the page Security => Secure Profile.
    When you check the box, you configure ColdFusion to automatically implement all the security measures listed in the table, including ColdFusion's own site-wide error handler: the system file /CFIDE/administrator/templates/secure_profile_error.cfm.
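    For completeness, the required-field check that cfinput was doing can be done in a few lines of plain JavaScript before the form reaches the server. This is a hedged sketch (the function name and message strings are illustrative, not from the thread); a library such as jQuery Validation wraps the same idea with less wiring.

    ```javascript
    // Minimal client-side required-field check, a plain-JS sketch of the
    // validation the answer suggests delegating to a library.
    function validateLogin(fields) {
      const messages = {
        username: "Please enter your username",
        password: "Please enter your password",
      };
      const errors = [];
      for (const name of Object.keys(messages)) {
        // treat missing or whitespace-only values as empty
        const value = (fields[name] || "").trim();
        if (value.length === 0) errors.push(messages[name]);
      }
      return errors; // an empty array means the form may be submitted
    }
    ```

    Wire it to the form's submit handler and block submission while the returned array is non-empty.
    
    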

  • What is mapLN and why is it eating all our space :)

    here's a DbPringLog from a few months ago:
    <pre>
    [xoopit@xda-004 ###]$ java -jar ~/###/je-3.3.74.jar DbPrintLog -s 0x00000000 -e 0x0000000a -S
    <DbPrintLog>
    Log statistics:
    type total provisional total min max avg entries
    count count bytes bytes bytes bytes as % of log
    LN_TX 219,704 0 26,803,878 117 132 121 44.7
    MapLN 14 0 956 59 72 68 0
    NameLN_TX 4 0 292 69 79 73 0
    DupCountLN_TX 73,025 0 5,915,025 81 81 81 9.9
    DupCountLN 211 211 11,816 56 56 56 0
    FileSummaryLN 10 0 69,506 91 26,681 6,950 0.1
    IN 2,033 76 8,213,996 67 6,368 4,040 13.7
    BIN 4,064 3,957 12,297,614 57 6,319 3,025 20.5
    DIN 799 54 253,737 197 687 317 0.4
    DBIN 1,334 1,333 3,626,993 166 6,389 2,718 6
    Root 4 0 326 62 88 81 0
    Commit 73,238 0 2,783,044 38 38 38 4.6
    CkptStart 2 0 81 39 42 40 0
    CkptEnd 3 0 256 84 87 85 0
    BINDelta 35 0 19,110 75 975 546 0
    FileHeader 7 0 266 38 38 38 0
    key/data 13,621,638 (22.7)
    Total bytes in portion of log read: 59,996,896
    Total number of entries: 374,487
    Per checkpoint interval info:
    lnTxn ln mapLNTxn mapLN end-end end-start start-end maxLNReplay ckptEnd
    0 0 0 1 1,345 563 782 0 0x0/0x541
    20,529 0 0 7 5,516,716 4,949,678 567,038 20,529 0x0/0x5432ed
    128,339 0 0 6 34,716,133 54,481,939 0 128,339 0x7/0x392d2
    70,836 0 0 0 19,765,806 19,765,806 0 70,836 0xa/0x0
    </DbPrintLog>
    </pre>
    then a few days ago:
    <pre>
    [xoopit@xda-004 ###]$ java -jar ~/###/je-3.3.74.jar DbPrintLog -s 0x003ca27f -e 0x003cb06b -S
    <DbPrintLog>
    Log statistics:
    type total provisional total min max avg entries
    count count bytes bytes bytes bytes as % of log
    LN_TX 534 0 51,086 86 108 95 0.1
    LN 267,341 0 22,505,538 77 93 84 25.3
    MapLN 9 0 668,956 74,226 74,539 74,328 0.8
    DupCountLN_TX 178 0 11,393 63 65 64 0
    DupCountLN 22 0 912 40 42 41 0
    FileSummaryLN 7,669 0 693,427 72 2,462 90 0.8
    IN 9,182 166 33,886,067 81 6,428 3,690 38.1
    BIN 7,731 491 24,977,397 49 6,478 3,230 28.1
    DIN 909 9 2,468,396 144 6,445 2,715 2.8
    DBIN 619 56 3,555,037 136 6,494 5,743 4
    Root 1 0 8,260 8,260 8,260 8,260 0
    Commit 178 0 6,230 35 35 35 0
    CkptStart 7 0 224 32 32 32 0
    BINDelta 153 0 21,740 42 1,284 142 0
    DupBINDelta 14 0 4,686 74 1,324 334 0
    Trace 114 0 20,812 76 306 182 0
    FileHeader 10 0 380 38 38 38 0
    key/data 13,167,649 (14.8)
    Total bytes in portion of log read: 88,880,541
    Total number of entries: 294,671
    Per checkpoint interval info:
    lnTxn ln mapLNTxn mapLN end-end end-start start-end maxLNReplay ckptEnd
    534 267,341 0 9 30,270,000,000 30,258,643,275 0 267,875 0x3cb06b/0x0
    </DbPrintLog>
    </pre>
    and from today:
    <pre>
    xoopit@xda-004 ###]$ java -jar ~/###/je-3.3.74.jar DbPrintLog -s 0x004207c0 -e 0x004207c9 -S
    <DbPrintLog>
    Log statistics:
    type total provisional total min max avg entries
    count count bytes bytes bytes bytes as % of log
    LN 44 0 3,507 78 93 79 0
    MapLN 32 0 76,040,837 295,485 8,572,698 2,376,276 96.7
    FileSummaryLN 156 0 9,362 30 220 60 0
    IN 36 2 110,580 46 6,178 3,071 0.1
    BIN 43 43 88,914 79 6,478 2,067 0.1
    Root 11 0 2,405,444 218,657 218,708 218,676 3.1
    CkptStart 1 0 31 31 31 31 0
    CkptEnd 1 0 71 71 71 71 0
    Trace 13 0 1,146 51 282 88 0
    FileHeader 10 0 380 38 38 38 0
    key/data 1,967 (0)
    Total bytes in portion of log read: 78,660,272
    Total number of entries: 347
    Per checkpoint interval info:
    lnTxn ln mapLNTxn mapLN end-end end-start start-end maxLNReplay ckptEnd
    0 44 0 22 150,110,853,335 150,109,212,264 1,641,071 44 0x4207c6/0xd0557
    0 0 0 10 29,146,665 29,146,665 0 5 0x4207c9/0x0
    </DbPrintLog>
    </pre>
    as you can see 96.7% of our store is taken up by these mapLN entries. we haven't found any documentation as to what these things are. this is killing one of our BDB stores because in the last 24 hours we've generated around 10,000 new .jdb files when we used to have a few hundred...
    we're using je-3.3.74
    we upgraded to je-3.3.74 (from 3.3.62) on November 3rd but this particular issue didn't show up until around 24 hours ago.
    thanks for taking a look ~j

    Hi All,
    We've been working on this with the Xoopit folks and I wanted to follow up and post the resolution, since it potentially impacts everyone using JE 3.3.x.
    There is a bug in JE 3.3.74 and earlier, in all versions of the 3.3.x product. The fix for this is in JE 3.3.75, which currently is available on request. We haven't decided when we'll update our download site with this updated release. If you would like the updated release, please send email to mark.hayes at the obvious .com (oracle).
    Here's the change log entry for the bug, which should explain what you need to know:
    Fix a bug that caused the space taken by internal metadata in JE log files to increase over a long period of time. The rate of increase was slow in most cases, but in at least one observed case became rapid after a long time period and after the log cleaner became backlogged. To determine whether your JE log exhibits this problem, run
    java -jar je.x.y.z.jar DbPrintLog -h DIR -S
    and examine the line labeled MapLN on the left. If the amount of the log taken by MapLNs is 10% or greater, or if you see this number increasing steadily over time, then your application is probably experiencing this problem.
    By installing JE 3.3.75 or later, the excess disk space will automatically be reclaimed over time, as ordinary checkpoints and log cleaning occur. If you wish to recreate your database rather than wait for this to occur gradually, you can use DbDump and DbLoad to do so.
    We'd like to express our appreciation and sincere thanks to Jules and the other folks at Xoopit who reported this problem and helped us to diagnose it. We would not have found or fixed this problem as quickly as we did without their help.
    For reference, the support ticket # for this problem is: #16610
    If you have further questions, please reply to this forum post.
    Thanks,
    --mark

  • Help needed in MV Query

    Hello,
    I have created a MV as follows:
    CREATE MATERIALIZED VIEW Test1_MV
    BUILD IMMEDIATE
    REFRESH COMPLETE
    ENABLE QUERY REWRITE
    AS
    SELECT b.Customer_ID,
    j.MONTH_END_DATE,
    (AVG(sum(COUNT *(case when (m.DIM1_DSC ='ABC') then 1 else 0 end)))OVER (ORDER BY b.Customer_ID, j.MONTH_END_DATE ROWS 5 PRECEDING)) AS AVG_ABC_Count,
    (AVG(sum(AMT *(case when (m.DIM1_DSC ='ABC') then 1 else 0 end)))OVER (ORDER BY b.Customer_ID, j.MONTH_END_DATE ROWS 5 PRECEDING)) AS AVG_ABC_Amount,
    (AVG(sum(COUNT *(case when (m.DIM1_DSC ='DEF') then 1 else 0 end)))OVER (ORDER BY b.Customer_ID, j.MONTH_END_DATE ROWS 5 PRECEDING)) AS AVG_DEF_Count,
    (AVG(sum(AMT *(case when (m.DIM1_DSC ='DEF') then 1 else 0 end)))OVER (ORDER BY b.Customer_ID, j.MONTH_END_DATE ROWS 5 PRECEDING)) AS AVG_DEF_Amount,
    sum(COUNT *(case when (m.DIM1_DSC ='ABC') then 1 else 0 end)) as Cumm_ABC_Count,
    sum(AMT *(case when (m.DIM1_DSC ='ABC') then 1 else 0 end)) as Cumm_ABC_Amount,
    sum(COUNT *(case when (m.DIM1_DSC ='DEF') then 1 else 0 end)) as Cumm_DEF_Count,
    sum(AMT *(case when (m.DIM1_DSC ='DEF') then 1 else 0 end)) as Cumm_DEF_Amount
    FROM
    DIM_CUSTOMERTABLE b,
    DM_TIME j,
    DIM2_TABLE k,
    DIM3_TABLE l,
    DIM1_TABLE m,
    FACT_TABLE cd
    WHERE
    cd.CUSTOMER_ID = b.CUSTOMER_ID
    AND cd.DATE_ID = j.MONTH_ID
    AND cd.DIM2_ID = k.DIM2_ID
    AND cd.DIM3_ID = l.DIM3_ID
    AND cd.DIM1_ID = m.DIM1_ID
    AND j.YEAR_DSC in('2007','2008')
    GROUP BY b.CUSTOMER_ID, j.MONTH_END_DATE
    ORDER BY b.CUSTOMER_ID, j.MONTH_END_DATE;
    I have a problem..
    My Fact Table has only one row for a customer_Id say 123 i.e., it has data for only DIM1 of data for the year 2007 as follows:
    AMT     DATE_ID     DIM1_ID     DIM2_ID     DIM3_ID     COUNT     CUSTOMER_ID
    5310.85     2007.5     2     2     2     1     123
    So when I query the MV, AVG_ABC_Count and AVG_ABC_Amount should be the same as Cumm_ABC_Count and Cumm_ABC_Amount, because the average of a single value is that value itself. But I am getting different values for the average and cumulative columns.
    I am grouping by Customer_Id and MONTH_END_DATE. But why is the data differing?

    Please, somebody help me with this. I am stuck at this point and cannot find out why I am getting results like this. If anybody has any idea, it would be very helpful.
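    One likely cause, offered as a hedged guess rather than a confirmed answer: the analytic AVG uses OVER (ORDER BY b.Customer_ID, j.MONTH_END_DATE ROWS 5 PRECEDING) with no PARTITION BY, so the six-row moving window (current row plus five preceding) slides across customer boundaries and mixes in rows from other customers. A customer with a single row then averages its own row together with up to five rows of the preceding customer, which is exactly why the average and cumulative columns differ. Partitioning by customer keeps the window inside each customer:

    ```sql
    -- Sketch of the corrected window clause; the same change applies to the
    -- other three AVG columns. Names follow the original post.
    AVG(SUM(COUNT * (CASE WHEN m.DIM1_DSC = 'ABC' THEN 1 ELSE 0 END)))
        OVER (PARTITION BY b.Customer_ID
              ORDER BY j.MONTH_END_DATE
              ROWS 5 PRECEDING) AS AVG_ABC_Count
    ```

    With the partition in place, a single-row customer's moving average reduces to the row's own value, matching the cumulative column.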

  • How to distinguish built-in SQL functions of PL/SQL?

    I'm having a hard time figuring out which functions can be used ONLY in SQL statements and which can also be used in regular PL/SQL expressions (i.e., variable assignments). Can anyone show me a list of each, or perhaps a URL to look at?
    I have searched through both the developer's guide and the reference but couldn't find any appropriate indication in one place that makes it clear.
    For instance, I thought I could use the CAST function in a variable assignment like the following:
    declare
    cursor myCur is SELECT Value_varchar2(1) FROM table WHERE id = 1;
    myRec myCur%ROWTYPE;
    var_a NUMBER(1);
    begin
    OPEN myCur;
    FETCH myCur INTO myRec;
    CLOSE myCur;
    var_a := CAST(myCur.Value_varchar2(1) AS NUMBER(1));
    DBMS_OUTPUT.PUT_LINE('var_a = ' || TO_CHAR(var_a));
    end;
    It seems like the CAST function can ONLY be used in SQL statements, but no doc so far states that?!
    Edited by: HappyJay on 2010/05/12 12:05

    Sorry to bother you, Bob!
    I think I might have already found the list. Is it the following?
    ---------------------- QUOTED FROM Oracle® Database PL/SQL Language Reference 11g Release 2 (11.2)Part Number E10472-06
    SQL Functions in PL/SQL Expressions
    In PL/SQL expressions, you can use all SQL functions except:
    Aggregate functions (such as AVG and COUNT)
    Analytic functions (such as LAG and RATIO_TO_REPORT)
    Data mining functions (such as CLUSTER_ID and FEATURE_VALUE)
    Encoding and decoding functions (such as DECODE and DUMP)
    Model functions (such as ITERATION_NUMBER and PREVIOUS)
    Object reference functions (such as REF and VALUE)
    XML functions (such as APPENDCHILDXML and EXISTSNODE)
    These conversion functions:
    BIN_TO_NUM
    These miscellaneous functions:
    CUBE_TABLE
    DATAOBJ_TO_PARTITION
    LNNVL
    NVL2
    SYS_CONNECT_BY_PATH
    SYS_TYPEID
    WIDTH_BUCKET
    PL/SQL supports an overload of BITAND for which the arguments and result are BINARY_INTEGER.
    When used in a PL/SQL expression, the RAWTOHEX function accepts an argument of data type RAW and returns a VARCHAR2 value with the hexadecimal representation of bytes that comprise the value of the argument. Arguments of types other than RAW can be specified only if they can be implicitly converted to RAW. This conversion is possible for CHAR, VARCHAR2, and LONG values that are valid arguments of the HEXTORAW function, and for LONG RAW and BLOB values of up to 16380 bytes.
    ----------------------
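    Where a conversion has to happen inside a PL/SQL expression and you would rather not depend on whether CAST is accepted by your release, a plain conversion function sidesteps the issue. A hedged sketch against the original snippet (assuming the fetched column is named Value_varchar2; note that the record myRec, not the cursor myCur, holds the fetched value):

    ```sql
    -- TO_NUMBER is an ordinary single-row conversion function and is
    -- usable in PL/SQL expressions on all supported releases.
    var_a := TO_NUMBER(myRec.Value_varchar2);
    ```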

  • SELECT query takes too much time! Y?

    Plz find my SELECT query below:
    select w~mandt
    w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
    w~kwmeng w~vrkme w~matwa w~charg w~pstyv
    w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
    w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
    w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
    w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
    w~bedae w~cuobj w~mtvfp
    x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
    x~edatu
    x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
    x~ezeit
    into table t_vbap
    from vbap as w
    inner join vbep as x on x~vbeln = w~vbeln and
    x~posnr = w~posnr and
    x~mandt = w~mandt
    where
    ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
    ( ( ( erdat > pre_dat and erdat < p_syndt ) or
    ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
    w~matnr in s_matnr and
    w~pstyv in s_itmcat and
    w~lfrel in s_lfrel and
    w~abgru = ' ' and
    w~kwmeng > 0 and
    w~mtvfp in w_mtvfp and
    x~ettyp in w_ettyp and
    x~bdart in s_req_tp and
    x~plart in s_pln_tp and
    x~etart in s_etart and
    x~abart in s_abart and
    ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: it takes too much time to execute this statement.
    Could anybody change this statement and help me reduce the DB access time?
    Thx

    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    •     Select Over more than one internal table
    Selection Criteria
    1.     Restrict the data using the selection criteria itself, rather than filtering it out in ABAP code with a CHECK statement. 
    2.     Select with selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    4.     For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT. 
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    If all primary key fields are supplied in the Where conditions you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3.     Using buffered tables improves the performance considerably.
    Bypassing the buffer increases network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements  Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = ‘LF’ and cntry = ‘IN’.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The  above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = ‘LF’ and cntry = ‘IN’.
    Select Statements  For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered when using FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
              Loop at int_cntry.
      Select single * from zfligh into int_fligh
      where cntry = int_cntry-cntry.
      Append int_fligh.
                          Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements Select Over more than one Internal table
    1.     It's better to use views instead of nested Select statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    1.     Table operations should be done using explicit work areas rather than via header lines.
    READ TABLE ITAB INTO WA WITH KEY K = 'X‘ BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ ... BINARY SEARCH runs in O(log2(n)) time. The same task can be done more efficiently with:
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries, i.e. O(1).
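    The hash-based collect idea is not ABAP-specific. A minimal sketch of the same O(1)-per-entry aggregation, written in Java purely for illustration (the class and method names are my own):

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class CollectDemo {
        // Sums values per key in O(1) amortized time per entry,
        // analogous to ABAP's COLLECT into a hashed table.
        static Map<String, Integer> collect(String[] keys, int[] values) {
            Map<String, Integer> totals = new HashMap<>();
            for (int i = 0; i < keys.length; i++) {
                // merge() adds to the existing total or inserts a new entry
                totals.merge(keys[i], values[i], Integer::sum);
            }
            return totals;
        }

        public static void main(String[] args) {
            String[] k = {"A", "B", "A", "C", "B"};
            int[] v = {10, 20, 5, 7, 3};
            System.out.println(collect(k, v));
        }
    }
    ```

    Each lookup is a hash probe, so the cost per collected line stays flat as the table grows, which is exactly why COLLECT beats the READ BINARY SEARCH variant above.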
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10.   "DELETE ADJACENT DUPLICATES" accelerates the task of deleting duplicate entries considerably as compared to "READ-LOOP-DELETE-ENDLOOP".
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to "DO-DELETE-ENDDO".
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables with "ITAB2[] = ITAB1[]" is much faster than "LOOP-APPEND-ENDLOOP".
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13.   Specify the sort key as restrictively as possible to make the program run faster.
    "SORT ITAB BY K." runs faster than "SORT ITAB."
    Internal Tables (contd.)
    Hashed and Sorted tables
    1.     For single read access, hashed tables are faster than sorted tables.
    2.     For partial sequential access, sorted tables are faster than hashed tables.
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    For single read access, this runs faster than the same code against a sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly, for partial sequential access STAB runs faster than HTAB:
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster than
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
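    The same trade-off exists outside ABAP. As a rough analogy (illustrative Java, names are my own), HashMap behaves like a hashed table and TreeMap like a sorted one:

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    public class TableAccessDemo {
        // Fill a map with keys 4, 8, ..., 4*n, mirroring the DO 250 TIMES loop above
        static TreeMap<Integer, String> buildSorted(int n) {
            TreeMap<Integer, String> m = new TreeMap<>();
            for (int i = 1; i <= n; i++) m.put(4 * i, "row" + i);
            return m;
        }

        public static void main(String[] args) {
            TreeMap<Integer, String> sorted = buildSorted(250);
            Map<Integer, String> hashed = new HashMap<>(sorted);

            // Single read access: the hashed map answers in O(1)
            String hit = hashed.get(100);

            // Partial sequential access: only the sorted map can walk a key range
            int rangeSize = sorted.subMap(100, 200).size();
            System.out.println(hit + " / keys in [100,200): " + rangeSize);
        }
    }
    ```

    The hashed map wins on single-key reads, while the sorted map supports ordered range iteration, which is what LOOP ... WHERE over a sorted table exploits.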

  • How to find the sum of a column

    I need to find the sum of a column and use it in another column. Here is the example.
    Column names: Feedback (Good, Avg, Poor), Count (number of Good, Avg, and Poor ratings) and %age (Feedback/sum(feedback)).
    I want to compute the sum in a Java class and also calculate the last column there.
    Please tell me some way to do it.

    OK, thanks for letting me know. I will formulate the question properly:
    This is what my UI should look like:
    Rating     Count  Percent
    Excellent    2      20
    Good         6      60
    Poor         1      10
    Bad          1      10
    Now I have the following columns in the database:
    meaning and feedback_rating.
    So the following SQL query:
    SELECT hrl.meaning rating,
           SUM(DECODE(bcpi.feedback_rating, NULL, 0, 1)) counted
      FROM cmp_cwb_person_info bcpi,
           hr_lookups hrl
     GROUP BY hrl.meaning
    will give me this result:
    rating     counted
    Excellent    2
    Good         6
    Poor         1
    Bad          1
    Now I want a third column, percentage. Earlier we did this calculation in the SQL query itself, so the query was:
    SELECT hrl.meaning rating,
           SUM(DECODE(bcpi.feedback_rating, NULL, 0, 1)) counted,
           SUM(DECODE(bcpi.feedback_rating, NULL, 0, 1)) /
           (MAX(SELECT COUNT(*) FROM cmp_cwb_person_info bcpi,
                                     hr_lookups hrl)) percent
      FROM cmp_cwb_person_info bcpi,
           hr_lookups hrl
     GROUP BY hrl.meaning
    Hence the third column (percent) was calculated in the SQL query itself.
    But now I feel the query would perform better if we fetched only the first two columns from the database and calculated the third column programmatically in the Java code.
    So this is what I want to know: how can I do that?
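    One way to do it, sketched in plain Java (the class and method names here are hypothetical), assuming the first query's (rating, count) rows have been loaded into a map: sum the counts once, then derive each percentage from the total.

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class FeedbackPercent {
        // Given rating -> count, return rating -> percentage of the grand total.
        // Doing this in Java avoids the extra subquery in the SQL version.
        static Map<String, Integer> percentages(Map<String, Integer> counts) {
            int total = counts.values().stream().mapToInt(Integer::intValue).sum();
            Map<String, Integer> result = new LinkedHashMap<>();
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                // integer percentage, matching the sample output (20, 60, 10, 10)
                result.put(e.getKey(), total == 0 ? 0 : 100 * e.getValue() / total);
            }
            return result;
        }

        public static void main(String[] args) {
            Map<String, Integer> counts = new LinkedHashMap<>();
            counts.put("Excellent", 2);
            counts.put("Good", 6);
            counts.put("Poor", 1);
            counts.put("Bad", 1);
            System.out.println(percentages(counts)); // {Excellent=20, Good=60, Poor=10, Bad=10}
        }
    }
    ```

    The database then only returns the grouped counts, and the single pass over at most a handful of rating rows is negligible compared to the removed subquery.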
