Not going to fetch the records

My code is as follows:
PROCEDURE STD_MESSAGE_ROUTINE (P_ERROR_NO  IN NUMBER,
                               P_LANG_CODE IN VARCHAR2,
                               P_MSG1      IN VARCHAR2 := NULL,
                               P_MSG2      IN VARCHAR2 := NULL,
                               P_MSG3      IN VARCHAR2 := NULL,
                               P_MSG4      IN VARCHAR2 := NULL,
                               P_MSG5      IN VARCHAR2 := NULL,
                               P_MSG6      IN VARCHAR2 := NULL,
                               P_MSG7      IN VARCHAR2 := NULL,
                               P_MSG8      IN VARCHAR2 := NULL) IS
  CURSOR ERROR_CUR IS
    SELECT DECODE(P_LANG_CODE, 'E', EM_ENG_MSG, EM_BL_MSG) "MSG"
      FROM SYSTEM_ERROR_MESSAGE
     WHERE EM_ERR_NO = P_ERROR_NO;
  CURSOR C1 IS
    SELECT EM_ENG_MSG
      FROM SYSTEM_ERROR_MESSAGE
     WHERE EM_ERR_NO = P_ERROR_NO;
  ERROR_MSG1 ERROR_CUR%ROWTYPE;
  MSG2       VARCHAR2(100);
  ERROR_MSG  VARCHAR2(500);
  M_MESSAGE  VARCHAR2(100);
BEGIN
  DISP_MESSAGE('in std');
  OPEN ERROR_CUR;
  OPEN C1;
  DISP_MESSAGE('goin to fech C1');
  FETCH C1 INTO MSG2;
  FETCH C1 INTO ERROR_MSG1;
  DISP_MESSAGE(MSG2);
  DISP_MESSAGE('goin to fech');
  FETCH ERROR_CUR INTO ERROR_MSG1;
  DISP_MESSAGE('in std2');
  IF ERROR_CUR%NOTFOUND THEN
    DISP_MESSAGE('Get altert');
    GET_ALERT_MSG('08', M_MESSAGE);
    DISP_MESSAGE('Get altert done');
    DISP_ALERT(TO_CHAR(P_ERROR_NO) || M_MESSAGE);
    DISP_MESSAGE('display altert');
  ELSE
    ERROR_MSG := REPLACE(ERROR_MSG, 'ERR1', P_MSG1);
    ERROR_MSG := REPLACE(ERROR_MSG, 'ERR2', P_MSG2);
    ERROR_MSG := REPLACE(ERROR_MSG, 'ERR3', P_MSG3);
    ERROR_MSG := REPLACE(ERROR_MSG, 'ERR4', P_MSG4);
    ERROR_MSG := REPLACE(ERROR_MSG, 'ERR5', P_MSG5);
    ERROR_MSG := REPLACE(ERROR_MSG, 'ERR6', P_MSG6);
    ERROR_MSG := REPLACE(ERROR_MSG, 'ERR7', P_MSG7);
    ERROR_MSG := REPLACE(ERROR_MSG, 'ERR8', P_MSG8);
    DISP_ALERT(TO_CHAR(P_ERROR_NO) || ': ' || ERROR_MSG);
  END IF;
  CLOSE C1;
  CLOSE ERROR_CUR;
END;
I call it as std_message_routine('11','ARD');
But the whole thing never gets as far as fetching the records. The last message I see is the one from disp_message('goin to fech C1'); nothing appears after it, and if I press Enter I just get the same thing again!
Any help?

Abdetu,
To be honest, I'm not a developer; I was just doing some random code checks on an app developed by someone else. I was trying to set up a centralized message routine, and the system message level was set to 20, so where it should have shown me the error, the message was somehow being suppressed. I have no idea why it behaved that way, but when I set the system message level to 0 I started getting the error message.
If you'd like to ask me anything else, please feel free; I'll be happy to share the code.
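
For anyone hitting the same wall, a minimal sketch of the setting that fixed it (the trigger name is my assumption, not from the post):
-- In Oracle Forms, messages with severity below :SYSTEM.MESSAGE_LEVEL are
-- suppressed; at level 20 almost everything is hidden, while level 0 lets
-- every message through. For example, in a WHEN-NEW-FORM-INSTANCE trigger:
:SYSTEM.MESSAGE_LEVEL := '0';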

Similar Messages

  • Query could not fetch the record

    Hello,
    Could someone help me please?
    I have a listing of my sales orders, and I want to make changes to an order by opening its form already fetched with that record. When I click a particular order number in my order listing and call the form to display the details, the form opens but says "Query could not fetch the record". I do not know why. Please help me with a solution.
    Thanx

    Hello,
    I think you are passing the order number to the called form as a parameter. If you are using a parameter list, check:
    1. Is the parameter data arriving in the called form correctly?
    2. Have you changed the WHERE clause of the order block so that it displays the record with the passed order number? (A sketch follows below.)
    I am expecting more details from you.
    Thanx
    Adi
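
    A hedged sketch of point 2; the block, column, and parameter names here are assumptions, not from the post:
    -- In the called form, e.g. in a WHEN-NEW-FORM-INSTANCE trigger:
    -- restrict the block's query to the passed order number, then query it.
    SET_BLOCK_PROPERTY('ORDERS', ONETIME_WHERE,
                       'ORDER_NO = ' || :PARAMETER.P_ORDER_NO);
    GO_BLOCK('ORDERS');
    EXECUTE_QUERY;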

  • Not fetching the records through DTP

    Hi Gurus,
    I am facing a problem while loading data into an InfoCube through a DTP.
    I have successfully loaded the data up to the PSA, but I am not able to load the records into the InfoCube.
    The request completed with status green, but I can see only 0 records loaded.
    Later, one of my colleagues executed the same DTP successfully, with all the records loaded.
    Can you please tell me why it is not working with my user ID?
    I found the following difference in the monitor: I cannot see any selections for my request, but I can see REQUID = 871063 in the selection of the request started by my colleague.
    Can anyone tell me why REQUID = 871063 is not filled in automatically when I start the schedule?

    Hi,
    I guess the DTP is in DELTA update mode. You and your colleague executed the SAME DTP object with a small time gap, and during that period no new transactions were posted in the source, so your request had nothing to pick up.
    Try executing it again after a couple of hours.
    Regards

  • Not able to fetch the data through a Virtual Cube

    Hi Experts,
    My requirement is to fetch data from the source system (from a database table) using a Virtual Cube.
    I have created the Virtual Cube and the corresponding function module in the source system.
    The function module works fine if the database table in the source system is small, but if the table contains a huge amount of data (millions of records), I am not able to fetch the data.
    Below is the code I have incorporated in my function module.
    DATA: l_th_mapping TYPE cl_rsdrv_external_iprov_srv=>tn_th_iobj_fld_mapping,
          l_s_map      TYPE cl_rsdrv_external_iprov_srv=>tn_s_iobj_fld_mapping.
    l_s_map-iobjnm = '0PARTNER'.
    l_s_map-fldnm  = 'PARTNER'.
    INSERT l_s_map INTO TABLE l_th_mapping.
    CREATE OBJECT l_r_srv
      EXPORTING
        i_tablnm              = '/SAPSLL/V_BLBP'
        i_th_iobj_fld_mapping = l_th_mapping.
    l_r_srv->open_cursor(
      i_t_characteristics = characteristics[]
      i_t_keyfigures      = keyfigures[]
      i_t_selection       = selection[] ).
    l_r_srv->fetch_pack_data(
      IMPORTING
        e_t_data = data[] ).
    return-type = 'S'.
    In the above function module, internal table L_TH_MAPPING maps the InfoObjects of the Virtual Cube to the corresponding fields of the underlying database table.
    The problem I am facing is that in the method FETCH_PACK_DATA, the program initially tries to fetch all the records from the database table into an internal table. If the database table is very large, this logic does not work.
    Could you please help me with how to handle this kind of issue?


  • Best way to fetch the record

    Hi,
    Please suggest the best way to fetch records from the table described below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office, a record is created for him. Company policy is to keep 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
    The table has the following key columns for the SELECT (sample table):
    CLIENT_VISIT
      ID          NUMBER(12,0)  -- sequence-generated number
      EFF_DTE     DATE          -- effective date of the customer (sometimes the client becomes invalid and later becomes valid again)
      CREATE_TS   TIMESTAMP(6)
      CLIENT_ID   NUMBER(9,0)
      CASCADE_FLG VARCHAR2(1)
    On most of the reports the records are fetched by MAX(eff_dte), MAX(create_ts) and cascade flag = 'Y'.
    I have the following two queries, but neither is cost-effective and each takes 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
      FROM au_subtyp au_subtyp1
     WHERE au_subtyp1.create_ts =
             (SELECT MAX(au_subtyp2.create_ts)
                FROM au_subtyp au_subtyp2
               WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
                 AND au_subtyp2.create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
                 AND au_subtyp2.eff_dte =
                       (SELECT MAX(au_subtyp3.eff_dte)
                          FROM au_subtyp au_subtyp3
                         WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                           AND au_subtyp3.create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
                           AND au_subtyp3.eff_dte <= TO_DATE('2012-12-31', 'YYYY-MM-DD')))
       AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
       9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
    Code 2:
    I already raised a thread a week back, and Dom suggested the following query; it is cost-effective, but the performance is the same and it uses the same amount of temp tablespace:
    select au_id_k,pgm_struct_id_k from (
    SELECT au_id_k
          ,      pgm_struct_id_k
          ,      ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
          create_ts, eff_dte,exists_flg
          FROM   au_subtyp
          WHERE  create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
          AND    eff_dte  <= TO_DATE('2012-12-31','YYYY-MM-DD') 
          ) d  where rn =1   and exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
                   2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Thanks,
    Vijay

    Hi Justin,
    Thanks for your reply. I am running this on our test environment, as I don't want to run it on production right now. The test environment holds 2,809,605 records.
    The query returns 281,699 rows, so the selectivity is about 0.099 (roughly 10%). There are 2,808,905 distinct combinations of create_ts, eff_dte, and exists_flg, so as you said, I am sure an index scan is not going to help much.
    The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables (which have the same design), the temp tablespace grows even bigger.
    Both the production and test environments are 3-node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5
    Thanks,
    Vijay

  • Not able to edit the records in the table

    Hi Experts,
    I am not able to edit or delete the records in a database table.
    The fields are not enabled, and the edit icon is not there at all.
    What can I do?
    Cheers,
    Karthick

    Hi,
    In SM30 you can edit or delete records of a table. Before doing this, you need to create a table maintenance generator for the table.
    regards
    lakshmi

  • Can I buy an unlocked iPhone from a US Apple Store as a non-resident?

    I'm visiting New York (USA) for work on Sunday and just wanted to know whether I can buy an unlocked iPhone from the Apple Store, since I'm not a resident. For the record, we don't have an Apple Store in my country.

    You can buy an unlocked iPhone at any of the 5 Apple Stores in NYC. All are open on Sunday. Note a couple of things, however:
    US iPhone 5s will work on 3G in the rest of the world, but not on LTE, because the US phones use different channels than other countries.
    Be sure you say "unlocked", not "no contract".
    If it breaks, it must be returned to a US store for repair, and if shipped to Apple it must be shipped from and to a US address.

  • Can we split and fetch the records in Database Adapter

    Hi,
    I designed a Database Adapter to fetch records from an Oracle database. Sometimes the adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process is choking and failing with:
    java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    Could someone help me resolve this?
    Can the Database Adapter split and fetch the records when there are more than 1,000, e.g. the first 100 records as one set, the next 100 as a second set, and so on?
    Thank you.

    You can send the records in batches using the debatching feature of the DB adapter; a sketch follows below. Refer to the documentation for implementation details.
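
    A hedged illustration of what debatching settings look like in the adapter's .jca file, assuming a polling-style configuration; the property values are illustrative, not from the thread:
    <!-- MaxRaiseSize caps the rows handed to each BPEL instance;
         MaxTransactionSize caps the rows read per database transaction. -->
    <property name="MaxRaiseSize" value="100"/>
    <property name="MaxTransactionSize" value="1000"/>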

  • Fetch the records from cache

    Say I have an emp table:
    eno  ename  sales
    1    david  1100
    2    lara   200
    3    james  1000
    1    david  1200
    2    lara   5400
    4    white  890
    3    james  7500
    1    david  1313
    eno can be duplicated.
    When I give empno 1, I want to display his sales, i.e. 1100, 1200, 1313.
    The first time, I go to the database and fetch the records; from then on I don't want to go to the database, I want to fetch the records from a cache.
    I thought of doing it with a HashMap or Hashtable, but neither allows duplicate keys (and empno has duplicate values).
    How do I solve this problem?

    Hi,
    Have you considered splitting that table up? Caching is a very good idea, but doesn't this make it evident that your table structure keeps a lot of redundant data? It hardly makes sense to have sales figures in an emp table. Instead, you can have an Emp table containing eno and ename, with eno as the primary key, and another table called Sales with eno and sales columns, where eno references the Emp table.
    If you still want to continue with this structure, then I think you can go ahead with the solution already suggested to you (sketched below).
    Aviroop
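
    A minimal Java sketch of that approach: a HashMap cannot hold duplicate keys, but it can map each eno to the list of that employee's sales figures. The class and method names are my own illustration:
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class SalesCache {
        // One key per employee number; duplicate enos go into the value list.
        private final Map<Integer, List<Integer>> cache = new HashMap<>();

        // Record a sale for an employee (called while loading from the database).
        public void put(int eno, int sale) {
            cache.computeIfAbsent(eno, k -> new ArrayList<>()).add(sale);
        }

        // Returns the cached sales, or null if this eno was never loaded,
        // signalling that the caller should hit the database once.
        public List<Integer> get(int eno) {
            return cache.get(eno);
        }
    }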

  • Less SQL query cost... but more time to fetch the records

    Hi,
    I am facing a peculiar situation with a costly DB view.
    I managed to reduce the total view cost: specifically, for the 223 records fetched, the cost dropped from 149 to 74, the recursive calls from 796 to 224, the consistent gets from 311,516 to 310,341, and the physical reads from 7 to 0.
    But the amount of time needed to fetch the results is greater than with the old version of the view (it may be double).
    Do you have any idea about this?
    Note: I have gathered fresh statistics, and I am on DB 10g Release 2.
    Thanks,
    Sim

    Try tracing the query and see what tkprof shows you.
    alter session set events '10046 trace name context forever, level 12';
    run the query
    alter session set events '10046 trace name context off';
    Then run tkprof on the trace file produced to see where the database is spending its time and effort. Do likewise for the baseline query in a different session (so you will generate a separate trace file).
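    For reference, a hedged example of the tkprof invocation from the OS shell; the trace file name is an assumption, while sys=no and the sort keys are standard options:
    tkprof orcl_ora_12345.trc query_trace.txt sys=no sort=exeela,fchela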
    If that does not produce anything useful, try using
    alter session set events '10053 trace name context forever';
    run the query
    alter session set events '10053 trace name context off';
    Then examine the trace file to see if you can learn anything.
    Since you have not given us anything more to go on, that is about all the help I can give....

  • How to quickly fetch the records from an SQL recordset

    I'm using LW/CVI V7.1.1 and SQL Toolkit V2.06.
    When displaying the recordset from a SELECT statement I use the following construct:
      SQLhandle1 = DBActivateSQL(DBhandle, SELECTtext);
      /* DBBindCol() statements... */
      numRecs = DBNumberOfRecords(SQLhandle1);
      for (n = 1; n <= numRecs; n++) {
          DBFetchNext(SQLhandle1);
          /* display the record to the user... */
      }
      DBDeactivateSQL(SQLhandle1);
    This has always worked fine for me with local databases. Now I am developing an app for a remote database, and fetching each selected record is proving to be an issue: it takes at best 60 ms of round-trip network access to fetch each record, so when many records are selected the fetching adds up to a considerable delay.
    My question is: how can I bind the entire recordset to my application variables (or to a local table?) in a single request to the database? Does LW/CVI support such a method? Or perhaps someone knows an SQL method to help me?
    Thanks

    Hi Michael,
    Thanks for the help; this is what I was looking for. Not sure why I missed it!
    However, after trying it out, it doesn't seem to help. The statement DBGetVariantArray(SQLhandle, &array, &recs, &fields); takes the same amount of time to get the records as the individual DBFetchNext() calls. So if my SQL statement matches 100 records, the DBGetVariantArray() call takes 100 * 60 ms to complete.
    Is there a DB attribute setting that needs to be changed?

  • How to do query optimization? It takes more time to fetch the records from the DB. Very urgent, I need your assistance

    Hi all,
    I want to fetch just twenty thousand records from a table, but my query takes a long time to fetch them. I have posted my working query; could you correct it for me? Thanks in advance.
    Query:
    SELECT b.Concatenated_account Account,
           b.Account_description description,
           SUM(CASE WHEN bl.ACTUAL_FLAG = 'B' THEN
                      (NVL(bl.PERIOD_NET_DR, 0) - NVL(bl.PERIOD_NET_CR, 0))
                    + (NVL(bl.PROJECT_TO_DATE_DR, 0) - NVL(bl.PROJECT_TO_DATE_CR, 0))
               END) "Budget_2011"
      FROM gl_balances bl,
           gl_code_combinations gcc,
           psb_ws_line_balances_i b,
           gl_budget_versions bv,
           gl_budgets_v gv
     WHERE b.CODE_COMBINATION_ID = gcc.CODE_COMBINATION_ID
       AND bl.CODE_COMBINATION_ID = gcc.CODE_COMBINATION_ID
       AND bl.BUDGET_VERSION_ID = bv.BUDGET_VERSION_ID
       AND gv.budget_version_id = bv.budget_version_id
       AND gv.latest_opened_year IN (SELECT latest_opened_year - 3
                                       FROM gl_budgets_v
                                      WHERE latest_opened_year = :BUDGET_YEAR)
     GROUP BY b.Concatenated_account, b.Account_description

    Hi,
    If this question is related to SQL, then please post it in the SQL forum.
    Otherwise, provide more information on how this SQL is being used, and say whether you want to tune the SQL itself or the way it fetches the information from the DB and displays it in OAF.
    Regards,
    Sandeep M.

  • Fetch the records based on number

    Hi experts,
    I have a requirement: if the user gives an input of 5, the program should fetch 5 records from the database.
    For example, the database values are:
    SERNR   MATNR
    101     A
    102     A
    103     A
    104     A
    105     A
    106     A
    107     A
    If the user gives the input 5, it should fetch 5 records, i.e. 101 to 105. Can anybody please help me write the select query for this?
    Thanks in advance,
    Veena.

    Hi Veena,
    You can use UP TO n ROWS in your select query. For example:
    SELECT MATNR FROM MARA INTO TABLE IT_MATNR UP TO P_NUMBER ROWS.
    "Here P_NUMBER is the selection-screen parameter
    It will fetch records based on the number in the parameter.
    Thanks & Regards,
    Faheem.
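
    A hedged aside: UP TO n ROWS alone does not guarantee WHICH n rows come back, so to get 101 to 105 specifically you would normally sort as well. A sketch, with ZSERIALS standing in for the unnamed table:
    " ZSERIALS is hypothetical; ORDER BY makes "the first five" deterministic.
    DATA it_result TYPE STANDARD TABLE OF zserials.
    SELECT sernr matnr FROM zserials
      INTO CORRESPONDING FIELDS OF TABLE it_result
      UP TO p_number ROWS
      WHERE matnr = 'A'
      ORDER BY sernr.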

  • Hibernate fails to fetch a record that exists in the database

    Hi,
    We are developing a J2EE-based server application which has to cater to different types of mobile clients.
    The clients ping the server at a rate of 4-5 requests per second.
    I use HibernateUtil.getSession().load(PK) and it returns null, but the record is very much there in the DB; I have not inserted that record using plain JDBC.
    This happens at times, not always.
    Exception: No row with the given identifier exists in the session.
    If I destroy and recreate the session factory, it works fine, but that is an expensive operation.
    DB: MySQL 5
    Hibernate version 3
    My mapping file:
    <?xml version="1.0"?>
    <!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
    <hibernate-mapping>
    <class name="client.Contacts" table="contacts">
    <id name="cId" column="c_Id" unsaved-value="null">
                   <generator class="identity" />               
              </id>
    <property name="ContactID">
    <column name="ContactID" />
    </property>
    <property name="firstName">
    <column name="FirstName" />
    </property>
    <property name="lastName">
    <column name="LastName" />
    </property>
    <property name="address">
    <column name="address" />
    </property>
    <property name="email">
    <column name="Email" />
    </property>
    <property name="url">
    <column name="url" />
    </property>
    <property name="birthday">
    <column name="birthday" />
    </property>
    <property name="note">
    <column name="note" />
    </property>
    <property name="telephone">
    <column name="telephone" />
    </property>
    <property name="telephoneFax">
    <column name="telephoneFax" />
    </property>
    <property name="telephoneHome">
    <column name="telephoneHome" />
    </property>
    <property name="telephoneWork">
    <column name="telephoneWork" />
    </property>
    <property name="telephoneCell">
    <column name="telephoneCell" />
    </property>
    <property name="nickName">
    <column name="nickName" />
    </property>
    <property name="version">
    <column name="version" />
    </property>
    <property name="userPhoneFK">
    <column name="userPhoneFK" />
    </property>
    <property name="isDeleted">
    <column name="isDeleted"/>
    </property>
    <property name="userModifiedFlag">
    <column name="isModifiedbyUser"/>
    </property>
    <property name="isRestored">
    <column name="isRestored"/>
    </property>
    <property name="lastBackUpTime">
                   <column name="lastBackUpTime" />
                   </property>
                   <property name="isAddedOnServer">
                        <column name="isAddedOnServer" />
                   </property>
    </class>
    </hibernate-mapping>
    Java code:
    String sortContactsQuery = " from Contacts c where c.isRestored = 'false' and c.isDeleted = 'false' and c.userPhoneFK = " + msisdnID + " order by c.ContactID ASC ";
    contactList = getSession().createQuery(sortContactsQuery).list();
    This list is empty even though there are matching records.
    Thanks in advance.
    Ven

    Sorry, but you have posted your question in the wrong forum; it is related to neither Patterns nor OO Design.
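
    An aside on the symptom, since it may help the next reader: "No row with the given identifier exists" is what load() reports for a missing or stale identifier, while get() simply returns null. A hedged sketch of a cheaper recovery than rebuilding the session factory, assuming stale first-level-cache state is the culprit (HibernateUtil is the helper from the post; pk is illustrative):
    import org.hibernate.Session;

    // session.clear() detaches every cached entity, so the next read goes
    // back to the database; far cheaper than recreating the SessionFactory.
    Session session = HibernateUtil.getSession();
    session.clear();
    Contacts contact = (Contacts) session.get(Contacts.class, pk); // null if truly absent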

  • Not able to fetch the specification limit results in the quality certificate

    Hi,
    I am facing an issue with an outgoing quality certificate. I am able to fetch the specifications for the inspection characteristics and their results in the quality certificate; however, some of them are reflected and some are not.
    Please advise.
    Regards,
    Vivek Sharma

    Hello Vivek,
    What is misleading in the information I gave you?
    Transaction QC03 is used to view the quality certificate.
    The text element is found on the characteristics overview screen.
    You should be able to determine the certificate type used.
    Furthermore, you must distinguish whether you are using quantitative or qualitative characteristics.
    If you are using qualitative characteristics, you might need text element 0002.
    Regards,
    Isabelle
