Not fetching the records through DTP

Hi Gurus,
I am facing a problem while loading data into an InfoCube through a DTP.
I have successfully loaded the data as far as the PSA, but I am not able to load the records into the InfoCube.
The request finished successfully with status green, but I can see only 0 records loaded.
Later, one of my friends executed the same DTP successfully, with all the records loaded.
Can you please tell me why it is not working with my user ID?
I have found the following difference in the monitor:
I am not able to see any selections for my request, but I am able to see REQUID = 871063 in the selections of the request started by my friend.
Can anyone tell me why that REQUID = 871063 is not filled in automatically when I start the schedule?

Hi,
I guess the DTP update mode is DELTA UPDATE. A delta DTP only picks up source requests that have not yet been transferred, and you and your friend/colleague executed the SAME DTP object with a small time gap during which no new transactions were posted in the source.
Try executing it again after a couple of hours.
Regards

Similar Messages

  • Query could not fetch the record

    Hello,
    Could someone help me please ?
    I have a listing of my sales orders and I want to make changes to an order by opening the form fetched with that record. When I click a particular order number in my listing of orders and call the form to display the details, it calls the form but says "Query could not fetch the record". I do not know why. Please help me with the solution.
    Thanx

    Hello,
    I think you are passing the order number to the called form as a parameter. If you are using a parameter list, check:
    1. Is the parameter data arriving in the form correctly?
    2. Have you changed the WHERE clause of the other block so that it will display the record with the passed order number?
    I am expecting more details from you.
    Thanx
    Adi
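
    A minimal sketch of point 2, assuming the orders block is named ORDERS and the order number arrives in a parameter P_ORDERNO (both names are hypothetical, and ORDER_NO is assumed to be numeric): in a trigger such as WHEN-NEW-FORM-INSTANCE you can set the block's WHERE clause from the parameter and then query the block.

    -- Forms PL/SQL sketch; ORDERS, ORDER_NO and P_ORDERNO are assumed names
    SET_BLOCK_PROPERTY('ORDERS', DEFAULT_WHERE,
                       'ORDER_NO = ' || :PARAMETER.P_ORDERNO);
    GO_BLOCK('ORDERS');
    EXECUTE_QUERY;

    If the record still does not come up, first message out :PARAMETER.P_ORDERNO in the form to confirm that the value actually arrived (point 1).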

  • Should not fetch the records whose TRAN_EFCT_DTE is more than 550 days old

    I have a table BRKG_TRA, and below is the structure of the table:
    BRKG_ORDER_ID     VARCHAR2(15 BYTE)
    BRKG_ORDER_ID_CNTX_CDE     VARCHAR2(10 BYTE)
    BRKG_ACCT_SIDE_CDE     CHAR(1 BYTE)
    TRD_FTR     NUMBER(15,9)
    PGM_ORIG_TRAN_ID     VARCHAR2(6 BYTE)
    BRKG_OPT_OPEN_CLOS_CDE     VARCHAR2(5 BYTE)
    BRKG_ORDER_QTY     NUMBER(17,4)
    TRAN_ID     VARCHAR2(20 BYTE)
    TRAN_CNTX_CDE     VARCHAR2(10 BYTE)
    CRTE_PGM     VARCHAR2(50 BYTE)
    CRTE_TSTP     DATE
    UPDT_PGM     VARCHAR2(50 BYTE)
    UPDT_TSTP     DATE
    DATA_GRP_CDE     VARCHAR2(10 BYTE)
    TRAN_EFCT_DTE     DATE

    select * from <table name> where <dt_field> > sysdate - 550
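
    Applied to the table above, that template becomes the following (a sketch; use TRUNC(SYSDATE) instead of SYSDATE if the cutoff should ignore the time of day):

    SELECT *
    FROM   brkg_tra
    WHERE  tran_efct_dte > SYSDATE - 550;  -- keeps only rows from the last 550 days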

  • Lookup in transformation not fetching all records

    Hi Experts,
    In the routine of the transformation of a DSO (say DSO1), I have written a look-up on another DSO (say DSO2) to fetch records. I have used all the key fields of DSO2 in the SELECT statement, but the look-up is still not fetching all the records into DSO1. There is a difference in the aggregated value of the key figure between the two DSOs. Please suggest how I can remove this error.
    Thanks,
    Tanushree

    Hi Tanushree,
    The code you have written in the field routine for the lookup is not fetching the data. You can debug the field routine code in the simulation mode of DTP execution by setting a breakpoint after the transformation.
    You can also test the routine without actually loading the data:
    double-click the rule where you have the routine; below it there is an option called "test routine".
    There you can pass input parameters.
    I hope this gives you an idea.
    Regards
    Chandoo7

  • Best way to Fetch the record

    Hi,
    Please suggest the best way to fetch records from the table described below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office, a record is created for him. Company policy is to maintain 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
    The table has the following key columns for the SELECT (sample table):
    Client_Visit
    ID NUMBER(12,0) -- sequence-generated number
    EFF_DTE DATE -- effective date of the customer (sometimes the client becomes invalid and then valid again)
    Create_TS TIMESTAMP(6)
    Client_ID NUMBER(9,0)
    Cascade_Flg VARCHAR2(1)
    On most of the reports the records are fetched by MAX(eff_dte), MAX(create_ts), and cascade flag = 'Y'.
    I have the following two queries, but neither of them is cost-effective and both take 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
    FROM   au_subtyp au_subtyp1
    WHERE  au_subtyp1.create_ts =
           (SELECT MAX (au_subtyp2.create_ts)
            FROM   au_subtyp au_subtyp2
            WHERE  au_subtyp2.au_id_k = au_subtyp1.au_id_k
            AND    au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
            AND    au_subtyp2.eff_dte =
                   (SELECT MAX (au_subtyp3.eff_dte)
                    FROM   au_subtyp au_subtyp3
                    WHERE  au_subtyp3.au_id_k = au_subtyp2.au_id_k
                    AND    au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                    AND    au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
    AND    au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
       9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
    Code 2:
    I raised a thread a week back and Dom suggested the following query. It is cost-effective, but the performance is the same and it uses the same amount of temp tablespace.
    select au_id_k,pgm_struct_id_k from (
    SELECT au_id_k
          ,      pgm_struct_id_k
          ,      ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
          create_ts, eff_dte,exists_flg
          FROM   au_subtyp
          WHERE  create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
          AND    eff_dte  <= TO_DATE('2012-12-31','YYYY-MM-DD') 
          ) d  where rn =1   and exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
                  2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Thanks,
    Vijay

    Hi Justin,
    Thanks for your reply. I am running this on our test environment, as I don't want to run it on the production environment now. The test environment holds 2,809,605 records (about 2.8 million).
    The query output count is 281,699 records (about 280 thousand) and the selectivity is 0.099. There are 2,808,905 distinct combinations of create_ts, eff_dte, and exists_flg. I am sure the index scan is not going to help out much, as you said.
    The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables (the other table has the same design as below), the temp tablespace grows even bigger.
    Both the production and test environments are 3-node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5
    Thanks,
    Vijay
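    As a further experiment (a sketch only, not tested against this data), the rn = 1 filter in Dom's query can also be expressed with Oracle's MAX ... KEEP (DENSE_RANK LAST) aggregates. This replaces the window sort with a GROUP BY aggregation, which sometimes changes the work-area and temp-space profile:

    SELECT au_id_k, pgm_struct_id_k
    FROM  (SELECT au_id_k,
                  -- values taken from the row with the highest (eff_dte, create_ts)
                  MAX(pgm_struct_id_k) KEEP (DENSE_RANK LAST ORDER BY eff_dte, create_ts) AS pgm_struct_id_k,
                  MAX(exists_flg)      KEEP (DENSE_RANK LAST ORDER BY eff_dte, create_ts) AS exists_flg
           FROM   au_subtyp
           WHERE  create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
           AND    eff_dte  <= TO_DATE('2012-12-31', 'YYYY-MM-DD')
           GROUP  BY au_id_k)
    WHERE  exists_flg = 'Y';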

  • Issue while fetching the file through *.extension by the FTP sender file adapter

    Hello Experts,
    I am facing an issue while fetching data through the sender file adapter with a '*.extension' file name. I will illustrate the scenario below.
    It is a simple file-to-file inbound scenario. The file is picked up from a third-party system through an FTP sender channel and stored in a temp folder of PI through an NFS receiver file adapter.
    The problem occurs while picking up the file with the file name "*.exo" (where exo is the file extension).
    But while fetching the file with a particular name like "abcd_10032011*.exo" (the file name is a combination of abcd (always the same) and the current date), the file is picked up successfully.
    So in the former case the file is not picked up, but in the latter case it is.
    Can anyone please let me know what the issue might be?

    Hi Sunit,
    Connect from your PI system to the 3rd-party system (where the source files are placed):
    FTP <PartySystemHostName>
    e.g. FTP 10.2.3.456 (then enter username & password to log in)
    Go to the source directory:
    cd \<SourceDirectory>
    e.g. cd \donaldduck\directory\
    Execute a file list command:
    ls -la *.<extension>
    e.g. ls -la *.exo
    In this way you should be able to view all files with this extension (*.exo), which is the same action SAP XI performs to pick up the file.
    Then try to copy one of those files to your local PI system (to check whether there is a permissions issue):
    mget <filename>.exo
    e.g. mget File1_01012011.exo

  • Invoice hold workflow is not fetching the approver from AME

    Hi,
    I'm trying to get the next approver (3rd level) in the WF process from AME through a profile option, but it's not fetching the approver.
    My query is:
    SELECT 'person_id:'||employee_id
    FROM fnd_user
    WHERE employee_id = fnd_profile.VALUE('MG_AP09_PAYABLES_SUPERVISOR')
    I am getting the other two approvers (level 1 and level 2), not through a profile but through a direct join of tables, as given below:
    SELECT 'person_id:'|| rcv.EMPLOYEE_ID
    FROM ap_holds_all aph
    ,po_distributions_all pd
    ,rcv_transactions rcv
    WHERE pd.line_location_id = aph.line_location_id
    AND pd.PO_DISTRIBUTION_ID= rcv.PO_DISTRIBUTION_ID
    AND aph.hold_id = :transactionId
    AND transaction_type = 'DELIVER'
    SELECT 'person_id:'|| HR2.attribute2
    from ap_holds_all AH
    ,po_line_locations_all PLL
    ,hr_locations_all HR1
    ,hr_locations_all HR2
    where pll.line_location_id = AH.line_location_id
    AND pll.ship_to_location_id = HR1.location_id
    AND nvl(HR1.attribute1,HR1.location_id) = HR2.location_id
    AND AH.hold_id = :transactionId
    what may be the issue?
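    One quick check worth doing here (a sketch; the profile name is taken from the post): run the profile lookup on its own to see whether it returns an employee_id at all in your session context.

    SELECT fnd_profile.VALUE('MG_AP09_PAYABLES_SUPERVISOR') AS supervisor_emp_id
    FROM   dual;
    -- if this returns NULL, the profile is not set at a level
    -- (site/application/responsibility/user) visible to the workflow session,
    -- so the AME query can never find the 3rd-level approver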

    Hi Surjith,
    Please look at the code I have written in the user exit, which is just for testing purposes. In SPRO I set the workflow as 9 for all the release codes.
    IF i_eban-werks = '1000'.        "only for plant 1000
      actor_tab-otype = 'US'.        "agent type: user
      actor_tab-objid = 'S_RITESH'.  "hard-coded approver user ID (test only)
      APPEND actor_tab.
      CLEAR actor_tab.
    ENDIF.
    In the PR I am getting the user name in the processor column correctly.
    Please let me know if I am going wrong anywhere.
    Thank you.

  • GarageBand: I am using an MBOX with a Digidesign audio core as a mic interface to record into GarageBand. When I change the preferences to use the MBOX, I cannot move the recorder slider and there is no sound going into GarageBand. Any solutions?


    10.5.8 with the latest GarageBand 5.1 and an MBOX 1. On Nov 4, '09 I did a software update; now I get no output or input volume on the computer itself. I get sound through headphones from my MBOX, but not on the computer. Before the software update it worked great. I've tried upgrading the software through digidesign.com, but they don't seem to have one for what I need. Is there any way to get my old GarageBand back, i.e. go from 5.1 to an earlier version? Or maybe it was the 10.5.8 update? Can I go back in time, lol?
    Any help would be appreciated
    BTW
    I've tried all the system sound prefs and GarageBand drivers, back and forth.
    cheers
    ds

  • Report is not fetching the data from aggregates

    Hi All,
    I am facing a problem with aggregates.
    For example, when I run a report using Tcode RSRT2, the BW report is not fetching the data from the aggregates; instead of going to the aggregate, it scans the whole cube's data.
    FYI, I checked that the characteristics match the aggregates exactly.
    It also gives the following message:
    Characteristic 0G_CWWPTY is compressed but is not in the aggregate/query
    Can somebody explain this error message to me? Please let me know the solution ASAP.
    Thankyou in advance.
    With regards,
    Hari

    Hi
    Deactivate the aggregates, rebuild the indexes, and then activate the aggregates again.
    GTR

  • Can we split and fetch the records in Database Adapter

    Hi,
    I designed a database adapter to fetch records from an Oracle database. Sometimes the database adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process chokes and I get the error:
    java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    Could someone help me resolve this?
    In the database adapter, can we split and fetch the records if the number of records is more than 1,000?
    E.g., the first 100 records as one set, the next 100 as a second set, and so on.
    Thank you.

    You can send the records in batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.

  • Fetch the records from cache

    Say I have an emp table:
    eno ename sales
    1 david 1100
    2 lara 200
    3 james 1000
    1 david 1200
    2 lara 5400
    4 white 890
    3 james 7500
    1 david 1313
    eno can be duplicated.
    When I give empno 1, I want to display his sales, i.e. 1100, 1200, 1313.
    The first time, I will go to the database and fetch the records,
    but from then on I don't go to the database; I fetch the records from the cache.
    I thought of doing it using a HashMap or Hashtable, but neither of those allows duplicate keys (empno has duplicate values).
    How do I solve this problem?

    Hi,
    Have you ever considered splitting that table up? You are thinking about caching; that's a
    very good idea. But doesn't it make it very evident that the table structure you have
    keeps a lot of redundant data? In particular, it hardly makes sense to have sales
    figures in an emp table. Instead you can have an Emp table containing eno and
    ename, with eno as the primary key, and another table called Sales with eno
    and sales columns, where eno references the Emp table.
    If you still want to continue with this structure, then I think you can go ahead with
    the solution already suggested to you.
    Aviroop
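
    A minimal sketch of the split suggested above (names are illustrative):

    CREATE TABLE emp (
      eno   NUMBER PRIMARY KEY,
      ename VARCHAR2(30)
    );
    CREATE TABLE sales (
      eno   NUMBER REFERENCES emp (eno),  -- foreign key back to emp
      sales NUMBER
    );
    -- all sales figures for employee 1
    SELECT s.sales FROM sales s WHERE s.eno = 1;

    With this structure the cache key (eno) is unique in emp, so the duplicate-key problem with HashMap disappears.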

  • RSBBS jump query not fetching the document number in R/3

    Dear Gurus
    With the RSBBS transaction, a jump is defined to R/3 transaction FB03. When the query is executed and, via right-click, the goto option "display document (FB03)" is chosen, it fetches the document number on the DEVELOPMENT server. But when the query is transported to production, it does not fetch the document number.
    Kindly do the needful at the earliest.
    Regards,
    R.Satish

    Hi
    You said it is not fetching the document number. Is it failing and showing some error?
    Have all the prerequisite settings been done via Tcode SICF?
    Regards
    Nageswara

  • Could not find the record specified with data_record option

    Hi, we've got the same error in a couple of templates, but have no idea what the cause or solution could be. Does anyone know where I should look when I get "Could not find the record specified with data_record option" as the message?
    Thanks in advance!

    Hi Raf,
    That is what it normally is. I thought it might be related to a multi-record form, like in this blog: http://blogs.adobe.com/formfeed/2009/09/working_with_multiple_data_rec.html.
    That blog does not really go into it, but if your "record" is not directly under the <xfa:data> element, then you need a <record> element under config/data to say what the node is, something like <record>Worker</record>.
    Your error message sounded familiar, but I have not been able to reproduce it; then again, I've moved on a couple of versions of Designer.
    Is it possible for you to share your form and seed data?
    Regards
    Bruce

  • I cannot access the switch through the console (solved)

    Hello,
    I'm having a problem.
    I cannot access the switch through the console. The web interface is working properly.
    Model: SRW224G4
    Below are some pictures:
    the HyperTerminal settings
    the error
    Can anyone help me?
    Thank you, and excuse the bad English.

    Hello Rumenigue,
    It looks to me like you are using a console cable. The reason you usually see them the other way around is that with a console cable the RJ-45 end goes into the device (an Ethernet jack labeled "console"), whereas on this switch the console port is itself serial.
    Usually the serial end of the cable you have plugs into a USB-to-serial adapter (because most computers today don't have serial ports anymore), and that USB connection goes into your PC, creating the virtual COM port you need in HyperTerminal.
    So if you get a USB-to-serial adapter, you could plug it in from USB to the console port; or, if your computer has a serial port of its own, just connect a serial cable directly from the PC to the switch, then use HyperTerminal with the settings recommended above by Tom.
    Hope I have helped,
    Christopher Ebert
    Network Support Engineer - Cisco Small Business Support Center

  • HT5012 Could not get 4G through an iPhone 5s while my SIM supports it in other devices?!


    Hamed.ghabshi wrote:
    Using the phone in Oman, and my other number works fine with it on 4G!
    The phone number and SIM are irrelevant to whether you get 4G or not.
    The phone hardware is what is relevant.
    Where is this iPhone originally from?

Maybe you are looking for

  • Acrobat 9 Pro - Converting PDF to JPEG

    I had to reinstall my Acrobat 9 Pro after my hard drive had to be reformatted. When it was installed previously on my old hard drive, I never had issues with converting a PDF file to a JPEG. Since the new installation, that option does not work anymore.

  • New Install on Server - What Version Do I Need to Buy?

    Hi guys/gals - quick, easy question. I've worked with CF for around 3 years now, but am about to face my first employment requiring me to purchase/install it on a server. Here's my question: if I've got a single Windows server that hosts 2 to 5 web

  • XML or Excel for data importing

    I'm a rookie in InDesign and I must create a catalogue of products. What are the basic differences between importing files from Excel and XML?

  • Fonts in iCal

    Can I change fonts in iCal?

  • Cannot run javac

    I installed JDK 1.5 on my new computer. I can run Java programs from the command line without a problem, but I cannot run javac. When I try to run a javac command I get the following message: 'javac' is not recognized as an internal or external command