View/Modify Funds Disbursement Data Extract

In Oracle R12, the payment format tied to the bank account dictates which data extract is used and which XML template is applied to the data. My question is: how would I see the content of a data extract, and how would I add more fields to the extract? I tried using the XML Publisher responsibility but could not get an answer to my two questions. I would really appreciate it if someone could guide me in the right direction. Thanks.

Well, when you create a template in XML Publisher, you have to assign the Data Definition you want to use for that template. The Data Definition also sits in XML Publisher; click the Data Definitions tab there if you want to see it.
So the process, if you were creating something new, is this: 1) Create the Data Definition in XML Publisher (this has to exist first). 2) Create the template and assign the proper Data Definition in that template.
Does this make sense? Maybe you have an XML Publisher guru there you can talk to, or you can download the XML Publisher Administration and Developer's Guide for R12 and study up on it that way. It's pretty hard in a forum like this to give detailed information on something like this.
Hopefully this helps a bit.
John Dickey
Edited by: John Dickey, McCarthy on Mar 26, 2010 2:50 PM

Similar Messages

  • Table name and BAPI to view/modify APO resource capacity profile data

    Please advise which table stores the APO resource capacity profile data, and which BAPIs help to extract and modify the break duration and utilization rate for a resource.
    Steps to view APO resource capacity data
    1. Use transaction /SAPAPO/RES01 and enter as below
    Resource <Resource Name>
    Location XXXX
    Planning Version 000
    2. On the display screen, click the "Capacity Profile" button to view the
    start, end, break duration, and utilization % data.
    Appreciate your help!!

    Hi Kurt,
    You can use the tables/views below, in which resource and capacity profile related data is stored.
    Tables:
    /SAPAPO/RES_INTQ - Resource Capacity Intervals
    /SAPAPO/RES_INTV - Resource Capacity Intervals
    /SAPAPO/RESCAP - Capacity Profile: Resource Time Capacity
    /SAPAPO/RESDIM - Dimensions and Capacity of a Resource
    /SAPAPO/RESDOWN - Resource Downtime
    /SAPAPO/RESDOWNT - Resource Downtime
    /SAPAPO/RESKEY - Reset Key for Resource to UID
    Views:
    /SAPAPO/V_RESVER - Resource Capacity Variants
    /SAPAPO/VRESINTQ - Resource Interval of Available Capacity Quantity Model
    /SAPAPO/VRESINTR - Resource Interval of Available Capacity Rate Model
    BAPIs available to read:
    BAPI_RSSRVAPS_GETCAPAPRO - Read Resource Capacities Profile
    BAPI_RSSRVAPS_REQUESTCAPAPRO - Query: Selection of Capacities for Resource and Result Transfer
    BAPI_RSSRVAPS_GETLIST - Selection of Resources
    I hope this helps you get the data you want to read.
    Regards,
    Saurabh
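    As an aside, the read BAPIs above can also be called from outside SAP, for example with the pyrfc library. The sketch below is illustrative only: the connection parameters are placeholders, and the import parameter names passed to the BAPI are assumptions, so check the actual interface of BAPI_RSSRVAPS_GETCAPAPRO in transaction SE37 before using it. The small helper just shows the utilization calculation over an interval once the break duration is known.

```python
def utilization_pct(available_secs, break_secs):
    """Utilization % of an interval after subtracting break time."""
    if available_secs <= 0:
        return 0.0
    return round(100.0 * (available_secs - break_secs) / available_secs, 2)

def read_capacity_profile(conn_params, resource, location, version="000"):
    from pyrfc import Connection  # needs the SAP NW RFC SDK installed
    conn = Connection(**conn_params)  # placeholder connection parameters
    try:
        # Parameter names below are hypothetical -- verify the real
        # interface of this BAPI in transaction SE37.
        return conn.call(
            "BAPI_RSSRVAPS_GETCAPAPRO",
            RESOURCE_NAME=resource,
            LOCATION=location,
            PLANNING_VERSION=version,
        )
    finally:
        conn.close()
```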

  • Where are the Data Extract Views

    Hi,
    Can anyone tell me where I would find the Oracle data extract views? I created several data extract views and need to query the data in them.
    I thought they would be in the Study Account but could not find the views there.
    Kindest regards
    Dennis

    Sorry, I just found out why I did not see my views: there was an error in the mapping table, which caused an error when creating the view, so it did not show up.
    All my fault ...

  • How to add new fields to a data extract

    The following data extract program generates an output file, which is displayed using a Publisher template as a check-format report:
    Oracle Payments Funds Disbursement Payment Instruction Extract 1.0
    How would I add new fields to the generated file? In other words, what is the procedure for adding new fields to the data extract?
    Thanks

    Can anyone please advise how to customize the payment extraction program? We have a similar requirement to extract extra fields to format a payment file.

  • IPayment Data extract customization

    For a payment process we are using the data extract 'Oracle Payments Funds Disbursement Payment Instruction Extract 1.0' and a custom template.
    We need some data that is not available in this extract, so I think we have to customize it.
    I would like to know how this data extract calls the package and how to make changes to that package.

    Check Metalink; there's plenty of information on how to customize the extract programs to add fields not already included (in R12 the supported route is usually the IBY_FD_EXTRACT_EXT_PUB extensibility package).

  • BODS 3.1: SAP R/3 data extraction - what is the difference in the 2 dataflows?

    Hi.
    Can anyone advise what the difference is between the two data extraction flows for extracting data from SAP R/3?
    1) DF1 >> SAP R/3 (R/3 table - query transformation - .dat file) >> query transformation >> target
    This ABAP flow generates an ABAP program and a .dat file.
    We can also upload this program and run jobs with the "execute preloaded" option on the datastore.
    This works fine.
    2) We can also pull the SAP R/3 table directly.
    DF2 >> SAP R/3 table (this has a red arrow, like in OHD) >> query transformation >> target
    This also works fine, and we are able to see the data directly in Oracle.
    This can also be scheduled as a job.
    But I am unable to understand the purpose of the different types of data extraction flows:
    when to use which type of flow for data extraction, and the advantages/disadvantages of the two.
    What we are not understanding is this:
    if we can pull data from an R/3 table directly through a query transformation into the target table,
    why use the flow of creating an R/3 data flow,
    then do a query transformation again,
    and then populate the target database?
    There might be some practical reasons for using these two different types of flows, which I would like to understand. Can anyone advise, please?
    Many thanks
    indu
    Edited by: Indumathy Narayanan on Aug 22, 2011 3:25 PM

    Hi Jeff.
    Greetings. And many thanks for your response.
    Generally we pull the entire SAP R/3 table through a query transformation into Oracle. For that we use an R/3 data flow and the ABAP program, which we upload to the R/3 system so as to be able to use the "execute preloaded" option and run the jobs. Since we have no control over our R/3 servers, nor anyone for ABAP programming, we do not do anything at the SAP R/3 level.
    I was doing trial-and-error testing on our workflows for our new requirement. WF 1 has some 15 R/3 tables, and for each table we created a separate data flow. In some data flows, where the SAP tables had a lot of rows, I decided to pull the data directly, bypassing the ABAP flow, and the entire workflow and data extraction still ran OK.
    In fact, I created a new sample data flow and tested it using both direct download and "execute preloaded". I did not see any major difference in the time taken for data extraction, because either way we pull the entire table, then choose whatever we want to bring into Oracle through a view for our BO reporting, or aggregate it and bring the data in as a table for universe consumption.
    Actually, I was looking at other options to avoid the ABAP generation and the R/3 data flow because we are having problems in our dev and QA environments: delimiter errors, whereas production works fine. Production is an old setup of BODS 3.1; QA and dev are relatively new BODS environments, and they are the ones with the delimiter error.
    I did not understand how to resolve it as per this post: https://cw.sdn.sap.com/cw/ideas/2596
    Trying to resolve that problem, I ended up trying to pull the R/3 table directly, without the ABAP workflow, just by trial and error with every drag-and-drop option, because we urgently had to do a POC and deliver the data for the entire e-recruiting module of SAP.
    I don't know whether I can do this direct pulling of data for the new job I have created, which has 2 workflows with 15 data flows in each workflow, and push this job into production. I also don't know whether I can bypass the ABAP flow and pull R/3 data directly in all the data flows for any of our future SAP R/3 data extraction requirements. The technical difference between the 2 flows is not clear to us, and being new to the whole of ETL, I just wanted to know the pros and cons of this particular data extraction approach.
    As advised, I shall check the schedules for a week, and then we shall probably move it into production.
    Thanks again.
    Kind Regards
    Indu
    Edited by: Indumathy Narayanan on Aug 22, 2011 7:02 PM

  • Unable to view enhanced field in Data Source

    Hi All,
    We recently upgraded our R/3 system from 4.7 to ECC 6.0. Now when we enhance a data source, the field is visible in the extract structure, but we are unable to view it in the data source. We checked in RSA2: the enhanced field's attribute is set to 'Field in OLTP and BW Hidden by SAP'. We have written code to modify the ROOSFIELD table to make the enhanced field visible, but this is the problem with all the data sources: whenever we enhance one, we have to go through this process to make the enhanced field visible.
    Is it that we missed a patch during the ECC 6.0 upgrade? If anyone has faced a similar situation, please help us.

    Hi Ravi,
    I think you are missing a step here. As soon as you enhance your datasource with new fields, by default those two checkboxes are checked (Hidden and Field in User Exit Only). Go to RSA6, edit your datasource, and uncheck these boxes for the added fields so that you can use them. You do not need to write any code to change the field settings in the ROOSFIELD table. Do the step I mentioned above after enhancing and you should be good. Hope it helps.
    Thanks and Regards
    Subray Hegde

  • Steps for Data extraction from SAP r/3

    Dear all,
    I am new to SAP BW.
    I have done data extraction from Excel into the SAP BW system, like this:
    Create info objects > info area > catalog
    --> Character catalog
    --> Key catalog
    Create info source
    Upload data
    Create info cube
    I need similar steps for data extraction from SAP R/3:
    1. when the data is in Z-tables (using views/InfoSets/functions etc.)
    2. when the data is in standard SAP, using Business Content.
    Thanks and Regards,
    Gaurav Sood

    Hi,
    Check these links:
    Generic Extraction
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/84bf4d68-0601-0010-13b5-b062adbb3e33
    CO-PA
    http://help.sap.com/saphelp_46c/helpdata/en/7a/4c37ef4a0111d1894c0000e829fbbd/content.htm
    CO-PC
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/fb07ab90-0201-0010-c489-d527d39cc0c6
    iNVENTORY
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Extractions in BI
    https://www.sdn.sap.com/irj/sdn/wiki
    LO Extraction:
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    /people/sap.user72/blog/2005/02/14/logistic-cockpit--when-you-need-more--first-option-enhance-it
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    /people/sap.user72/blog/2005/04/19/logistic-cockpit-a-new-deal-overshadowed-by-the-old-fashioned-lis
    Remya

  • Invalid data status error during the data extraction

    Hi,
    While extracting capacity data from the SNP Capacity view to BW, I get an "invalid data status" error and the data extraction fails.
    When I debugged the bad requests of the ODS object, I found that for a certain product (which has both positive and negative input and output quantities), co-product manufacturing orders were created; but this product was not marked as a co-product, and functionally that is fine.
    How can I rectify the data extraction problem? Can you advise?
    Thanks,
    Dhanush

    Sir,
    In my company, some production orders have the status "errors in cost calculation", i.e. "CSER". How do we deal with these kinds of errors?

  • How to find all the Master data extract structures and Extractors

    I intend to create master data dimensions closely similar to SAP BI in a 3rd-party system. I would like to use SAP's standard extractors for populating the master data structures in SAP BI, and then use a proprietary technology to create similar structures in a 3rd-party database.
    Question: how do I get a complete list of all master data extract structures and the corresponding extractors?
    Example: in ECC, if I go to SE80, enter 'Package' and 'MDX', and press the 'display' spectacles, I get a list of structures and views under "dictionary objects" covering Material, Customer, Vendor, and Plant texts and attributes.
    How do I get the remainder of the master data extract structures, viz. Purchase Info Records, Address, Org Unit, etc.?
    Regards
    Sasanka

    Hi,
    Try the table ROOSOURCE and search for data sources whose names contain 'ATTR' (master data attributes) or 'TEXT' (texts).
    This will give you the list of all the master data sources in the system, and it contains columns which tell you the extract structure and function module used by each.
    Thanks
    Ajeet
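    If you want to script that lookup from outside the SAP GUI, the generic RFC_READ_TABLE function module can read ROOSOURCE remotely, e.g. via the pyrfc library. A hedged sketch, assuming the ROOSOURCE fields OLTPSOURCE (DataSource name) and EXTRACTOR; the connection parameters are placeholders you must fill in for your system, and field names should be verified there.

```python
def attr_and_text_sources(names):
    """Split DataSource names into attribute and text DataSources by
    the usual *_ATTR / *_TEXT naming convention."""
    attrs = [n for n in names if n.endswith("_ATTR")]
    texts = [n for n in names if n.endswith("_TEXT")]
    return attrs, texts

def read_roosource(conn_params):
    from pyrfc import Connection  # needs the SAP NW RFC SDK installed
    conn = Connection(**conn_params)  # placeholders: ashost, sysnr, client, user, passwd
    try:
        result = conn.call(
            "RFC_READ_TABLE",
            QUERY_TABLE="ROOSOURCE",
            DELIMITER="|",
            FIELDS=[{"FIELDNAME": "OLTPSOURCE"},
                    {"FIELDNAME": "EXTRACTOR"}],
            OPTIONS=[{"TEXT": "OLTPSOURCE LIKE '%ATTR' OR OLTPSOURCE LIKE '%TEXT'"}],
        )
        # each row comes back as one delimited string in the WA field
        return [row["WA"].split("|") for row in result["DATA"]]
    finally:
        conn.close()
```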

  • Data extraction from Oracle database

    Hello all,
    I have to extract data from legacy database tables. I need to apply a lot of conditions during data extraction, using SQL statements, to get only valid master data, transaction data, SAP date formats, etc. Unfortunately I don't have the luxury of accessing the legacy system's database tables to create table views and apply select statements.
    Is there any other way by which I can filter data on the source system side without getting into the legacy system, i.e. from the BW data source side?
    I am supposed to test both UD Connect and DB Connect to see which works out as the better way of data extraction, but my question should be the same for either interface.
    This is very urgent as we are in the design phase.
    Points will be rewarded immediately.
    Thanks
    Message was edited by:
            Shail

    Well, I and everyone else know that it can be done in BI.
    I apologize that I did not mention that in my question.
    I am looking for a very specific answer: is there any trick we can do on the source system side from BI, or anywhere we can insert SQL statements in the InfoPackage or data source?
    Thanks

  • Data Extraction is Stuck

    Hi. We have a process extracting data from an Oracle database via a JDBC connection. In general, the process takes more than 4 hours to extract 7-8 million rows of data into a flat file. For the first 2-3 hours, the entire process is totally idle, and the total row count was 32k. During that period of time, the temp tablespace is fine and the session is still active, but the CPU is idle. Does anyone have any clues as to the root cause of the data extraction being stuck?
    Thanks.

    >
    Hi. We have a process extracting data from an Oracle database via a JDBC connection. In general, the process takes more than 4 hours to extract 7-8 million rows of data into a flat file.
    >
    Then your process is doing something terribly wrong. But since you didn't post any information about what your process is doing, or the Java code that is doing it, there isn't any way to help you.
    This thread should be reposted in the JDBC forum
    https://forums.oracle.com/forums/category.jspa?categoryID=288
    When you post, provide your 4-digit Oracle version, full Java version, JDBC jar file name and version, and the Java code that is doing the extract.
    Make sure you post the code that shows the query being executed, the batch settings that are made, and how the data from the result set is being processed and written to the file.
    >
    For the first 2-3 hours, the entire process is totally idle, and the total row count was 32k. During that period of time, the temp tablespace is fine and the session is still active, but the CPU is idle. Does anyone have any clues as to the root cause of the data extraction being stuck?
    >
    Since 32k rows can be extracted and written in a matter of seconds, those metrics should convince you that you have a serious problem with your methodology.
    You need to do some basic troubleshooting to see if the problem is in the DB, the network, or writing the data to the file system.
    1. Run your query manually using a tool like SQL Developer or Toad to see how long it takes to return the first 50/500 rows of the result.
    2. Modify your app to quit after executing the query, to see how long it takes to return from the EXECUTE statement.
    3. Modify your app to do nothing at all with the result set (do NOT write it or access it at all) except iterate it using ResultSet.next(). How long does that take?
    4. Stop working with millions of rows until your app actually works properly. The current state indicates that you did not do any testing, or you would have known there was a problem before now.

  • Is content viewed during "private browsing" forensically extractable? e.g does it still write the content to the disk and can it be recovered?

    If a forensic investigator were to examine my disk after I browsed content under private browsing, could that content be recovered? In other words, does private browsing still write the content to the disk?
    There is no intention to use this for illegal purposes; I just wondered how private "private browsing" is.

    Private browsing basically prevents any data from being stored in your profile folder. [http://support.mozilla.com/en-US/kb/Private%20Browsing This] article provides more details.
    Please note that cookies '''are''' created during a private browsing session. However, they are deleted once the session ends or when you exit the Firefox application.
    Also note that other applications on your computer might be tracking which sites you visit (firewall, antivirus, etc.). They will still be able to collect your browsing-related data even if you have activated private browsing in Firefox.

  • Data Extract Design

    Hi All,
    I have a requirement where I have to extract data from various different tables (only particular columns).
    The requirements are the same for different databases, hence I thought to have a single generic approach and reuse the same code to perform the extract and create an ASCII file.
    Below is the typical scenario I want to achieve, hence I need your expert inputs to start off:
    a) Define the required columns. This should be configurable, i.e. columns can be added or removed in future.
    b) Extract the column names from the database for those defined in step a) above.
    c) Extract the data from the relevant tables/columns for various conditions, based on steps a) and b) above.
    d) Create an ASCII file for all the data extracted.
    Please point out if anything is wrong with this, or suggest the best approach.
    Regs,
    R

    user10177353 wrote:
    I'm unsure if there is anything wrong or please suggest the best approach.
    The first thing to bear in mind is that developing a generic, dynamic solution is considerably more complicated than writing a set of extract statements. So you need to be sure that the effort you're about to expend will save you more time than writing a script and copying/editing it for subsequent re-use.
    You'll probably need three tables:
    1. Extracts - one record per extract definition (perhaps including info such as the target file name)
    2. Extract tables - the tables for each extract
    3. Extract columns - one record per extracted column
    I'm writing this as though you'll be extracting more than one table per run.
    The writing to file is the trickiest bit, so choose a good separator. Remember that although we call them CSV files, commas actually make a remarkably poor choice of separator, as way too much data contains them. Go for something really unlikely, ideally a multi-character separator like ||¬.
    Also remember that text files only take strings, so you need to convert your data to text. Use the data dictionary ALL_TAB_COLUMNS view to get the metadata for the extracted columns, and apply explicit masks to date and numeric columns. You may want to allow date columns to have masks which include or exclude the time element.
    Consider what you want to do with complex data types (LOBs, UDTs, etc.).
    Finally, you need to address the problem of the extract file's location. PL/SQL has a lot of utilities for wrangling files, but they only work on the server side, so if you want to write to a local drive you'll need to use SPOOL.
    One last thought: how will you import the data? It would probably be a good idea to use this mechanism to generate DDL for a matching external table.
    Cheers, APC
    Edited by: APC on May 4, 2012 1:08 PM
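    The table-driven approach APC describes can be sketched in a few lines. The Python below builds one SELECT per extract table from column metadata of the kind you would read from ALL_TAB_COLUMNS, applying explicit TO_CHAR masks to dates and numbers plus the multi-character separator. The table and column names in the example are hypothetical.

```python
# Sketch of the dynamic-extract SELECT builder (hypothetical names).
# columns: list of (column_name, data_type) pairs, as you would fetch
# them from the ALL_TAB_COLUMNS data dictionary view.

def build_extract_select(table, columns, separator="||¬"):
    exprs = []
    for name, dtype in columns:
        if dtype == "DATE":
            # explicit date mask, including the time element
            exprs.append(f"TO_CHAR({name}, 'YYYY-MM-DD HH24:MI:SS')")
        elif dtype == "NUMBER":
            # explicit numeric mask so the conversion is deterministic
            exprs.append(f"TO_CHAR({name}, 'FM999999999990.0999999999')")
        else:
            exprs.append(name)
    # concatenate every column into one delimited output column
    line = f" || '{separator}' || ".join(exprs)
    return f"SELECT {line} FROM {table}"

sql = build_extract_select(
    "invoices",  # hypothetical table
    [("invoice_id", "NUMBER"),
     ("invoice_date", "DATE"),
     ("status", "VARCHAR2")],
)
```

    Each row of the query result is then one line of the extract file, already converted to text with deterministic formats.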

  • Re: MSS Portal role for Non-Employee to View/Edit all Employees' data

    Hi All,
    Is there a role I can assign so that a manager who logs into the portal can view all the employees, not just the directly reporting employees?
    This user is outside SAP and should be able to see and modify all employees' data in the entire company, across different org units.
    Any hints on doing this would be appreciated.
    I understand that if I assign the MSS Manager role to a manager, he can view only the employees who report to him, not all employees.
    Thanks,
    Sarvan

    Hi Sarvan,
    it's just a question of customizing in the HCM backend. In the portal you can use the standard MSS role that comes with the Business Package (or a changed copy). But the person in question has to have a user in the backend and a corresponding position in the organizational structure.
    take a look at help.sap.com:
    http://help.sap.com/saphelp_erp2005/helpdata/en/29/d7844205625551e10000000a1550b0/frameset.htm
    see: "Object and Data Provider"
    regards
    Andreas
