Transfer Methods, data package

Hi,
I would like to know about
1) the differences between the PSA and IDoc transfer methods
2) the data package

Hi,
1. PSA and IDoc are two different data transfer methods in BW.
2. The PSA acts as a backup in case a data load fails.
3. With the PSA method the data is stored redundantly in the BW system for further use; this is not the case with the IDoc method.
4. The IDoc method uses two types of IDocs to transfer the data: Info IDocs and data IDocs.
5. For delta loading, PSA is the suggested method.
6. Updating via the IDoc method is faster.
7. Errors occur more often with the IDoc method.
Regards,
Vishwa

Similar Messages

  • ABAP Dataflow using FTP data transfer method

    Hi,
    I am facing this error while executing the ABAP dataflow; the datastore configuration has FTP as the data transfer method.
    I wanted to know whether the ABAP program is generated successfully and the error occurs while executing it, or whether there is an issue with the generation of the ABAP program itself. Please help me understand the issue.
    Since I am unable to attach the doc, please see the error message below.
    Job name: Job_Module4_ECC_To_BW
    (12.2) 02-28-11 11:14:45 (E) (16396:17152) R3C-150607: |Data flow DF_Module4_ECC_To_BW
                                                           Execute ABAP program <C:/TEMP/ZECC_To_BW> error <    Open File Error -- 
    madap8004\tmp/SAPECC_To_BW_ZCUST_NAME.txt>.
    (12.2) 02-28-11 11:14:45 (E) (16772:15444) R3C-150607: |Data flow DF_Module4_ECC_To_BW
                                                           Execute ABAP program <C:/TEMP/ZECC_To_BW> error <    Open File Error -- 
    madap8004\tmp/SAPECC_To_BW_ZCUST_NAME.txt>.
    Regards,
    Sri

    Hi,
    I am not really clear about your question. This is the first time I am executing the ABAP interface with the FTP data transfer method. madap8004 is the FTP server name, and the path is where the file should be generated, I believe. Could the problem be with read/write permissions on the server?
    -Sri

  • To transfer data across systems, why are we not using the BAPI method?

    Hello All,
    To transfer data across systems, why are we not using the BAPI method? Why are we using only ALE/IDocs?

    Hi Naveen,
       Listing down those I know (a small IDoc-dispatch sketch follows after this reply) -
       1. Master data is expected to be changed/created less frequently.
       2. Transactional data is expected to be changed/created more frequently.
       3. Transactions to create/change/display master data do not have message control integration, since you are not supposed to print/fax/mail master data to your partners.
       4. Distribution of master data is generally supposed to happen within the enterprise boundary, and typically you will have one central server maintaining master data which is distributed to multiple downstream operational systems (something where MDM fits). So ideally, when distributing master data using ALE, you will be sending it to multiple systems.
       5. Distributing transactional data (PO/SO etc.) is the same as sending a fax/print, which fits with the requirement of message control.
    Hope this helps,
    Regards,
    Sanjeev
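    For illustration, here is a minimal ABAP sketch of handing master data over to ALE for asynchronous distribution as an IDoc, which is the mechanism the points above rely on. MASTER_IDOC_DISTRIBUTE is the standard ALE outbound function module; the message type MATMAS/MATMAS05 and the receiving logical system BWCLNT100 are only assumed example values.
    DATA: ls_control TYPE edidc,
          lt_comm    TYPE STANDARD TABLE OF edidc,
          lt_data    TYPE STANDARD TABLE OF edidd.
    " Control record: what is sent and to whom (example values)
    ls_control-mestyp = 'MATMAS'.      " message type
    ls_control-idoctp = 'MATMAS05'.    " basic IDoc type
    ls_control-rcvprt = 'LS'.          " receiver partner type: logical system
    ls_control-rcvprn = 'BWCLNT100'.   " hypothetical receiving logical system
    " ... fill lt_data with data records (segments such as E1MARAM) ...
    CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
      EXPORTING
        master_idoc_control            = ls_control
      TABLES
        communication_idoc_control     = lt_comm
        master_idoc_data               = lt_data
      EXCEPTIONS
        error_in_idoc_control          = 1
        error_writing_idoc_status      = 2
        error_in_idoc_data             = 3
        sending_logical_system_unknown = 4
        OTHERS                         = 5.
    IF sy-subrc = 0.
      COMMIT WORK.   " the IDoc is handed to ALE and dispatched asynchronously
    ENDIF.
    A BAPI call, by contrast, is a direct synchronous function call into one receiving system; it does not give you the partner determination, buffering and monitoring that ALE provides for master data distribution.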

  • Two Data transfer methods

    Hi Gurus
    Can anybody tell me, if I am asked: what are the two data transfer methods? Which is the preferred method and why?
    thanks in advance
    murali

    Hi,
    The two data transfer methods are PSA and IDoc. With PSA, the number of fields is limited to a maximum of 255 when using tRFC to transfer data, and the length of a data record is limited to 1962 bytes.
    Refer to this help link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/724d3caa70ea6fe10000000a114084/frameset.htm
    Regards,
    R.Ravi

  • Data Transfer methods

    Hi all,
    Can anyone please explain to me what the synchronous and asynchronous data transfer methods are?
    Where do the settings have to be made in R/3 to get data to BI?
    Thanks in advance

    Data Transfer
    Definition
    Synchronous Data Transfer
    Data is transferred directly (simultaneously, synchronously) from program to program via CPI-C communication.
    Both communications partners must be available at the same time. Because the central SAP system in a typical SAP installation does not run round the clock, but the linked systems are frequently in operation 24 hours a day, asynchronous data transfer is necessary.
    Synchronous data transfer has the following disadvantages:
    Transfer is not possible if the partner system or the line is not available.
    A data backlog in the receiver system causes a data backlog in the sender system. Processing in the application is delayed.
    If a connection is broken, it may be necessary to perform a recovery in both systems.
    Asynchronous Data Transfer
    With asynchronous, or buffered, data transfer, data is temporarily stored in a sequential queue.
    Asynchronous transfer has the following advantages:
    Wait times in the sender system are avoided.
    A recovery is automatically performed in the sender system.
    Transfer need not be performed during online time. This avoids placing unnecessary load on the system and thus helps to reduce costs.
    The Queue Application Programming Interface (Q-API) is an SAP interface for asynchronous data transfer.
    Data is buffered sequentially and processed immediately or later by an asynchronously running process.
    One possible processing method is to send data to an external partner system via CPI-C. Data units that belong together can be stored in accordance with the transaction and sent to a communications partner.
    Data is buffered in queues before it is transferred to the target system.
    In an R/3 System, the queues are stored in a relational database. All R/3 database systems are supported.
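    To make the difference concrete in ABAP terms: a synchronous call blocks until the partner answers, while an asynchronous (buffered) call is only recorded locally and shipped by the system later. The sketch below uses RFC rather than the raw CPI-C/Q-API layer described above, and the function module Z_TRANSFER_DATA, its parameter IT_DATA, the line type ZTRANSFER_LINE and the destination BWCLNT100 are made-up example names.
    DATA lt_data TYPE STANDARD TABLE OF ztransfer_line.   " hypothetical line type
    " Synchronous: the caller waits; if the partner system is down, the call fails now.
    CALL FUNCTION 'Z_TRANSFER_DATA'
      DESTINATION 'BWCLNT100'
      EXPORTING
        it_data               = lt_data
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        OTHERS                = 3.
    " Asynchronous (buffered, tRFC): the call is stored locally and only
    " dispatched to the partner system on COMMIT WORK, so the sender never
    " waits for the receiver and nothing is lost if the line is down.
    CALL FUNCTION 'Z_TRANSFER_DATA'
      IN BACKGROUND TASK
      DESTINATION 'BWCLNT100'
      EXPORTING
        it_data = lt_data.
    COMMIT WORK.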

  • Hi Data transfer methods for ABAP Dataflow

    Hi,
    Good day!
    Can anyone explain the data transfer methods in an ABAP dataflow, describe each one, and say which is the best, ideally with an example?
    Regards
    Chandu

    (The reply repeated the same SAP help excerpt on synchronous and asynchronous data transfer that is quoted in the "Data Transfer methods" thread above.)

  • Data Transfer Process (several data packages due to a huge amount of data)

    Hi,
    a)
    I've been uploading data from ERP via PSA, ODS and InfoCube.
    Due to the huge amount of data in ERP, BI splits the data into two data packages.
    When processing the data into the ODS, the system deletes a few data records.
    This is not done in the step "Filter" but in "Transformation".
    General question: how can this be?
    b)
    As described in a), the data is split by BI into two data packages due to the amount of data.
    To avoid this behaviour I entered a few more selection criteria in the InfoPackage.
    As a result I upload the data several times, each time with different selection criteria in the InfoPackage.
    Finally I have the same data in the ODS as in a), but this time without data being deleted in the step "Transformation".
    Question: what is the general behaviour of BI when splitting data into several data packages?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusions from your answers are the following.
    a) Since the ODS is a standard ODS, no data records are deleted within the transformation; they are aggregated.
    b) Uploading a huge number of data records is possible in two ways:
       b1) with selection criteria in the InfoPackage and several uploads
       b2) without selection criteria in the InfoPackage, and therefore an automatic split of the data into data packages
    c) both ways should have the same result within the ODS
    OK, thanks for that.
    So far I have only checked the data within the PSA. In the PSA the number of records is not equal for variants b1 and b2.
    I guess this is normal technical behaviour of BI.
    I am fine as long as the results in the ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten

  • Transfer Methods...PSA & IDoc

    Hi Friends,
    Could you share your thoughts: when doing a data load, at the transfer structure level we can see the transfer method as PSA or IDoc.
    When would we prefer PSA, and when would we go for IDoc?
    Can you give any business scenarios?
    Siri

    Hi,
    Say you want to load master data texts: if it is critical data we can use IDocs, because the data cannot be modified there. If the load fails while using IDocs, we need to load the data from the source system again.
    Generally the IDoc update is used to load master data hierarchies.
    Some of the other differences are:
    PSA
    Data is staged in the PSA. From here we can update the data either automatically or manually into the corresponding data targets.
    In the case of PSA the length of a data record can be up to 1962 bytes, whereas with IDoc it is 1000 bytes at most.
    With PSA you have the advantages of performance, large data packages and error handling.
    Data transfer with IDoc
    The data is packed into IDocs by the source system and sent to the Business Information Warehouse. In BW the data is saved persistently in the IDoc store; from there it can be updated into the data targets manually or automatically.
    It is used for hierarchy loads. The limitation with IDoc is that the transfer structure must not have more than 1000 bytes in character format.
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/724d3caa70ea6fe10000000a114084/frameset.htm

  • Issue in transfer of data from ECC to APO

    Hi All,
    I have a requirement to transfer data from ECC to APO. I am using EXIT_SAPLCMAT_001 for this purpose. The problem is, I need to transfer the data of a field that is not present in CIF_MATLOC but is present in /SAPAPO/MATLOC.
    How should I proceed? Please help; this is an urgent requirement.
    Thanks & Regards,
    SriLalitha

    Hi,
    You may want to go to transaction /SAPAPO/SNP_SFT_PROF.
    Determine Forecast of Replenishment Lead Time
    Use: In this field, you specify how the extended safety stock planning determines the forecast of the replenishment lead time (RLT). The following values are available:
    Supply Chain - The system determines the RLT forecast using the supply chain structure by adding the corresponding production, transportation, goods receipt, and goods issue times. If there are alternative procurement options, the system always takes the longest option into account.
    Master Data - The system determines the RLT forecast from the location product master data.
    Master Data/Supply Chain - First, the system determines the RLT forecast from the location product master data. If no RLT forecast can be determined, the system determines the forecast using the supply chain structure (as described under Supply Chain).
    Dependencies: You can retrieve the replenishment lead time forecast yourself by using the GET_LEADTIME method of the Business Add-In (BAdI) /SAPAPO/SNP_ADV_SFT.
    Replenishment Lead Time in Calendar Days
    Number of calendar days needed to obtain the product, including its components, through in-house production or external procurement.
    Use: The replenishment lead time (RLT) is used in the enhanced methods of safety stock planning in Supply Network Planning (SNP). The goal of safety stock planning is to comply with the specified service level, in order to be prepared for unforeseen demand that may arise during the replenishment lead time. The longer the RLT, the higher the planned safety stock level.
    Dependencies: The field is taken into account by the system only if you have specified Master Data or Master Data/Supply Chain in the "RLT: Determine Forecast" field of the safety stock planning profile used.
    Hope this helps.
    The RLT from ECC is in MARC-WZEIT, which is transferred to APO in structure /SAPAPO/MATIO, field CHKHOR.
    Maybe if you maintain the setting in the profile, you will get the value in RELDT.
    Thanks.

  • Identify the last data package in start routine

    Hi Everyone
    We have a start routine in a transformation. We need to do some special processing in the start routine only when the last data package is being processed. How can we determine in the start routine whether the current package is the last one or not? Any pointers in this direction are appreciated.

    Hi,
    You can get the package ID from DATAPACKID in the start routine and the end routine. I'm not sure how to identify the last package ID; alternatively, you can store this package ID somewhere else and read the same value in the end routine, if your logic (processing) permits doing it in the end routine instead of the start routine.
    METHODS start_routine
      IMPORTING
        request        TYPE rsrequest
        datapackid     TYPE rsdatapid
      EXPORTING
        monitor        TYPE rstr_ty_t_monitors
      CHANGING
        source_package TYPE tyt_sc_1
      RAISING
        cx_rsrout_abort.
    Hope it helps.
    Regards,
    Raju
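    As a rough sketch of the suggestion above: DATAPACKID identifies the current package, but the total number of packages is not passed into the routine, so the "is this the last package?" decision usually has to be deferred, for example to the end routine or a follow-on process step. Everything below except DATAPACKID and REQUEST is a made-up example name.
    " Global declaration part of the transformation routine class:
    CLASS-DATA gv_last_packid TYPE rsdatapid.
    " Inside METHOD start_routine (or end_routine):
    " remember the highest package id seen so far.
    IF datapackid > gv_last_packid.
      gv_last_packid = datapackid.
    ENDIF.
    " Caveat: if the DTP processes packages in parallel work processes, a class
    " attribute is not shared between them; in that case the package id would
    " have to be stored in a custom table keyed by the request (the REQUEST
    " importing parameter) instead.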

  • Problems with the O2 blackberry data package on my Curve 3G.

    I have already informed O2 about this but they claim that I should use the BlackBerry support services, and nothing there helps me!
    I got my BlackBerry Curve 3G on September 9th this year and I added the BlackBerry Data Package bolt-on to my phone on September 16th. I then received a text to say they had taken £5 from my credit and that it would be up and running within the next 24 hours. It's now September 19th and my BBM is not working at all, and I am extremely upset with the service and behaviour I have received from both O2 and BlackBerry.
    Is there any way you can help? If this fails, I shall be forced to go back to the shop from where I got my Blackberry from and ask for their help.
    Many thanks, Jade.


  • The "infallible" backup system crashes and burns; how to transfer the data?

    After five years of exemplary, error-free service, my old Powerbook G4 1Ghz finally crashed during a backup to an external hard drive. I just bought a new Macbook Pro (arrived September 16th) and want to transfer the data from the old Pro to the new Pro. Herein lies the complex problem, and Migration Assistant and a firewire cable won't solve it.
    When the old Pro died, I was maintaining what I believed to be an infallible backup system (ha). I did nightly backups to two different external hard drives; one was a USB LACIE drive, the other a firewire LACIE drive. I used Carbon Copy Cloner to do a merge contents backup on the USB drive and a file-level bootable clone to the firewire drive. Surely nothing could go wrong. Oh, one only needs to think of the Titanic and say, “Something did.”
    The old Pro could only backup to the USB drive when using an express card that added USB 2.0 connections (the old Pros have USB 1.1 and they’re too slow). But whenever the express card was plugged into my old Pro and it fell asleep without my first powering the card off, the computer crashed. This was no problem; I just powered off the card before putting it to sleep and ignored the fact that the computer on rare occasion acted buggy with the USB express card plugged in. Well, on the fateful night in question, the computer did its backup to the USB drive and then fell asleep while it was still plugged in (this is what its energy saver told it to do, unfortunately). When I tried to turn the computer back on, a black box appeared in the upper left hand corner with a bunch of random-seeming white text and the concluding message, “Waiting for remote debugger.” The computer was frozen and a cold reboot was the only option.
    My old Pro would never be the same again.
    I tried to boot it up. Sometimes it did so, but then would suddenly show a red screen and crash. It kept crashing, sometimes just powering off immediately. So I brought out good old Disk Warrior, booted up with that, repaired permissions, repaired disk, and rebuilt the directory. I also booted up with the Tiger OS X install CD and fixed permissions that way just for good measure. The computer then started up fine. I thought I was back to normal! Celebration time! Not. The USB drive that I had backed up to would not mount anymore. So I, seeing that the computer appeared to be back to normal, just plugged in second external hard drive--the firewire--and began the bootable clone. It delved into making the clone without a hitch and I went off to work happy and content that I would have an up-to-date backup soon.
    Lo and behold, when I came back from work, the backup to the firewire drive hung midway through. It had done nothing but wiped the entire drive and copied about 500 mb of data to it before freezing. I tried again several times and, believe it or not, only got vastly improved results when the offending express card was actually plugged into the computer. By this mysterious and probably risky method, I was able to get some data off. But only about 1/10 of the whole. The old Pro always would crash before more than 5 gigs of data or so can be transferred.
    Starting from around that time, the offending old Pro began to exhibit hideous popping, gurgling, grating, clicking, swiping, and other sounds from the left hand side of the computer—where I imagine the hard drive is. These sounds can be quite loud and sometimes even sound like tempered UFO tones beamed down to the computer. Sometimes they aren’t present at all and the computer acts fine, until a certain file or folder is accessed, and then they start again and the computer hangs, going from fine to disastrous in an instant.
    I took out the offending express card and launched it away and took the USB drive and put it on another computer. It still would not mount. I finally got it to unlock (it’s a fingerprint-access safe drive) and ran Disk Warrior on it and was able to get as much data off it as I could. But it appears that a huge chunk of data was simply missing from the drive, whether from Disk Warrior having to delete data when it rebuilt the directory or from damage when the computer died while backing up to this drive, I don’t know.
    So now I went from a supposedly infallible backup system of two rotating external hard drives to potentially losing 5 years of utterly critical work!
    I tried starting up the old Pro in target disk mode with my new Pro. At first, this appeared to have very promising results because there was no grating or popping sounds from my old Pro when I did this. I began a data transfer with glee. But no luck; the old Pro always eventually hangs and the new Pro doesn’t even recognize it (the icon for the old Pro disappears as if I unmounted it).
    Sometimes the Old Pro can still boot up and seem to act fine; it can sit like that indefinitely, I'd guess; but if the disk is actually read and written to enough, it always fails. I am using it as little as possible now (i.e. not at all) so as not to exacerbate the problem any more than it already has been.
    Two quick notes that might further explain the situation: two weeks before the crash happened, I used iDefrag to defragment the drive. The night before the crash happened, I did a lot of work on my computer while on a bumpy trip on the Long Island Railroad. The computer acted fine throughout all this, but perhaps the bumpiness or the defragmentation contributed to this problem.
    I must get the data off. This dead/dying hard drive doesn’t just contain mere photos and music libraries and nifty applications. I’m talking completed and incomplete novel manuscripts, lecture notes, lesson plans, published and to-be-published articles, a massive thesis, and much more totaling literally thousands upon thousands of pages and representing more than 5 years of graduate and professional research and work. For me, the stakes are critically high. Thank you for reading this Hero's Journey of a post.

    Good news!
    Imagine a Venn diagram with four overlapping circles. The circles are:
    1. Catastrophically exploded hard drive
    2. Catastrophically annihilated external hard drive used for incremental backups
    3. Catastrophically wiped-clean external hard drive used for bootable clones
    4. Email, CD, and iPod backups
    With a little shake, rattle and roll and other craziness you do with a hard drive with failed bearings, I got #1 working briefly, thank goodness.
    With Disk Warrior and figuring out how to surpass a fingerprint-activated security feature, I got #2 working in a limited fashion.
    With some workarounds, I got some use out of #3.
    #4 just is.
    The overlap of these 4 things produced about 95-99% of my data, including all critical items. It was a disorganized mess I just spent two days organizing, but I am back up and running without having to use expensive data recovery services. I am slightly bothered by that small bit of missing data, but from what I can tell, it amounts to nothing more than a brief sketch of a lesson plan, a song or two from the iTunes store, and maybe a handful of attachments. Nothing worth spending big bucks recovering, in other words.
    Now I will substitute my "infallible" backup system with a new one:
    1. Time capsule/time machine
    2. Complete online backups with Mozy
    3. Occasional bootable clones to the external hard drive I have that didn't get damaged.
    I think this will be the ticket!
    Thank you all.

  • Transfer ownership data from one scenario to another in HFM

    I'm developing HFM Rules for a company that needs to transfer data from one scenario to another. This is so that they can perform What-if analysis on the other scenario. I can transfer the data correctly using the following code:
    ' This is in Sub Calculate()
    ' Check if there is a source scenario (held in the scenario's UD1) and transfer all data from it if there is
    If HS.Entity.IsBase("", "") And HS.Value.Member = "<Entity Currency>" Then
        If Trim(HS.Scenario.UD1("")) <> "" Then
            HS.Clear "A#ALL"
            HS.Exp "A#ALL = S#" & Trim(HS.Scenario.UD1(""))
        End If
    End If
    ' Check if the current scenario is the source for any destination scenario and set the impact status if so
    ScenList = HS.Scenario.List("", "[BASE]")
    For CurrScen = LBound(ScenList) To UBound(ScenList)
        If Trim(HS.Scenario.UD1(ScenList(CurrScen))) = HS.Scenario.Member Then
            HS.ImpactStatus "S#" & ScenList(CurrScen)
        End If
    Next
    If I extend this code to check HS.Value.Member = "[None]" and run a calculate on the [None] value layer, this also copies ownership methods and percentages for the relevant entity. Unfortunately, going to each parent entity and running a calculate for each month is not going to be practical, since there are more than 500 entities in this group.
    What I want to do is to actually copy ownership methods and percentages automatically if I run a consol. Is there a way to do either of the following:
    1. Set a value for a POV that is in a different value layer?
    2. Trigger a calculate for a different value layer in the rules?

    Try posting this to the HFM forum

  • I want to transfer my data to a new computer and use the new computer with my iPhone 5. Can anyone recommend the best way for me?

    I have some problems transferring my data via Home Sharing (maybe my internet connection is poor).
    I tried to transfer all the apps and music via Home Sharing. When it was done, it showed that the internet connection had timed out. (So sad.)
    So I would like some help with methods for transferring data from the old computer to the new one.
    Then I can use the new one with my new iPhone 5.
    PS: I use an iPhone 3GS with iOS 4.3.5.
    The old computer runs Microsoft Windows XP.
    The new one runs Microsoft Windows 7.
    Thank you

    You can't recover an encrypted iOS backup password, for security reasons, so that backup is unusable. Sorry.
    If you still have your old iPhone, make a new backup.

  • Approaches to transfer changed data from one table to another

    Hello expert,
    Will you please show me the approaches to automatically detect and transfer changed data from one table to another? I don't need detailed information, but a short comment on each approach would be very welcome.
    Many Thanks,

    You will want to review information on replication and similar methods.
    In a nutshell:
    Replication allows data to exist at more than one site, taking the master/original table data and cloning it at a different site.
    As transporting all of the table data to a remote/clone site got more and more expensive, a method was developed
    to identify just the "diffs" and send only those from the master to the clone, which needs only the diffs to become a clone of the master table.
    This has many methods and options, but looking under keywords such as
    - distributed transactions
    - materialized views
    - replication
    ... are a few categories that will get you started (a minimal sketch of the "diffs only" idea follows below).
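    Purely as an illustration of the "send only the diffs" idea (the thread itself is about Oracle replication features, which do this tracking for you), here is a minimal timestamp-based sketch in ABAP, the language used elsewhere in this collection. The tables ZSRC and ZTGT, the field CHANGED_ON and the control table ZSYNC_CONTROL are all hypothetical and are assumed to share the same structure where relevant.
    " Detect records changed since the last run and copy them to the target.
    DATA: lt_delta     TYPE STANDARD TABLE OF ztgt,
          lv_last_sync TYPE timestamp.
    " High-water mark of the previous synchronisation run.
    SELECT SINGLE last_sync FROM zsync_control INTO lv_last_sync
      WHERE sync_id = 'ZSRC_TO_ZTGT'.
    " Pick up only the records changed since then ...
    SELECT * FROM zsrc
      INTO CORRESPONDING FIELDS OF TABLE lt_delta
      WHERE changed_on > lv_last_sync.
    " ... merge them into the target table ...
    MODIFY ztgt FROM TABLE lt_delta.
    " ... and remember the new high-water mark.
    GET TIME STAMP FIELD lv_last_sync.
    UPDATE zsync_control SET last_sync = lv_last_sync
      WHERE sync_id = 'ZSRC_TO_ZTGT'.
    COMMIT WORK.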
