Data transfer techniques

Hi,
We have so many data transfer techniques, such as BDC, LSMW, BAPI, and IDoc.
What is each technology for? I mean, what is the significance of each one?

Hi Sandeep,
BDC is the older technique used for data transfer. SAP has replaced most BDC programs with BAPI function modules in newer versions and no longer encourages using BDC; for almost all transactions there is a readily available BAPI. Both do the same thing,
i.e. update the database. BDC and BAPI can be used both from within SAP and from a non-SAP system to SAP.
e.g.: If we are changing existing order information, it stays within SAP: we read the order details from the SAP database and change them using BDC or a BAPI.
If we are creating new orders by uploading data from flat files, then it is non-SAP to SAP.
Another important point is that all BAPIs are RFC-enabled, so you can call a BAPI from another system as well (SAP or non-SAP).
LSMW is generally used for the initial migration of legacy data from a non-SAP system to an SAP system.
IDoc is used to send and receive documents/data from SAP to SAP, from non-SAP to SAP, or from SAP to non-SAP. Reportedly it is also possible to exchange data between two non-SAP systems using IDocs.
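For illustration, here is a minimal, hedged sketch of the BAPI route: creating a sales order with BAPI_SALESORDER_CREATEFROMDAT2 and committing explicitly. The order type, organisational data, material and partner numbers below are placeholders, not values from this thread.

DATA: ls_header   TYPE bapisdhd1,
      lt_items    TYPE STANDARD TABLE OF bapisditm,
      ls_item     TYPE bapisditm,
      lt_partners TYPE STANDARD TABLE OF bapiparnr,
      ls_partner  TYPE bapiparnr,
      lt_return   TYPE STANDARD TABLE OF bapiret2,
      lv_vbeln    TYPE bapivbeln-vbeln.

" Header data (placeholder order type and organisational values)
ls_header-doc_type   = 'TA'.
ls_header-sales_org  = '1000'.
ls_header-distr_chan = '10'.
ls_header-division   = '00'.

" One item (placeholder material and quantity)
ls_item-itm_number = '000010'.
ls_item-material   = 'MAT-001'.
ls_item-target_qty = 1.
APPEND ls_item TO lt_items.

" Sold-to party (placeholder customer number)
ls_partner-partn_role = 'AG'.
ls_partner-partn_numb = '0000001000'.
APPEND ls_partner TO lt_partners.

CALL FUNCTION 'BAPI_SALESORDER_CREATEFROMDAT2'
  EXPORTING
    order_header_in = ls_header
  IMPORTING
    salesdocument   = lv_vbeln
  TABLES
    return          = lt_return
    order_items_in  = lt_items
    order_partners  = lt_partners.

" A BAPI does not commit on its own - commit explicitly and check lt_return for errors
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.

Because BAPIs are RFC-enabled, the same call could also be made from an external system, or from another SAP system by adding a DESTINATION.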
Thanks,
Vinod.

Similar Messages

  • Data uploading techniques//LSMW

    Dear Experts,
    As I am very new to SAP HCM, can anyone explain data uploading techniques from an HCM point of view?
    And what are BDC and PDC?
    Thanks in Advance
    Ram

    http://wiki.scn.sap.com/wiki/display/ABAP/Batch+Input+-+BDC
    SAP ECC - Plant Data Collection - Time, Attendance and Employee Expenditures (HR-PDC)

  • Data Migration techniques

    Hi Experts,
    I want to know about data migration techniques and how we can best use MDM while migrating from an old version of R/3 to a new version of R/3.
    I have implemented SAP MDM in cases where we had a number of SAP R/3 instances across different regions; we took data from each R/3 one by one and did data standardization, consolidation, harmonization and so on. I am not talking about all this.
    There was a good explanation from Markus Ganser about duplicate data and identical data; I know all of that. My question is: when I have only one SAP R/3 and I still want to implement an MDM solution while migrating my old R/3 instance to a new one, how can I proceed in this scenario? What is the data migration technique?
    I know the common answer will be to use MDM as middleware: take the master data from the old instance and, after consolidation, send it back to the new instance, while sending transactional data directly to the new version. But is this worth doing? Is there any other approach?
    If there is any document on this, or if anyone has an idea about data migration techniques while implementing an MDM solution, please send the documents to [email protected]
    In short, I am looking for the following 3 points while doing migration along with SAP MDM:
    Data migration techniques
    Prerequisites
    Methodology in this kind of scenario
    Step-by-step procedure
    cheers,
    R.n

    Hi,
    Here is a link to a complete data migration life cycle:
    http://www.redwoodsystems.co.uk/dataMWhitePaper.html#links
    I hope it is of some use to you.
    Thank you, and reward points if useful.

  • Periodic Data Transfer using LSMW

    Dear All,
    How do we do a periodic data transfer using LSMW?
    For this, is it necessary to use a file from the application server only (and not from the PC)?
    How will the last extra step be processed?
    Please help.
    Thanks in advance.
    Kind Regards,
    Prasad

    Hi,
      refer
    https://forums.sdn.sap.com/click.jspa?searchID=10949951&messageID=5211212
    Regards
    Kiran Sure

  • Comparison of Data Loading Techniques - SQL*Loader & External Tables

    Below are two techniques for loading data from flat files into Oracle tables.
    1) SQL*Loader:
    a. Place the flat file (.txt or .csv) in the desired location.
    b. Create a control file, e.g. mytextfile.ctl:
    LOAD DATA
    INFILE 'mytextfile.txt'          -- file containing the table data; specify the path correctly, it could be a .csv as well
    APPEND                           -- or TRUNCATE, based on requirement
    INTO TABLE oracle_table
    FIELDS TERMINATED BY ','         -- or whatever delimiter the input file uses
    OPTIONALLY ENCLOSED BY '"'
    (field1, field2, field3)
    c. Now run Oracle's sqlldr utility from the operating system command prompt:
    sqlldr username/password control=mytextfile.ctl
    d. The data can be verified by selecting it from the table:
    select * from oracle_table;
    2)     External Table:
    a.     Place the flat file (.txt or .csv) on the desired location.
    abc.csv
    1,one,first
    2,two,second
    3,three,third
    4,four,fourth
    b.     Create a directory
    create or replace directory ext_dir as '/home/rene/ext_dir'; -- path where the source file is kept
    c. After granting appropriate permissions to the user, we can create the external table as below.
    create table ext_table_csv (
      i number,
      n varchar2(20),
      m varchar2(20)
    )
    organization external (
      type oracle_loader
      default directory ext_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
        missing field values are null
      )
      location ('abc.csv')
    )
    reject limit unlimited;
    d.     Verify data by selecting it from the external table now
    select * from ext_table_csv;
    The external tables feature is a complement to the existing SQL*Loader functionality.
    It allows you to:
    • Access data in external sources as if it were in a table in the database.
    • Merge a flat file with an existing table in one statement.
    • Sort a flat file on the way into a table you want compressed nicely.
    • Do a parallel direct path load without splitting up the input file.
    Shortcomings:
    • External tables are read-only.
    • No data manipulation language (DML) operations or index creation is allowed on an external table.
    Compared with SQL*Loader, external tables also let you:
    • Load the data from within a stored procedure or trigger (it is just an INSERT, which sqlldr is not).
    • Do multi-table inserts.
    • Flow the data through a pipelined PL/SQL function for cleansing/transformation.
    Comparison for data loading
    To make the loading operation faster, the degree of parallelism can be set to any number, e.g 4
    So, when you created the external table, the database will divide the file to be read by four processes running in parallel. This parallelism happens automatically, with no additional effort on your part, and is really quite convenient. To parallelize this load using SQL*Loader, you would have had to manually divide your input file into multiple smaller files.
    Conclusion:
    SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. However, we can always copy the data from external tables to Oracle Tables using DB links.

    Please let me know your views on this.

  • Criticism of new data "optimization" techniques

    On February 3, Verizon announced two new network practices in an attempt to reduce bandwidth usage:
    Throttling data speeds for the top 5% of new users, and
    Employing "optimization" techniques on certain file types for all users, in certain parts of the 3G network.
    These were two separate changes, and this post only talks about (2), the "optimization" techniques.
    I would like to criticize the optimization techniques as being harmful to Internet users and contrary to long-standing principles of how the Internet operates. This optimization can lead to web sites appearing to contain incorrect data, web sites appearing to be out-of-date, and depending on how optimization is implemented, privacy and security issues. I'll explain below.
    I hope Verizon will consider reversing this decision, or if not, making some changes to reduce the scope and breadth of the optimization.
    First, I'd like to thank Verizon for posting an in-depth technical description of how optimization works, available here:
    http://support.vzw.com/terms/network_optimization.html
    This transparency helps increase confidence that Verizon is trying to make the best decisions for their users. However, I believe they have erred in those decisions.
    Optimization Contrary to Internet Operating Principles
    The Internet has long been built around the idea that two distant servers exchange data with each other by transmitting "packets" using the IP protocol. The headers of these packets contain the information required such that all the Internet routers located between these servers can deliver the packets. One of the Internet's operating principles is that when two servers set up an IP connection, the routers connecting them do not modify the data. They may route the data differently, modify the headers in some cases (like network address translation), or possibly, in some cases, even block the data--but not modify it.
    What these new optimization techniques do is intercept a device's connection to a distant server, inspect the data, determine that the device is downloading a file, and in some cases, to attempt to reduce bandwidth used, modify the packets so that when the file is received by the device, it is a file containing different (smaller) contents than what the web server sent.
    I believe that modifying the contents of the file in this manner should be off-limits to any Internet service provider, regardless of whether they are trying to save bandwidth or achieve other goals. An Internet service provider should be a common carrier, billing for service and bandwidth used but not interfering in any way with the content served by a web server, the size or content of the files transferred, or the choices of how much data their customers are willing to use and pay for by way of the sites they choose to visit.
    Old or Incorrect Data
    Verizon's description of the optimization techniques explains that many common file types, including web pages, text files, images, and video files will be cached. This means that when a device visits a web page, it may be loading the cached copy from Verizon. This means that the user may be viewing a copy of the web site that is older than what the web site is currently serving. Additionally, if some files in the cache for a single web site were added at different times, such as CSS files or images relative to some of the web pages containing them, this may even cause web pages to render incorrectly.
    It is true that many users already experience caching because many devices and nearly all computer browsers have a personal cache. However, the user is in control of the browser cache. The user can click "reload" in the browser to bypass it, clear the cache at any time, or change the caching options. There is no indication with Verizon's optimization that the user will have any control over caching, or even knowledge as to whether a particular web page is cached.
    Potential Security and Privacy Violations
    The nature of the security or privacy violations that might occur depends on how carefully Verizon has implemented optimization. But as an example of the risk, look at what happened with Google Web Accelerator. Google Web Accelerator was a now-discontinued product that users installed as a browser add-on, which used centralized caches stored on Google's servers to speed up web requests. However, some users found that on web sites where they logged on, they were served personalized pages that actually belonged to different users, containing their private data. This is because Google's caching technology was initially unable to distinguish between public and private pages, and different people received pages that were cached by other users. This can be fixed or prevented with very careful engineering, but caching adds a big level of risk that this type of privacy problem will occur.
    However, Verizon's explanation of how video caching works suggests that these problems with mixed-up files will indeed occur. Verizon says that their caching technology works by examining "the first few frames (8 KB) of the video". This means that if multiple videos are identical at the start, that the cache will treat them the same, even if they differ later on in the file.
    Although it may not happen very frequently, this could mean that if two videos are encoded in the same manner except for the fact that they have edits later in the file, that some users may be viewing a completely different version of the video than what the web server transmitted. This could be true even if the differing videos are stored at completely separate servers, as Verizon's explanation states that the cataloguing process caches videos the same based on the 8KB analysis even if they are from different URLs.
    Questions about Tethering and Different Devices
    Verizon's explanation says near the beginning that "The form and extent of optimization [...] does not depend on [...] the user's device". However, elsewhere in the document, the explanation states that transcoding may be done differently depending on the capabilities of the user's device. Perhaps a clarification in this document is needed.
    The reason this is an important issue is that many people may wish to know if optimization happens when tethering on a laptop. I think some people would view optimization very differently depending on whether it is done on a phone, or on a laptop. For example, many people, for, say, business reasons, may have a strong requirement that a file they downloaded from a server is really the exact file they think they downloaded, and not one that has been optimized by Verizon.
    What I would Like Verizon To Do
    With respect to Verizon's need to limit bandwidth usage or provide incentives for users to limit their bandwidth usage, I hope Verizon reverses the decision to deploy optimization and chooses alternate, less intrusive means to achieve their bandwidth goals.
    However, if Verizon still decides to proceed with optimization, I hope they will consider:
    Allowing individual customers to disable optimization completely. (Some users may choose to keep it enabled, for faster Internet browsing on their devices, so this is a compromise that will achieve some bandwidth savings.)
    Only optimizing or caching video files, instead of more frequent file types such as web pages, text files, and image files.
    Disabling optimization when tethering or using a Wi-Fi personal hotspot.
    Finally, I hope Verizon publishes more information about any changes they may make to optimization to address these and other concerns, and commits to customers and potential customers about their future plans, because many customers are in 1- or 2-year contracts, or considering entering such contracts, and do not wish to be impacted by sudden changes that negatively impact them.
    Verizon, if you are reading, thank you for considering these concerns.

    A very well written and thought out article. And you're absolutely right - this "optimization" is exactly the reason Verizon is fighting the new net neutrality rules. Of course, Verizon itself (and its most ardent supporters on the forums) will fail to see the irony of requiring users to obtain an "unlimited" data plan, then complaining about data usage and trying to limit it artificially. It's like a hotel renting you a room for a week, then complaining you stayed 7 days.
    Of course, it was all part of the plan to begin with - people weren't buying the data plans (because they were such a poor value), so the decision was made to start requiring them. To make it more palatable, they called the plans "unlimited" (even though at one point "unlimited" meant limited to 5 GB, a cap that was later dropped). Then, once the idea of mandatory data settles in, implement data caps with overages, which is what they were shooting for all along. AT&T has already leapt; Verizon has said they will, too.

  • Need help on Proper LabVIEW data smoothing technique

    Hello,
    I am wondering if anybody can shed light on the proper way to smooth data collected from an analog input. I am collecting a channel with 100 samples to read at a rate of 1000 Hz. My intention is to average the 100 samples and treat the average as one data point per iteration of my While loop. The next time my While loop executes, it averages another 100 samples, treats the average as one data point, and so on.
    The way I'm doing it now is as follows:
    DAQ Express VI ----> Collector (from Signal Manipulation pallette)  ------>  Statistics (Arithmetic Means from Signal Analysis pallette)
    ----> Convert from DDT to 1-D array of waveforms.
    But in doing so, I lose the time information of my waveforms, i.e. the data coming out of "Convert from DDT to 1-D array of waveforms" is always one data point with t=0 for each iteration of my While loop. This was not my original intention. All I want is to smooth the data from my data collection by averaging the sampled data while preserving the time stamp. How can I achieve this? Thanks for any pointers.

    Hi Peter,
    The best option would be a moving average technique.
    I am attaching some examples that do moving average with simulated signals.
    You can implement this logic in your application
    Regards
    Dev
    Attachments:
    Moving_Average_on_Multiple_Records.llb ‏152 KB
    Moving_Average.vi ‏33 KB

  • DWH data loading techniques

    Hi, I am fairly new to data warehousing.
    I have two databases, an application DB and a DWH; currently I can easily load the data from the application DB into the DWH because both databases are on the same host.
    Now I need to load data from a live DB into the DWH, and that live DB is hosted elsewhere.
    I am thinking of creating a NoSQL DB on the same host as the DWH, making a dump of the live DB and loading it into the NoSQL DB.
    Is this the right way or not?
    If yes,
    can you please suggest some loading techniques
    which I can use to load the data from NoSQL into my new DWH?
    Thank you in advance
    Guka

    "NOSQL"  in "SQL and PL/SQL" forum? ;-)
    On a more serious note, if your source and target DBs are Oracle, then why add another variable? Oracle can handle DWH loads very efficiently for very large data volumes (even across different hosts).
    vr,
    Sudhakar

  • Data transfer from maintenance view to Excel sheet

    I have one maintenance view. How do I transfer the data from that maintenance view to Excel through BDC?
    Thanks,
    Regards,
    Vishal Bhagwat.

    Hi Vishal,
    In the BDC program, call the function module below.
    You can try using the KCD_EXCEL_OLE_TO_INT_CONVERT FM to read the spreadsheet into a table of cells.
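    A hedged sketch of that call is shown below; the file path, column/row range, target line type and error handling are placeholder assumptions, not values from this thread.
    DATA: t_cells  TYPE STANDARD TABLE OF kcde_cells,
          wa_cells TYPE kcde_cells,
          itab     TYPE STANDARD TABLE OF zmy_view_line WITH HEADER LINE,  "placeholder line type
          l_index  TYPE i.
    FIELD-SYMBOLS: <f_value> TYPE any.
    CALL FUNCTION 'KCD_EXCEL_OLE_TO_INT_CONVERT'
      EXPORTING
        filename                = 'C:\temp\view_data.xls'   "placeholder path
        i_begin_col             = 1
        i_begin_row             = 1
        i_end_col               = 10
        i_end_row               = 1000
      TABLES
        intern                  = t_cells
      EXCEPTIONS
        inconsistent_parameters = 1
        upload_ole              = 2
        OTHERS                  = 3.
    IF sy-subrc <> 0.
      MESSAGE 'Could not read the Excel file' TYPE 'E'.
    ENDIF.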
    Add values to internal table
    SORT t_cells BY row col.
    LOOP AT t_cells INTO wa_cells.
      MOVE wa_cells-col TO l_index.
      " Put the cell value into the matching column of the target work area
      ASSIGN COMPONENT l_index OF STRUCTURE itab TO <f_value>.
      MOVE wa_cells-value TO <f_value>.
      " At the end of each spreadsheet row, append the completed record
      AT END OF row.
        APPEND itab.
        CLEAR itab.
      ENDAT.
    ENDLOOP.
    Regards,
    Aarti.

  • Data transfer to file

    Hi,
    I want to transfer the data from two internal tables to a single file. Both internal tables are interconnected:
    one holds the header values and the other the item details. I tried to do it as follows:
    LOOP AT g_t_bkpf.
      TRANSFER g_t_bkpf TO l_output_file.
      LOOP AT g_t_bseg WHERE bukrs = g_t_bkpf-bukrs AND belnr = g_t_bkpf-belnr.
        TRANSFER g_t_bseg TO l_output_file.
      ENDLOOP.
    ENDLOOP.
    It's giving an error. I want the layout of the file to be:
    Document Header details
              Item1
              Item2 and so on...
    Please help

    Hi,
    I tried the same, but the data in the tables is like this:
    g_t_bkpf:
    doc_no, year, and so on - that combination is unique.
    But in g_t_bseg:
    the details of each item for that particular doc_no.
    If I combine both, it contains:
    unique details - item 1
    unique details - item 2
    and I don't want it like that. It needs to be:
    unique details:
                   Item1
                   Item2
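    A minimal sketch of one way to get that header/item layout, assuming g_t_bkpf and g_t_bseg are typed like BKPF/BSEG, the dataset is opened explicitly, and only character-like fields are written to each line (the chosen fields and the comma separator are illustrative, not from the original post):
    DATA: ls_bkpf TYPE bkpf,
          ls_bseg TYPE bseg,
          lv_line TYPE string.
    OPEN DATASET l_output_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    LOOP AT g_t_bkpf INTO ls_bkpf.
      " One header line per document
      CONCATENATE ls_bkpf-bukrs ls_bkpf-belnr ls_bkpf-gjahr
                  INTO lv_line SEPARATED BY ','.
      TRANSFER lv_line TO l_output_file.
      " Followed by its item lines (prefix them with blanks if an indented layout is needed)
      LOOP AT g_t_bseg INTO ls_bseg WHERE bukrs = ls_bkpf-bukrs
                                      AND belnr = ls_bkpf-belnr.
        CONCATENATE ls_bseg-buzei ls_bseg-hkont ls_bseg-sgtxt
                    INTO lv_line SEPARATED BY ','.
        TRANSFER lv_line TO l_output_file.
      ENDLOOP.
    ENDLOOP.
    CLOSE DATASET l_output_file.
    In a Unicode system, transferring the raw BKPF/BSEG structures in text mode is one likely cause of the reported error, since text-mode TRANSFER expects character-like lines; writing a flat character line per record avoids that.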

  • Related to data transfer techniques

    hi,
    When do we use LSMW, and when do we use BDC?
    Please explain clearly.
    with regards,
    ramnaresh.

    Hi,
    Batch Data Communication (BDC) is the oldest batch interfacing technique that SAP has provided since the early versions of R/3. BDC is not a
    typical integration tool in the sense that it can only be used for uploading data into R/3, so it is not bi-directional.
    BDC works on the principle of simulating user input for transaction screens via an ABAP program. Typically the input comes in the form
    of a flat file. The ABAP program reads this file and formats the input data screen by screen into an internal table (BDCDATA). The
    transaction is then started using this internal table as the input and executed in the background.
    In Call Transaction, the transactions are triggered at the time of processing itself, so the ABAP program must do the error handling.
    It can also be used for real-time interfaces and custom error handling and logging features. In Batch Input Sessions, by contrast, the ABAP
    program creates a session with all the transactional data, and this session can be viewed, scheduled and processed (using
    transaction SM35) at a later time. The latter technique has a built-in error processing mechanism too.
    Batch Input (BI) programs still use the classical BDC approach but do not require an ABAP program to be written to format the
    BDCDATA. The user has to format the data using predefined structures and store it in a flat file. The BI program then reads this file and
    invokes the transaction mentioned in the header record of the file.
    Direct Input (DI) programs work in much the same way as BI programs. The only difference is that instead of processing screens they validate
    fields and load the data directly into tables using standard function modules. For this reason, DI programs are much faster (RMDATIND, the material master DI program, works at least 5 times faster) than their BDC counterparts and so are ideally suited for loading large data volumes. DI programs are
    not available for all application areas.
    LSMW is an encapsulated data transfer tool. It can provide the same functionality as BDC, in fact much more, but from a technical perspective most of the parameters are encapsulated. To list some of the differences:
    • LSMW is basically designed for a functional consultant who does not do much coding but needs to explore the functionality, while BDC is designed for a technical consultant.
    • LSMW offers different techniques for migrating data: direct input, BAPI, IDoc and batch input recording, while BDC basically uses recording.
    • In LSMW the mapping is provided by SAP, while in BDC we have to do it explicitly.
    • LSMW is basically for standard SAP applications, while BDC is basically for customized applications.
    • Coding can be done more flexibly in BDC than in LSMW.
    It all depends upon the requirement; choose the method that fits it.
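    For illustration, here is a minimal, hedged sketch of the Call Transaction technique described above. The transaction code, module pool, screen number and field names are placeholders, not a real recording.
    DATA: lt_bdcdata TYPE STANDARD TABLE OF bdcdata,
          ls_bdcdata TYPE bdcdata,
          lt_msgs    TYPE STANDARD TABLE OF bdcmsgcoll.
    " First screen of the transaction (placeholder module pool / screen number)
    CLEAR ls_bdcdata.
    ls_bdcdata-program  = 'SAPMZDEMO'.
    ls_bdcdata-dynpro   = '0100'.
    ls_bdcdata-dynbegin = 'X'.
    APPEND ls_bdcdata TO lt_bdcdata.
    " A field value on that screen (placeholder field name and value)
    CLEAR ls_bdcdata.
    ls_bdcdata-fnam = 'ZDEMO-MATNR'.
    ls_bdcdata-fval = 'MAT-001'.
    APPEND ls_bdcdata TO lt_bdcdata.
    " Function code to trigger (Enter)
    CLEAR ls_bdcdata.
    ls_bdcdata-fnam = 'BDC_OKCODE'.
    ls_bdcdata-fval = '/00'.
    APPEND ls_bdcdata TO lt_bdcdata.
    " Call Transaction method: executes immediately; the caller handles the messages
    CALL TRANSACTION 'ZDEMO' USING lt_bdcdata
                             MODE 'N'       "no screen display
                             UPDATE 'S'     "synchronous update
                             MESSAGES INTO lt_msgs.
    With the session method, the same BDCDATA table would instead be passed to BDC_INSERT between BDC_OPEN_GROUP and BDC_CLOSE_GROUP, and the resulting session processed later in transaction SM35.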
    thanks
    mrutyun^

  • Sine waveform data Compression Techniques

    Hi Engineers,
    I am looking for some techniques or algorithms to compress sine waveform data.
    (I have already made some changes, such as going from DBL to SGL format and to 16-bit integers.)
    I cannot afford to lose samples.
    Thanks and Regards
    Himanshu Goyal

    Himanshu,
    Simple arithmetic for binary files:
    30 channels at 10 kS/s, presumably double precision => 30 * 10,000 * 8 bytes/s = 2,400,000 bytes/s (roughly 2.4 MB/s).
    Running the application for 1 hour therefore produces about 2.4 MB/s * 60 * 60 ≈ 8.6 GB.
    So the file you are getting is already the most condensed version of the data without losing information.
    Losing information will bring the size down, but you have to consider which information you want to discard.
    Possible ways:
    1) Convert all data to single precision: you will lose precision, possibly clipping values that are very big or very small. On the other hand, you cut the space in half (about 4.3 GB/h).
    2) Averaging: calculate the average over several values. This is OK for good oversampling (>1000) and small numbers (<50), especially when the signal has a lot of noise. The space needed is divided by the number of values you average over. Please note that you cannot use a moving average for this (a moving average is in fact a simple filtering method and does not reduce the number of samples).
    3) Calculate a form-fit function for packages of the signal and store only the parameters of that function: best compression, but you lose nearly all information about the waveform and introduce uncertainties from the form fit (increasing errors). In addition, adjacent packages might show steps relative to one another, since the form-fit function will not produce a continuous function without steps.
    There are more methods for sure, but those are the most basic and common ones that I can think of.
    hope this helps,
    Norbert

  • Data Transfer with IDoc

    Hi,
    Can anybody please provide a document and/or more information on R/3 to APO data transfer using IDoc and XI? The R/3 version is 3.0 and CIF is not possible.
    Thanks,
    Sanju

    Hi Sanjay,
    Since APO is based on BW technology, it is possible to transfer R/3 data to APO using BAPIs via ALE/RFC if CIF is not available.
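    As a rough illustration of the ALE/RFC route mentioned above: calling a BAPI-style function module in the target system only needs an RFC destination. The destination name and the function module below are placeholders, not taken from this thread.
    DATA lt_return TYPE STANDARD TABLE OF bapiret2.
    " 'APOCLNT800' is a placeholder RFC destination maintained in SM59
    CALL FUNCTION 'BAPI_XXX_SAVEDATA'    "placeholder BAPI name
      DESTINATION 'APOCLNT800'
      TABLES
        return = lt_return.
    " Remote BAPIs also need an explicit commit in the target system
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      DESTINATION 'APOCLNT800'
      EXPORTING
        wait = 'X'.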
    There is a lot of business content around SCM; please have a look, maybe it is helpful:
    http://help.sap.com/saphelp_nw04/helpdata/en/29/79eb3cad744026e10000000a11405a/frameset.htm
    Lilly

  • Very slow data transfer to USB 2.0 external hard drive

    This might have already been covered here, but I was not able to find anything.
    I have a Mac Pro running OS X 10.5.8. I bought a 1.5 TB external hard drive, formatted it to HFS+ and plugged it in. The drive pops up OK, but when I start transferring files the transfer speed is something like 10-15 Mbit/s, which is quite a lot below the theoretical 480 Mbit/s. I know you cannot reach that, but 10-15 Mbit/s seems very low.
    The HDD is an Iomega drive.
    Could anybody suggest something to speed up the transfer process? I edit HD video, so the files tend to get rather large.
    Thanks !!

    Hi,
    I have the same problem. I have an aluminium MacBook 2.4 GHz with 2 GB RAM and a Maxtor external hard drive:
    OneTouch:
    Capacity: 465.76 GB
    Removable Media: Yes
    Detachable Drive: Yes
    BSD Name: disk1
    Product ID: 0x7310
    Vendor ID: 0x0d49
    Version: 1.25
    Serial Number: XXXXXXXX
    Speed: Up to 480 Mb/sec
    Manufacturer: Maxtor
    Location ID: 0x26200000
    Current Available (mA): 500
    Current Required (mA): 2
    Mac OS 9 Drivers: No
    Partition Map Type: MBR (Master Boot Record)
    S.M.A.R.T. status: Not Supported
    There are 2 partitions on it, one for Mac (formatted as Journaled HFS+)
    and the other for Windows as FAT32.
    Transfers to either of the 2 partitions seem to be slow, about 2 MB/s,
    but if I plug the external drive into a Vista machine it transfers normally, around 15 Mb/s.
    Any suggestion would be very helpful!

  • PI-BI Data transfer

    Hi Team,
    I am unable to find a document on how to extract data from PI into BI.
    Please post a link or a document on how to extract the data.
    Thanks,
    Venkat.P

    Hi
    If you search SDN you will find a lot of links on these topics.
    To push data from BI to XI, please check
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/g-i/how%20to%20push%20data%20into%20bw%20from%20xi.pdf
    To send data from XI to BW please check
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/g-i/integrate%20bw%20via%20xi.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/40574601-ec97-2910-3cba-a0fdc10f4dce
    Use this thread: XI-BI Integration using RFC
    Pushing data from BI to XI using RFC Adapter
    some other links.
    ODS -- XI  -- RFC Synch call
    Regards,
    Shweta
