Looking for ideas for transferring large amounts of data between systems

Hello,
I am looking for ideas, based on best practices, for transferring large amounts of data in and out of a NetWeaver-based application.
We have a new system we are developing in NetWeaver that will use both the Java and ABAP stacks, and that will require integration with other SAP and third-party systems. It is a standalone product that doesn't share any form of data store with other systems.
We need to be able to support tens of millions of records of tabular data coming in and out of our system.
Since we need to integrate with so many different systems, we are planning to use RFC as our primary interface in and out of the system. As it turns out, RFC does not deal well with this much data being pushed through a single call.
We have considered a number of possible approaches, but we are not very happy with any of them. I would like to hear what the community has done in the past to solve problems like this, as well as how SAP currently solves this problem in other applications such as XI, BI, and ERP.

Similar Messages

  • How can I copy a large amount of data between two HDs?

    Hello !
    Which command could I use to copy a large amount of data between two hard disk drives?
    How does Lion identify the disk drives when you want to write a script? For example, in Windows I use
    Robocopy D:\folder source\Files E:\folder destination
    I just want to copy files, and if the files/folders already exist in the destination they should be overwritten.
    Help please, I bought my first Mac 4 days ago.
    Thanks !

    Select the files/folders on one HD and drag & drop onto the other HD. The copied ones will overwrite anything with the same names.
    Since you're a newcomer to the Mac, see these:
    Switching from Windows to Mac OS X,
    Basic Tutorials on using a Mac,
    Mac 101: Mac Essentials,
    Mac OS X keyboard shortcuts,
    Anatomy of a Mac,
    MacTips,
    Switching to Mac Superguide, and
    Switching to the Mac: The Missing Manual,
    Snow Leopard Edition.
    Additionally, Texas Mac Man recommends:
    Quick Assist,
    Welcome to the Switch To A Mac Guides,
    Take Control E-books, and
    A guide for switching to a Mac.

  • How to pass large amount of data between steps

    Hi all,
    I have some LabVIEW VIs for data acquisition.
    I need to pass a large amount of data (array size > 5,000,000 elements each time) from one step to another.
    But it is not allowed to set an array size larger than 5,000,000.
    Any suggestion?
    czhen
    Win 7 SP1 & LabVIEW 2012 SP1, Teststand 2012 SP1
    Attachments:
    Array Size Limits.png (34 KB)

    In your LabVIEW code, put the data into a data value reference.  Pass this reference between your TestStand steps.  As an added bonus, you will not get an extra copy of the data at each step.  You will need to use the InPlace element structure to get your data out of the data value reference.

  • Best data provisioning tool for a very large amount of data updated in real time?

    We have about a few hundred million entries of data a day, and they must be replicated to SAP HANA in real time. What would be the best option?

    Hi Wayne,
    If you are looking for real time replication, then SLT is the best option. What is the source system for this replication?
    Regards,
    Chandu.

  • [SOLVED] Problem transferring large amounts of data to ntfs drive

    I have about 150GB of files that I am trying to transfer from my home partition, which is ext3 /dev/sdd2, to another partition, which is NTFS mounted with ntfs-3g /dev/sde1 (both drives are internal SATA). The files range in size from 200MB to 4GB. When the files start to move they transfer at a reasonable speed (10 to 60 MB/s), but will randomly (usually after about 1 to 5GB transferred) slow down to about 500 KB/s. The computer becomes unusable at this point, and even if I cancel the transfer the computer will continue to be unusable (I must use Alt+SysRq REISUB). I have tried transferring with Dolphin, Nautilus, and the mv command, but they all produce the same results. I have also tried this in Dolphin as root with no change. If I leave Dolphin running long enough I also get the message "the process for the file protocol died unexpectedly".
    There is nothing that I can tell is wrong with the drives, I've run disk checks on both, and checked the S.M.A.R.T. readings for both disks and everything was fine.
    My hardware is an intel X58 motherboard, core i7 processor, 6GB of RAM, 5 internal sata drives and 1 internal sata optical drive.
    Another thing to note is that every once in a while I will get an error message at boot saying "Disabling IRQ #19".
    This is driving me crazy as I have no idea why this is happening and when I search I can't find any solutions that work.
    If anybody knows how to solve this or can help me diagnose the problem please help.
    Thank you.

    Primoz wrote: Do you use KDE (Dolphin) 4.6 RC or 4.5?
    Also, I've noticed that if I move / copy things with Dolphin they're substantially slower than if I use cp/mv. But cp/mv works fine for me...
    Also, run Dolphin from a terminal to try and see what the problem is.
    Hope that helps at least a bit.
    Could you explain why Dolphin should be slower? I'm not attacking you, I'm just asking.
    Because I thought that Dolphin is just a "little" wrapper around the cp/mv/cd/ls applications/commands.

  • Transferring large amount of data to external drive

    Hi, I have 6.53 GB of data on my Mac Pro which is for a Windows XP system. I have attached an NTFS-formatted 20 GB empty external drive to my system via USB. When I pick up my file to transfer to the external drive, a transparent 'no entry' logo appears and I cannot drop it onto the external drive. I'm aware of the 4 GB limit using the FAT format, but the external disk is definitely NTFS. I have considered using a dual-layer DVD, but would that work if I'm having these problems now? I would far prefer to just copy this to the external drive, as DL DVDs are expensive! Any help would be appreciated. Thanks

    The issue is that the Mac OS can't write to NTFS drives without 3rd-party software.
    This software can do it, but I'm not sure if it's SL compatible yet:
    http://www.paragon-software.com/home/ntfs-mac/
    There is also NTFS-3G, which is not SL compatible at this time:
    http://www.ntfs-3g.org/
    ~Lyssa

  • Passing large amount of data between steps

    Hello,
    in my application, I developed some step types for data acquisition and analysis.
    I need to pass large arrays (>100000 points) from one step to another.
    Currently I pass the sequence context, and I use variants to retrieve the data (in the second step).
    However, this method seems quite slow.
    Has anyone any suggestion?
    I use TS3.5 and CVI 8.0
    Thank you
    baloss

    Hi,
    You should be able to pass the data back directly via the parameter list of the function, and likewise directly into the next function, without having to use the sequence context.
    Regards
    Ray Farmer

  • Deleting large amounts of data

    All,
    I have several tables that have about 1 million plus rows of historical data that is no longer needed, and I am considering deleting the data. I have heard that deleting the data will actually slow down performance, as it will mess up the indexing; is this true? What if I recalculate statistics after deleting the data? In general, I am looking for advice on best practices for deleting large amounts of data from tables.
    For everyone's reference, I am running Oracle 9.2.0.1.0 on Solaris 9.
    Thanks in advance for the advice!
    Ron

    Another problem with DELETE is that it generates a vast amount of redo log (and archived log) information. A better way to get rid of the unneeded data would be to use the TRUNCATE command:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_107a.htm#2067573
    The problem with TRUNCATE is that it removes all the data from the table. In order to keep some of the data, you can do the following (a rough JDBC sketch of these steps is included at the end of this reply):
    1. CREATE TABLE <stage_table> AS SELECT * FROM <main_table> WHERE <data you want to keep clause>
    2. Save the indexes, constraints, trigger definitions, and grants from the main_table.
    3. Drop the main table.
    4. RENAME <stage_table> TO <main_table>.
    5. Recreate the indexes, constraints, and triggers.
    Another method is to use partitioning to partition the data based on a key (you mentioned "historical", so the key could be some date column). Then you can drop the historical partitions when they are no longer needed.
    As for your question about recalculating the statistics: that will not release the storage allocated to the index. You'll need to execute ALTER INDEX <index_name> REBUILD:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_18a.htm
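    If you prefer to script those steps rather than run the statements by hand, here is a rough JDBC sketch. The connection details, table names, index name, and WHERE clause are all placeholders for illustration; it is only meant to show the sequence, not to be run as-is against a production schema.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ArchiveAndRebuild {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for an Oracle instance.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
                 Statement st = con.createStatement()) {

                // 1. Copy only the rows you want to keep into a staging table.
                st.execute("CREATE TABLE hist_stage AS SELECT * FROM hist_main "
                         + "WHERE event_date >= DATE '2003-01-01'");

                // 2/3. Drop the original table (indexes, constraints, triggers and
                //      grants must be re-created afterwards from the DDL you saved).
                st.execute("DROP TABLE hist_main");

                // 4. Rename the staging table back to the original name.
                st.execute("ALTER TABLE hist_stage RENAME TO hist_main");

                // 5. Re-create the indexes (one shown here; repeat for the rest).
                st.execute("CREATE INDEX hist_main_dt_ix ON hist_main (event_date)");
            }
        }
    }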
    Mike

  • Storing large amounts of data

    Hello,
    I'd like to use Berkeley DB for logging large amounts of data, i.e. structures that are ~400KB in size which I need to store ~10 times per second for up to several hours, but I run into quite big performance issues the more records I insert into the database. I've set the page size to its maximum (64KB; I split my data into several packages so it doesn't get stored on an overflow page) and experimented with several cache sizes (8MB, 64MB, 2GB, 4GB), but I haven't managed to get rid of the performance issues, independent of which access method I use (although I got the "best" results when using DB_QUEUE, but that varies heavily from day to day).
    To get to the point: performance starts at "0" seconds per insert (where 1 "insert" = 7 real inserts because of splitting up the data), between the 16,750th and 17,000th insertion it takes ~0.00352 seconds per insert, and by the 36,000th insertion it already takes about 0.0074 seconds per insert, and so on...
    Does anyone have an idea of how I can improve performance? Because when the time needed for each insertion keeps increasing over time, at some point it becomes impossible to keep the program running at its intended speed.
    Thanks,
    Thomas

    Hello,
    A good starting point are the suggestions in the Berkeley DB Reference Guide at:
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/am_misc_tune.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_tune.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_throughput.html
    Thanks,
    Sandra
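    One note in case it helps: the post above appears to use the C edition of Berkeley DB (page size, DB_QUEUE). If you were instead on the Berkeley DB Java Edition (com.sleepycat.je), a minimal bulk-logging sketch with a deferred-write database would look roughly like the following; the environment path, cache size, record size, and sync interval are assumptions for illustration, and the Java Edition is a different product from the C library, so the tuning links above remain the authoritative guidance.
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import java.io.File;

    public class LogWriter {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setCacheSize(256L * 1024 * 1024);   // 256 MB cache (assumed)
            Environment env = new Environment(new File("/tmp/logenv"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setDeferredWrite(true);   // buffer writes; flush explicitly at intervals
            Database db = env.openDatabase(null, "samples", dbConfig);

            byte[] payload = new byte[400 * 1024];   // one ~400 KB sample (assumed size)
            for (long i = 0; i < 1000; i++) {
                DatabaseEntry key = new DatabaseEntry(Long.toString(i).getBytes());
                DatabaseEntry data = new DatabaseEntry(payload);
                db.put(null, key, data);
                if (i % 100 == 0) {
                    db.sync();   // flush deferred writes periodically
                }
            }
            db.close();
            env.close();
        }
    }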

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use a CachedRowSet on the page bean for getting the data. This works very well at the moment in my development environment, where I am testing with a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. Apart from that case, when viewing in paged mode, does the component fetch all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has to do with the paging logic. How do you specify which set of 20 records to extract from SQL?
    Thanks for your help!!
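    Regarding the "which set of 20 records" question above: one option is to let the CachedRowSet do the paging, so that only one page is materialized at a time. A minimal sketch follows; the connection URL, credentials, table, and column names are placeholders, and RowSetProvider requires Java 7 or later (older environments used com.sun.rowset.CachedRowSetImpl directly).
    import javax.sql.rowset.CachedRowSet;
    import javax.sql.rowset.RowSetProvider;

    public class PagedQuery {
        public static void main(String[] args) throws Exception {
            CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
            crs.setUrl("jdbc:mysql://localhost:3306/appdb");   // placeholder connection info
            crs.setUsername("appuser");
            crs.setPassword("secret");
            crs.setCommand("SELECT id, name, amount FROM orders ORDER BY id");
            crs.setPageSize(20);    // fetch 20 rows per page instead of the whole result set
            crs.execute();          // populates only the first page

            do {
                while (crs.next()) {
                    System.out.println(crs.getInt("id") + " " + crs.getString("name"));
                }
            } while (crs.nextPage());   // pulls the next 20 rows on demand
            crs.close();
        }
    }
    The other option is to push the paging into the SQL itself (ROWNUM ranges on Oracle, LIMIT/OFFSET on MySQL or PostgreSQL), so the database only ever returns the requested window of rows.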

  • Power BI performance issue when loading a large amount of data from a database

    I need to load a data set from my database, which contains a large amount of data. It takes a very long time to initialize the data before I can build a report. Is there any good way to process large amounts of data in Power BI? Since many people analyze data with Power BI, is there any suggestion for loading large amounts of data from a database?
    Thanks a lot for the help

    Hi Ruixue,
    We have made significant performance improvements to Data Load in the February update for the Power BI Designer:
    http://blogs.msdn.com/b/powerbi/archive/2015/02/19/6-new-updates-for-the-power-bi-preview-february-2015.aspx
    Would you be able to try again and let us know if it's still slow? With the latest improvements, it should take between half and one third of the time that it used to.
    Thanks,
    M.

  • How do I pause an iCloud restore for app with large amounts of data?

    I am using an iPhone app which is holding 10 GB of data (media files).
    Unfortunately, although all the data was backed up, my iPhone 4 was faulty and needed to be replaced with a new handset. On restore, the 10 GB of data takes a very long time to restore over Wi-Fi. If the restore is interrupted (I reached the halfway point during the night) because I go to work or take the dog for a walk, I end up, of course, on 3G for a short period of time.
    The next time I am in a Wi-Fi zone, the app starts restoring again right from the beginning.
    How does anyone restore an app with large amounts of data or pause a restore?

    You can use classifications, but there is no automatic feature to archive like that in web apps.
    In terms of the blog, as I have said to everyone who has posted about blog preview images:
    http://www.prettypollution.com.au/business-catalyst-blog
    Just one example of an image at the start of the blog post rendering out, not hard at all.

  • What Java collection for large amounts of data and user-customizable records?

    I'm trying to write an application which operates on a large amount of data. I want the user to be able to customize the data structure (record) from different types of variables (float, int, bool, string, enums). These records should be stored in some kind of array. Size of a record: 1-200 variables; size of the array of those records: about 100,000 items (one record every second throughout the whole day). I want these data stored in some embedded database (SQLite, HSQLDB) with access through plain JDBC. Could you give me some advice on how to design those data structures? Sincerely yours :)
    OK, maybe I should give an example. This is some C++ code.
    I made an interface:
    class CParamI {
    public:
         virtual ~CParamI() {}
         virtual string toString() = 0;
         virtual void addValue( CParamI * ) = 0;
         virtual void setValue( CParamI * ) = 0;
         virtual BYTE getType() = 0;
    };
    Then I made a template class derived from the CParamI interface:
    template <class T>
    class CParam : public CParamI {
    public:
         void setValue( T val );
         T getValue();
         string toString();
         void setValue( CParamI *src ) {
              if ( itemType == src->getType() ) {
                   CParam<T> *ptr = static_cast<CParam<T>*>( src );
                   value = ptr->value;
              }
         }
    private:
         BYTE itemType;
         T value;
    };
    A sample constructor of the <int> specialization:
    template<> CParam<int>::CParam() {
         itemType = ParamType::INTEGER;
    }
    This solution makes it possible for me to keep a collection of CParamI pointers:
    std::vector<CParamI*> myCollection;
    CParam<int> *pi = new CParam<int>();
    pi->setValue(10);
    myCollection.push_back( pi );
    Is this a correct solution? My main problem is getting data back out of the collection: I have to check its data type using the getType() method of the CParamI interface.
    Could you please give me some advice, some ideas on how to do this right in Java?

    If you have the requirement that you have to be able to configure on the fly, then what I've done in the past is just put everything into data pairs in a list: something along the lines of (<Vector>, <String>), where the Vector would store your data and the String would contain the data type. I would then write a checker to validate the input against the SQL data types that I want to support in the project. It's not a big deal with the amount of data you are talking about.
    The problem you're going to have is when you try to allow dynamic definition, on the fly, of data being input to a table that has already been defined. Your DB will not support that, unless you just store that data pair, which I do not suggest.
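    To make the data-pair idea above concrete, here is a minimal Java sketch. The class names, the supported type set, and the field values are all made up for illustration; the point is simply that each field carries its own type tag and is validated once on the way in, so no getType()-style casting is needed when reading.
    import java.util.ArrayList;
    import java.util.List;

    /** One field of a user-defined record: a value plus the type it claims to be. */
    class Field {
        enum Type { INT, FLOAT, BOOL, STRING }
        final Type type;
        final Object value;

        Field(Type type, Object value) {
            if (!isValid(type, value)) {
                throw new IllegalArgumentException(value + " is not a valid " + type);
            }
            this.type = type;
            this.value = value;
        }

        /** The "checker" from the reply: validate the value against its declared type. */
        static boolean isValid(Type type, Object value) {
            switch (type) {
                case INT:    return value instanceof Integer;
                case FLOAT:  return value instanceof Double;
                case BOOL:   return value instanceof Boolean;
                case STRING: return value instanceof String;
                default:     return false;
            }
        }
    }

    public class DynamicRecordDemo {
        public static void main(String[] args) {
            // A record is just a list of typed fields; an array of such records can then
            // be flattened into a generic (record_id, field_name, type, value) table in
            // SQLite or HSQLDB via plain JDBC.
            List<Field> record = new ArrayList<>();
            record.add(new Field(Field.Type.INT, 10));
            record.add(new Field(Field.Type.STRING, "pump pressure"));
            record.add(new Field(Field.Type.BOOL, Boolean.TRUE));

            for (Field f : record) {
                System.out.println(f.type + " = " + f.value);
            }
        }
    }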

  • Extremely Slow USB 3.0 Speeds When Transferring Large Amounts of Video

    Hi there,
    I am transferring large amounts of footage (250GB-1.75TB chunks) from 5x 5400rpm 2TB drives to 5x 7200rpm 2TB drives simultaneously (via 6x USB 3.0 connections and 4x SATA III, with copy/paste in Explorer) and the transfer speeds are incredibly slow. Initially the speeds show up as quite fast (45-150 MB/s+) but then they slow down to around 3 MB/s.
    The drives have not been manually defragmented but the vast majority of the files on each are R3D video files.
    I am wondering if the amount of drives/data being used/sent is what is causing such slow speeds or if there might be another culprit? I would be incredibly appreciative to learn of any solutions to increase speed significantly. Many thanks...
    Specs:
    OS: Windows 7 Professional
    Processor: i7 4790k
    RAM: 32GB
    GPU: Nvidia 970 GTX

    If the USB ports are all on the same controller, they share its resources, so the transfer rate with 6 ports in use would be at most 1/6th of the transfer rate with a single USB port, even if we disregard the overhead. Add that overhead to the equation and the transfer rate goes down even further. Now take into account the fact that you are copying from slow 5400 RPM disks that effectively max out at around 80 MB/s with these chunk sizes and high latency, add the OS overhead, and these transfer rates do not surprise me.
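    As a back-of-the-envelope illustration of that sharing argument, here is a tiny Java calculation. The usable controller bandwidth figure is an assumption for the sake of the example, not a measured value for this system.
    public class UsbThroughputEstimate {
        public static void main(String[] args) {
            double controllerBandwidthMBs = 400.0;   // assumed usable bandwidth of one shared USB 3.0 controller
            int portsInUse = 6;
            double perPortShare = controllerBandwidthMBs / portsInUse;   // roughly 66 MB/s per port before overhead

            double slowDiskMBs = 80.0;   // rough sustained rate of a 5400 RPM source drive with large files
            // Each copy is limited by whichever is slower: the source disk or its share of the controller.
            double perCopyRate = Math.min(perPortShare, slowDiskMBs);
            System.out.printf("Per-port share: %.1f MB/s, effective per copy: %.1f MB/s%n",
                    perPortShare, perCopyRate);
        }
    }
    In practice, protocol overhead, OS overhead, and seek contention on the source disks push the real number well below this bound, which is consistent with the slowdown described above.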

  • Looking for a International Calling Plan + Data?

    Hello, 
    Let me first start by saying I am deploying to South Korea (Osan AFB).
    I am looking for a plan with unlimited data usage and at least 450 minutes. I was reading about your global phones, and South Korea is CDMA, so that is good. It looks like I would have to get a new Droid, but that's alright.
    I have a few questions though:
    1.  How does the data work for that plan? I don't quite understand how it works.
    2. It says it's $1.99 per minute; do you really charge by the minute? $1.99 is steep for a minute.
    If I used 200 minutes at that rate, my bill would come to $398 plus taxes and other expenses.
    3. What can the party I am calling expect? Would they have to have a world phone too?
    I am just really trying to figure out how much this would cost.
    Thanks,
    Freddy

    I would strongly recommend AGAINST trying to use your Verizon service overseas. As you've discovered, the per-minute rates are quite high, and your unlimited data plan will not work there; you would need to pay roaming rates for data as well, and they are no more palatable than the voice rates for international service.
    Once you get to South Korea, you should research local carriers and get service from one that is affordable and has the services you need, probably a prepaid carrier.
    To your last question, Verizon customers are not charged international rates for INCOMING international calls, so you could call back to the States from your local cellphone, landline, or VoIP provider, and they would be charged the same as if they were getting a call from another phone in the US.
