Exp/Imp alternatives for large amounts of data (30GB)

Hi,
I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the imp/exp utilities with 10g R2. The export process is OK, but what's killing us is the time it takes to transfer, unzip, and import 32GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
I haven't used Data Pump, but I've heard it doesn't offer much benefit over the old imp/exp utilities when it comes to saving time. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox that I should be considering?
Thanks
brian

Hi,
"I haven't used Data Pump, but I've heard it doesn't offer much benefit over the old imp/exp utilities when it comes to saving time."
Data Pump will be faster for a couple of reasons. It uses direct path to unload the data. Data Pump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11g you can also compress the dump files as you are exporting, which would remove your zip step. (Both data and metadata compression are available in 11g; I think metadata compression is available in 10.2.)
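Something along these lines could replace the current export/zip/copy/import chain (a sketch only; the directory object, schema and file names are placeholders, and COMPRESSION=ALL needs 11g with the Advanced Compression option, while 10.2 only offers COMPRESSION=METADATA_ONLY):

expdp system DIRECTORY=dump_dir DUMPFILE=nightly_%U.dmp LOGFILE=nightly_exp.log SCHEMAS=app_owner PARALLEL=4 COMPRESSION=ALL

impdp system DIRECTORY=dump_dir DUMPFILE=nightly_%U.dmp LOGFILE=nightly_imp.log SCHEMAS=app_owner PARALLEL=4 TABLE_EXISTS_ACTION=REPLACE

With PARALLEL you want multiple dump files (the %U substitution) so each worker process can write and read its own file.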
As far as transportable tablespaces go, yes, they are an option. There are some requirements, but if it works for you, all you will be exporting is the metadata and no data. The data is moved from source to target by copying the datafiles themselves. One of the biggest requirements is that the tablespaces need to be read only while the export job is running. This is true for both exp/imp and expdp/impdp.
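A minimal sketch of the transportable tablespace flow, assuming a single tablespace called APP_DATA and made-up file paths (check first with DBMS_TTS.TRANSPORT_SET_CHECK that the tablespace set is self-contained):

EXEC DBMS_TTS.TRANSPORT_SET_CHECK('APP_DATA', TRUE);
ALTER TABLESPACE app_data READ ONLY;
-- export only the metadata:
--   expdp system DIRECTORY=dump_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=app_data
-- copy tts.dmp plus the datafiles to the target, then on the target:
--   impdp system DIRECTORY=dump_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/u01/oradata/app_data01.dbf'
ALTER TABLESPACE app_data READ WRITE;

Because the datafiles are copied rather than reloaded row by row, the import time is largely independent of the data volume, which is where the big savings come from.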

Similar Messages

  • What java collection for large amount of data and user customizable record

    I'm trying to write an application which operates on a large amount of data. I want the user to be able to customize the data structure (record) from different types of variables (float, int, bool, string, enums). These records should be stored in some kind of array. Size of a record: 1-200 variables; size of the array of those records: about 100000 items (one record every second through the whole day). I want these data stored in some embedded database (sqlite, hsqldb) with access using simple JDBC. Could you give me some advice on how to design those data structures? Sincerely yours :)
    Ok, maybe I give some example. This will be some C++ code.
    I made an interface:
    typedef unsigned char BYTE;               // storage for the type tag
    enum ParamType { INTEGER, FLOAT, BOOL, STRING };
    class CParamI {
    public:
         virtual ~CParamI() {}
         virtual string toString() = 0;
         virtual void addValue( CParamI * ) = 0;
         virtual void setValue( CParamI * ) = 0;
         virtual BYTE getType() = 0;
    };
    Then I made a template class derived from the CParamI interface:
    template <class T>
    class CParam : public CParamI {
    public:
         CParam();
         void setValue( T val ) { value = val; }
         T getValue() { return value; }
         string toString() { std::ostringstream os; os << value; return os.str(); }
         void addValue( CParamI *src ) { setValue( src ); } // simplistic stub
         void setValue( CParamI *src ) {
              if ( itemType == src->getType() ) {
                   CParam<T> *ptr = static_cast<CParam<T>*>(src);
                   value = ptr->value;
              }
         }
    private:
         BYTE itemType;
         T value;
    };
    A sample constructor for the <int> specialization:
    template<> CParam<int>::CParam() {
         itemType = ParamType::INTEGER;
    }
    This solution makes it possible for me to keep a collection of CParamI pointers:
    std::vector<CParamI*> myCollection;
    CParam<int> *pi = new CParam<int>();
    pi->setValue(10);
    myCollection.push_back(pi);
    Is this a correct solution? My main problem is getting data back out of the collection: I have to check the data type using the getType() method of the CParamI interface.
    Could you please give me some advice, some ideas on how to do this right in Java?

    If you have the requirement that you have to be able to configure on the fly, then what I've done in the past is just put everything into data pairs in a list: something along the lines of (<Vector>, <String>), where the Vector would store your data and the String would contain a data type. I would then make a checker to validate the input according to the SQL data types that I want to support on the project. It's not a big deal with the amount of data you are talking about.
    The problem you're going to have is when you try to allow dynamic definition, on the fly, of data being input to a table that has already been defined. Your DB will not support that, unless you just store that data pair, which I do not suggest.
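
    A minimal Java sketch of that data-pair idea (the class and method names here are made up for illustration):

    import java.util.ArrayList;
    import java.util.List;

    // One user-defined field: a SQL type tag plus the values recorded for it.
    class DataColumn {
        private final String sqlType;                        // e.g. "INTEGER", "REAL", "TEXT"
        private final List<Object> values = new ArrayList<Object>();

        DataColumn(String sqlType) { this.sqlType = sqlType; }

        // Validate on the way in, so reads never need a type check.
        void add(Object v) {
            boolean ok =
                ("INTEGER".equals(sqlType) && v instanceof Integer) ||
                ("REAL".equals(sqlType)    && v instanceof Double)  ||
                ("TEXT".equals(sqlType)    && v instanceof String);
            if (!ok) throw new IllegalArgumentException(v + " is not " + sqlType);
            values.add(v);
        }

        String getSqlType() { return sqlType; }
        List<Object> getValues() { return values; }
    }

    A record is then just a List<DataColumn>, and the type tag doubles as the column type when you CREATE TABLE in sqlite/hsqldb.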

  • XML-Export Error for large amount of data

    Hi there...
    I have an application process which runs on demand (button) and exports data (from a SQL query) into a file (.xls).
    The result is formatted, and the export works fine as long as the query returns just a small amount of data, approx. 8 to 10 rows.
    As the result is stored in a CLOB, I output the data with "htp.prn" in a loop by "cutting" the CLOB into small pieces (varchar).
    However, as soon as the amount is bigger than the mentioned 8 to 10 rows, I get an error (sqlerrm: ORA-06502: PL/SQL: numeric or value error).
    I guess there must be something wrong with my loop or the way I "cut" the CLOB into pieces and output them.
    Maybe someone has a hint for me where to look at exactly!?
    Thanks in advance...
    Johnny
    Here is my code (I removed parts of it, which are not important for this issue):
    declare
    l_xml_header varchar2(32767);
    l_xml_body clob;
    l_xml_text varchar2(32767);
    l_xml_footer varchar2(32767);
    runner number;
    clob_size number;
    v_count number := 1; -- read offset into l_xml_body for dbms_lob.substr
    begin
    runner := 2;
    owa_util.mime_header( 'application/octet', FALSE);
    htp.p('Content-Disposition: attachment; filename="Test.xls"');
    owa_util.http_header_close;
    l_xml_header := '<?xml version="1.0" encoding="utf-8"?>'||chr(10)||
    '<?mso-application progid="Excel.Sheet"?>'||chr(10)||
    '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"'||chr(10)||
    'xmlns:o="urn:schemas-microsoft-com:office:office"'||chr(10)||
    'xmlns:x="urn:schemas-microsoft-com:office:excel"'||chr(10)||
    'xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"'||chr(10)||
    'xmlns:html="http://www.w3.org/TR/REC-html40">'||chr(10)||
    '<DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">'||chr(10)||
    '<Version>1.0</Version>'||chr(10)||
    '</DocumentProperties>'||chr(10)||
    '<ExcelWorkbook xmlns="urn:schemas-microsoft-com:office:excel">'||chr(10)||
    '<WindowHeight>8580</WindowHeight>'||chr(10)||
    '<WindowWidth>15180</WindowWidth>'||chr(10)||
    '<WindowTopX>120</WindowTopX>'||chr(10)||
    '<WindowTopY>45</WindowTopY>'||chr(10)||
    '<ProtectStructure>False</ProtectStructure>'||chr(10)||
    '<ProtectWindows>False</ProtectWindows>'||chr(10)||
    '</ExcelWorkbook>'||chr(10)||
    '<Styles>'||chr(10)||
    '<Style ss:ID="Default" ss:Name="Normal">'||chr(10)||
    '<Alignment ss:Vertical="Bottom"/>'||chr(10)||
    '<Borders/>'||chr(10)||
    '<Font ss:FontName="Arial" x:Family="Swiss"/>'||chr(10)||
    '<Interior/>'||chr(10)||
    '<NumberFormat/>'||chr(10)||
    '<Protection/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s22">'||chr(10)||
    '<Font x:Family="Swiss" ss:Bold="1"/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s67">'||chr(10)||
    '<Font ss:FontName="Arial" x:Family="Swiss" ss:Color="#FFFFFF"/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s157">'||chr(10)||
    '<Borders/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s158">'||chr(10)||
    '<Borders>'||chr(10)||
    '<Border ss:Position="Right" ss:LineStyle="Continuous" ss:Weight="1"/>'||chr(10)||
    '</Borders>'||chr(10)||
    '</Style>'||chr(10)||
    '</Styles>';
    for z in 1..1
    loop
      l_xml_body:=l_xml_body||'<Worksheet ss:Name="Worksheet1"> <Table x:FullColumns="1" x:FullRows="1" ss:DefaultColumnWidth="60">';
      l_xml_body:=l_xml_body||'<Row><Cell ss:StyleID="s163"><Data ss:Type="String">Colum1</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">Colum2</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">Colum3</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">...</Data></Cell>'||
                              '<Cell ss:StyleID="s166"><Data ss:Type="String">ColumN</Data></Cell></Row>';
      for z in (
      select
       a."Col1",
       a."Col2",
       b."Col3",
       b."ColN"
      from table1 a,
           table2 b
      where a.id = b.id
      )
      loop
          l_xml_body := l_xml_body||'<Row><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col1||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col2||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col3||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                             ...  ||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.ColN||'</Data></Cell>';
          l_xml_body := l_xml_body||'</Row>'||chr(10);
          runner := runner + 1;  
    end loop;
        l_xml_body := l_xml_body||'</Table>';
    end loop;
    clob_size           := dbms_lob.getlength(l_xml_body);
    htp.prn(l_xml_header);
    for i in 1..ceil(clob_size / 32767)
    loop
       l_xml_text := dbms_lob.SUBSTR (l_xml_body, 32767, v_count);
       HTP.prn (l_xml_text);
       v_count := v_count + 32767;
    end loop;
    htp.prn('</Worksheet></Workbook>');
    HTMLDB_APPLICATION.g_unrecoverable_error := TRUE;
       EXCEPTION
          WHEN OTHERS
          THEN
             OWA_UTIL.mime_header ('application/octet', FALSE);
             HTP.prn ('Content-Disposition: attachment; filename="Test.xls"');
             OWA_UTIL.http_header_close;
             HTMLDB_APPLICATION.g_unrecoverable_error := TRUE;
    end;

  • Mail for large amount of data

    Hey guys,
    I just switched to Mac (and I love it ;)) and now I am searching for a mail program to host my office POP mail account. I normally get around 5000 messages a month, totalling over 3GB, mostly pictures.
    Can Mail handle that amount of data, or will it corrupt my inbox?
    I am asking because under Windows, OE couldn't handle the amount of data, and neither could Thunderbird before the update to 1.5.

    Hi Ernie,
    Thanks for your ideas.
    more below:
    "Mick,
    The only place I know for sure that discusses this can be found at:
    http://docs.info.apple.com/article.html?artnum=25812"
    Yeah, I read this when I had a couple of In Boxes blow a year or two ago. Sadly, it wasn't really up to date, and I'd seen no warning or otherwise prior that this could happen. These days I try to keep the IN BOX as slim and trim as possible. Just wish I knew what the real specs were on how much is too much.
    "Other than an individual xxxx.mbox folder, I know of no limit that would apply, either to the total size of the Mail folder, nor to the number of messages."
    What's the limit for an individual xxxx.mbox folder?
    "Having said that, I never rely solely upon the Mail files to archive important attachments. I also keep my Mail folder active on more than one Mac, and thus on more than one hard drive. I do not, however, sync general Sent messages between the various Macs, on the theory that other backup practices protect any information that I would ever attach to send. Of course some On My Mac mailboxes are archiving both received and sent messages for certain subject areas."
    Sent messages are important to most of the businesses I support, so they all need to be there. Nice thought about archiving attachments. I wonder if it's possible to create a rule or Automator flow that would do that? Any attachment over 500k would get archived and then deleted from the message. I wonder if that's possible and simple enough...
    "My own Mail folder is in excess of 5 GB, and my largest individual xxxx.mbox barely exceeds 1 GB."
    Is each folder created "On My Mac" one mbox?
    Again, she has in excess of 20GB and I'm worried about all that weight in the program. Makes me long for the good old days of Eudora, which could handle massive sizes like that.
    thanks, Ernie.
    cheers,
    Mick

  • How do I remove spaces or special characters within a cell for large amounts of data

    Is there any shortcut to remove spaces between words and numbers within a cell?
    Example:
    Current: .5 lt PET (6)
    Need: .5ltPET(6)
    Is there any shortcut to remove special characters between numbers within a cell?
    Example:
    Current: 0--000--000--0
    Need: 00000000

    Thanks Wayne.
    I have been away from using Numbers or Excel for 4-5 years, so it is slowly coming back to me. I get that I need to use the SUBSTITUTE function; however, I am having trouble getting it to work.
    My Data
    ST PAULI 12/12 NR
    $27.16
    12oz NR(12)
    0--80660--95937--5
    ST PAULI 4/6/12 NR
    $28.76
    12oz NR(6)
    0--80660--95935--1
    ST PAULI DK 12/12 NR
    $0.00
    12oz NR(12)
    0--000--000--0
    ST PAULI DK 4/6/12 NR
    $28.76
    12oz NR(6)
    0--80660--95945--0
    ST PAULI N/A 4/6/12 NR
    $20.66
    12oz NR(6)
    0--80660--95955--9
    CAYMAN JACK 4/6/12 NR
    $29.12
    12oz NR(6)
    8--15829--01006--8
    CAYMAN JACK 8OZ/12PK CAN
    $23.18
    8oz CAN(12)
    8--15829--01061--7
    TGIF LIIT 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01043--3
    TGIF MARGARITA 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01047--1
    TGIF PINA COLADA 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01045--7
    TGIF STRAWBERRY 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01042--6
    BALLAST PT BIG EYE IPA 1/2 BBL
    $190.00
    KEG 1984oz (1/2 KEG)
    0--000--000--0
    BALLAST PT BIG EYE IPA 1/6 BBL
    $73.00
    KEG 660.1oz (1/6 KEG)
    0--000--000--0
    BALLAST PT BIG EYE IPA 4/6/12 CAN
    $33.00
    12oz CAN(6)
    6--72438--00052--7
    There are many more, but this is enough to show you. I need to remove all spaces from the first and third columns. I need to remove all "--" from the fourth. Where do I put the SUBSTITUTE function, and what are source-string, existing-string, new-string, and occurrence?
    Thank You for your help.
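
    For reference, SUBSTITUTE takes (source-string, existing-string, new-string, occurrence), and leaving occurrence out replaces every match. A sketch, assuming the data starts in row 2 with descriptions in column A, sizes in column C and codes in column D; put these in empty helper columns and fill down:

    =SUBSTITUTE(C2," ","")      removes every space, e.g. ".5 lt PET (6)" becomes ".5ltPET(6)"
    =SUBSTITUTE(D2,"--","")     removes every "--", e.g. "0--000--000--0" becomes "00000000"

    The same space-stripping formula pointed at column A handles the first column.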

  • Sort algorithm for LARGE amount of data?

    hi,
    I need a sorting scheme for the following situation:
    I have a data file where entries are in chunks of variable length. The size of each chunk is defined in the first 5 bytes as a string, so the length can be from 00001-99999, though it is usually around 1000-3000 bytes long. In reality it is never over 10000 bytes, but it is possible for it to be.
    Anyways, I need to sort these files according to the data found at certain displacements in these chunks. I will be sorting anywhere from 200,000 to 100,000,000 at a time. Time is an issue certainly, but if it takes a week to finish that is fine, I just need it to work.
    So, my problem is that none of the typical sorts will work for me (bubble, heap) as far as I can tell, because in those sorts I need to have the data loaded into memory, and this much data will overload the system. I have used, in the past, a C method that feeds these chunks to the sort function a few at a time, then makes files. Hence, not all chunks need to be loaded at once. Does anyone know of any solution to this problem? Any sort algorithms or sort classes that can handle this much data? Thanks!

    Ever tried the radix sort? It's got linear complexity.
    You can still work a chunk at a time, and simply separate the data into several different "buckets", each one identified by, oh, say, the unicode number for the first character in the chunk.
    You now have several smaller lists to sort, and when you're done, NO MERGING IS NECESSARY. Simply append the lists, because the main sets of lists are already sifted into different "buckets".
    Kinda like this:
    Create 256 files, and store in each one every record whose first character corresponds to that file's ASCII value. Then create 256 files for each of the original 256 files, and store in each one every record whose second character corresponds to that file's character.
    Etc, etc, etc.
    This is very memory intensive for storage, but in terms of run time complexity it is linear: you will make an explicit number of passes through the list of data. And, as you go along, the lists get shorter and shorter. So while it appears that you are making 256 ^ (max length of data) passes, you're really only making (max length of data) passes, with some additional overhead of creating extra multiple files.
    For that much data, I would definitely recommend a linear algorithm. Any other sorts would be extremely slow.
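
    A rough Java sketch of that first scatter pass (the 5-byte ASCII length header comes from the original post; the file names, and the assumption that the header counts only the payload that follows it, are mine):

    import java.io.*;

    // One radix pass: scatter variable-length chunks into one bucket file per
    // leading payload byte. Each chunk starts with a 5-byte ASCII length header.
    public class BucketPass {
        public static void main(String[] args) throws IOException {
            DataInputStream in = new DataInputStream(
                    new BufferedInputStream(new FileInputStream("data.bin")));
            OutputStream[] buckets = new OutputStream[256];
            byte[] header = new byte[5];
            try {
                while (true) {
                    in.readFully(header);                      // throws EOFException at end of input
                    int len = Integer.parseInt(new String(header, "US-ASCII").trim());
                    byte[] chunk = new byte[len];
                    in.readFully(chunk);
                    int key = chunk[0] & 0xFF;                 // bucket on the first payload byte
                    if (buckets[key] == null)
                        buckets[key] = new BufferedOutputStream(
                                new FileOutputStream("bucket_" + key + ".bin"));
                    buckets[key].write(header);                // keep the header for later passes
                    buckets[key].write(chunk);
                }
            } catch (EOFException done) {
                // clean end of input
            }
            for (OutputStream b : buckets) if (b != null) b.close();
            in.close();
        }
    }

    Each bucket file then gets the same treatment on the next byte, and at the end you just concatenate the bucket files in key order.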

  • Query For Large Amount of Data

    Hello All,
    I apologize in advance if I am not posting this in the right section. I am fairly new to APEX and database design. My goal is to create an inquiry screen for a database of people.
    I am running APEX 4.2 on 11g. The information is stored in 3 tables: Names, Demographics, Address. Each table has a PIN ID column that ties them all together. Each table has almost a million rows in it.
    Currently I have it set up so that the person types in the name they want to search, and it gets passed into a hidden page item on the next page, where there is a report with a select statement based on the page item. Everything works right now, however it is slow. I am having a 5-10 second delay before the results come up.
    My question is: is there a better way to set up these tables? What is the best way to make this faster?
    I'm sorry if this is a vague question, but any help, or a point in the right direction, will be greatly appreciated.
    Thank You !

    976533 wrote:
    Hello All,
    Welcome to the forum: please read the FAQ and forum sticky threads (if you haven't done so already), and update your forum profile with a real handle instead of "976533".
    When you have a problem you'll get a faster, more effective response by including as much relevant information as possible upfront. This should include:
    <li>Full APEX version
    <li>Full DB/version/edition/host OS
    <li>Web server architecture (EPG, OHS or APEX listener/host OS)
    <li>Browser(s) and version(s) used
    <li>Theme
    <li>Template(s)
    <li>Region/item type(s) (making particular distinction as to whether a "report" is a standard report, an interactive report, or in fact an "updateable report" (i.e. a tabular form))
    With APEX we're also fortunate to have a great resource in apex.oracle.com where we can reproduce and share problems. Reproducing things there is the best way to troubleshoot most issues, especially those relating to layout and visual formatting. If you expect a detailed answer then it's appropriate for you to take on a significant part of the effort by getting as far as possible with an example of the problem on apex.oracle.com before asking for assistance with specific issues, which we can then see at first hand.
    I apologize in advance if I am not posting this in the right section. I am fairly new to APEX and database design. My goal is to create an inquiry screen for a database of people.
    It might be more appropriate to the {forum:id=75} forum, so you should look at the following entries in their FAQ as well:
    <li>{message:id=9360002}
    <li>{message:id=9360003}
    I am running APEX 4.2 on 11g. The information is stored in 3 tables: Names, Demographics, Address. Each table has a PIN ID column that ties them all together. Each table has almost a million rows in it.
    Currently I have it set up so that the person types in the name they want to search, and it gets passed into a hidden page item on the next page, where there is a report with a select statement based on the page item. Everything works right now, however it is slow. I am having a 5-10 second delay before the results come up.
    My question is: is there a better way to set up these tables? What is the best way to make this faster?
    Are there suitable indexes on the tables?
    Does the report query use them?
    As described above, either reproduce the problem on apex.oracle.com, or post DDL to allow us to recreate the tables and indexes, along with the SQL from your report.
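
    For a name search across three ~1M-row tables joined on PIN ID, indexes along these lines are the usual first step (table and column names are guesses from the description; the function-based index only helps if the report's WHERE clause uses the same UPPER(...) expression):

    create index demographics_pin_ix on demographics (pin_id);
    create index address_pin_ix on address (pin_id);
    create index names_search_ix on names (upper(last_name), upper(first_name));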

  • Using Siebel-OPA connector BO mapping for large amount of data

    Hi,
    We plan to use the BO mapping approach to get multiple values from OPA to Siebel, which we plan to store as multiple records in Siebel.
    1. Is it advisable to do so using BO mapping?
    2. Would IO mapping be a better approach, considering the size of data involved?
    Thanks

    nilskil wrote:
    Hi,
    We plan to use the BO mapping approach to get multiple values from OPA to Siebel, which we plan to store as multiple records in Siebel.
    1. Is it advisable to do so using BO mapping?
    2. Would IO mapping be a better approach, considering the size of data involved?
    Thanks
    For passing lots of data between OPA and Siebel I would definitely recommend using an IO mapping. You will find it faster, and the return IO XML will also be easier to deal with.
    Cheers
    Frank

  • Looking for ideas for transferring large amounts of data between systems

    Hello,
    I am looking for ideas based on best practices for transferring large amounts of data in and out of a Netweaver based application.
    We have a new system we are developing in Netweaver that will utilize both the Java and ABAP stacks, and will require integration with other SAP and 3rd party systems. It is a standalone product that doesn't share any form of data store with other systems.
    We need to be able to support tens of millions of records of tabular data coming in and out of our system.
    Since we need to integrate with so many different systems, we are planning to use RFC as our primary interface in and out of the system. As it turns out, RFC is not good at dealing with such a large amount of data being pushed through a single call.
    We have considered a number of possible ideas, however we are not very happy with any of them. I would like to see what the community has done in the past to solve problems like this, as well as how SAP currently solves this problem in other applications like XI, BI, ERP, etc.

    Primoz wrote:
    Do you use KDE (Dolphin) 4.6 RC or 4.5?
    Also I've noticed that if I move/copy things with Dolphin they're substantially slower than if I use cp/mv. But cp/mv works fine for me...
    Also run Dolphin from a terminal to try and see what the problem is.
    Hope that helps at least a bit.
    Could you explain why Dolphin should be slower? I'm not attacking you, I'm just asking.
    Because I thought that Dolphin is just a "little" wrapper around the cp/mv/cd/ls applications/commands.

  • How do I pause an iCloud restore for app with large amounts of data?

    I am using an iPhone app which holds 10 GB of data (media files).
    Unfortunately, although all data was backed up, my iPhone 4 was faulty and needed to be replaced with a new handset. On restore, the 10GB of data takes a very long time to restore over wi-fi. If interrupted (I reached the halfway point during the night) to go to work or take the dog for a walk, I end up of course on 3G for a short period of time.
    Next time I am in a wi-fi zone, the app starts restoring again right from the beginning.
    How does anyone restore an app with large amounts of data, or pause a restore?

    You can use classifications but there is no auto feature to archive like that on web apps.
    In terms of the blog, like I have said to everyone that has posted about blog preview images:
    http://www.prettypollution.com.au/business-catalyst-blog
    Just one example of an image at the start of the blog post rendering out, not hard at all.

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use CachedRowSet on the bean for the page for getting the data. This works very well at the moment in my development environment. At the moment I am testing on a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has got to do with the logic of the paging. How do you specify which set of 20 records to extract in SQL?
    Thanks for your help!!
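
    For Oracle, the classic pattern is to wrap the ordered query twice and filter on ROWNUM; a sketch fetching the second page of 20 (table and column names are placeholders):

    select *
      from ( select q.*, rownum rn
               from ( select id, name
                        from people
                       order by name ) q
              where rownum <= 40 )
     where rn > 20;

    The inner query must do the ORDER BY before ROWNUM is applied, which is why the extra level of nesting is needed.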

  • Bex Report Designer - Large amount of data issue

    Hi Experts,
    I am trying to execute (on the Portal) a report made in BEx Report Designer, with about 30 000 pages, and the only thing I am getting is a blank page. Everything works fine at about 3000 pages. Do I need to set something to allow processing of such a large amount of data?
    Regards
    Vladimir

    Hi Sauro,
    I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
    Good luck,
    Ethan

  • Advice needed on how to keep large amounts of data

    Hi guys,
    I'm not sure what the best way is to make large amounts of data available to my android app on the local device.
    For example records of food ingredients, in the hundreds?
    I have read and successfully created .db's using this tutorial:
    http://help.adobe.com/en_US/AIR/1.5/devappsflex/WS5b3ccc516d4fbf351e63e3d118666ade46-7d49.html
    However, to populate the database I use Flash? So this kind of defeats the purpose of it. There is no point in me shifting a massive array of data from Flash to a SQL database when I could access the data directly from the AS3 array.
    So maybe I could create the .db with an external program? But then how would I include that .db in the apk file and deploy it to users' android devices?
    Or maybe I create an AS3 class with an XML object in it and use that as a means of data storage?
    Any advice would be appreciated

    You can use any means you like to populate your SQLite database, including using external programs, (temporarily) embedding a text file with SQL statements, executing some SQL from AS3 code, etc.
    Once you have populated your db, deploy it with your project:
    http://chrisgriffith.wordpress.com/2011/01/11/understanding-bundled-sqlite-databases-in-air-for-mobile/
    Cheers, - Jon -
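
    For the embedded-SQL route Jon mentions, the text file can be plain statements you execute once on first run; a sketch with made-up table and column names:

    CREATE TABLE ingredients (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        kcal  REAL
    );
    INSERT INTO ingredients (name, kcal) VALUES ('Flour', 364);
    INSERT INTO ingredients (name, kcal) VALUES ('Sugar', 387);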

  • Error in Generating reports with large amount of data using OBIR

    Hi all,
    we have integrated OBIR (Oracle BI Reporting) with OIM (Oracle Identity Management) to generate custom reports. Some of the custom reports contain a large amount of data (approx 80-90K rows with 7-8 columns), and the queries of these reports primarily use the audit tables and resource form tables. Now when we try to generate a report, it works fine with HTML, where the report is generated directly on the console, but when we try to generate and save the same report as PDF or Excel, it gives up with the following error:
    [120509_133712190][][STATEMENT] Generating page [1314]
    [120509_133712193][][STATEMENT] Phase2 time used: 3ms
    [120509_133712193][][STATEMENT] Total time used: 41269ms for processing XSL-FO
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Helvetica closed.
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Times-Roman closed.
    [120509_133712848][][PROCEDURE] FO+Gen time used: 41924 msecs
    [120509_133712848][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) is called.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) done. All inputs are cleared.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] End Memory: max=496MB, total=496MB, free=121MB
    [120509_133818606][][EXCEPTION] java.net.SocketException: Socket closed
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
    at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
    at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
    at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
    at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:304)
    at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
    at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at oracle.apps.xdo.servlet.util.IOUtil.readWrite(IOUtil.java:47)
    at oracle.apps.xdo.servlet.CoreProcessor.process(CoreProcessor.java:280)
    at oracle.apps.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:82)
    at oracle.apps.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:562)
    at oracle.apps.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:265)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:270)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:250)
    at oracle.apps.xdo.servlet.XDOServlet.doGet(XDOServlet.java:178)
    at oracle.apps.xdo.servlet.XDOServlet.doPost(XDOServlet.java:201)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:97)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    It seems that when the query processing takes some time, we face this issue. Do I need to perform any additional configuration to generate such reports?

    java.net.SocketException: Socket closed
         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
         at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
         at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
         at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
         at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
         at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
         at weblogic.servlet.internal.CharsetChunkOutput.implWrite(CharsetChunkOutput.java:396)
         at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:198)
         at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
         at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
         at com.tej.systemi.util.AroundData.copyStream(AroundData.java:311)
         at com.tej.systemi.client.servlet.servant.Newdownloadsingle.producePageData(Newdownloadsingle.java:108)
         at com.tej.systemi.client.servlet.servant.BaseViewController.serve(BaseViewController.java:542)
         at com.tej.systemi.client.servlet.FrontController.doRequest(FrontController.java:226)
         at com.tej.systemi.client.servlet.FrontController.doPost(FrontController.java:128)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:17
    (Please help find a solution to this issue; it's in production and we need it ASAP.)
    Thanks in Advance
    Edited by: 909601 on Jan 23, 2012 2:05 AM

  • With journaling, I have found that my computer is saving a large amount of data, logs of all the changes I make to files; how can I clean up these logs?

    For example, in Notes, I have written three notes; however, if I click on 'All On My Mac' in the sidebar, I see about 10 different versions of each note I make; it saves a version every time I add or delete a sentence.
    I also noticed that when I write an email, Mail saves about 10 or more draft versions before the final is sent.
    I understand that all this journaling provides a level of security and prevents data loss; but I was wondering, is there a function to clean up journal logs once in a while?
    Thanks
    Roz

    Are you using Microsoft Word? Microsoft thinks the users are idiots. They put up a lot of pointless messages that annoy & worry users. I have seen this message from Microsoft Word. It's annoying.
    As BDaqua points out...
    When you copy information via edit > copy, command + c, edit > cut, or command + x, you place the information on the clipboard. When you paste information, edit > paste or command + v, you copy information from the clipboard to your data file.
    If you edit > cut or command + x and you do not paste the information and you quit Word, you could be losing information. Microsoft is very worried about this. When you quit Word, Microsoft checks if there is information on the clipboard & if so, Microsoft puts out this message.
    You should be saving your work more than once a day. I'd save every 5 minutes. command + s does a save.
    Robert
