Whole load and data package

Hi Experts,
In start routines, does a single DATA_PACKAGE loop mean that one loop pass is processed per 'logical partition' (data package) of a single InfoPackage load? I would think so, but I'd just like to clarify.
If that is the case, what should be done if the processing logic requires reading all the data of an InfoPackage load before it can do its calculation?
That is, processing can only be done after knowing all the records in one InfoPackage load.
Thanks,
Suzie

Hi Suzie,
For this you first have to load into an ODS and then loop the ODS back onto itself, so you can analyze all the records; then you can load into the subsequent structure.
To loop the ODS you can build update rules that start from the ODS and arrive back at the ODS itself.
Ciao.
Riccardo.
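
A minimal sketch of what the start routine of such ODS-to-ODS update rules could look like, assuming a hypothetical ODS whose active table is /BIC/AMYODS00 (all names are illustrative only):

* Hypothetical: /BIC/AMYODS00 is the active table of the ODS just loaded.
DATA: lt_all TYPE STANDARD TABLE OF /bic/amyods00.

* Read the complete load from the active table, so the calculation can
* see every record of the request, not just the current data package.
SELECT * FROM /bic/amyods00 INTO TABLE lt_all.

* ... perform the cross-record calculation on lt_all here,
* then apply the results to the records in DATA_PACKAGE ...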

Similar Messages

  • Library of flash files, loading, and data assignment?

    Hi; noob here...
    Here is what I want to accomplish:
    what little flash I know makes me just dangerous enough to
    want to be able to have either a database of vector information, or
    a .fla library of shapes (or symbols, I don't know) that represent
    counties, districts, states, watersheds (and such) of the USA.
    These can be loaded singly (for example if we want only to show the
    state of Maine, and not all of the US on a single demo) and
    assembled into separate layers and either loaded as or turned into
    symbols.
    These symbols then are altered (re-colored, shifting
    transparency, even moving, shrinking or exploding, or, as a blob,
    being panned in one direction or another) whenever certain buttons
    are pressed that load new XML data. For example, let us say that
    Oxford County in Maine has a 2% Native American population, and
    Penobscot County has 4%... both had been loaded by actionscript and
    imported into the demonstration. Then, we change the color of those
    two counties to whatever color we would want to represent that one
    is 2% and the other 4%. Then, on another click or menu selection
    within the flash document, those values are dismounted, and new XML
    loaded and applied to the same symbols... and later on, these
    county symbols might be unloaded and new ones (for New Hampshire or
    California or the whole U.S., who knows) mounted.
    What I'm hoping people can tell me is what these processes
    might be called, or link me to tutorials that cover similar
    concepts, or otherwise point me to documentation or give
    suggestions on how I might tackle this. I keep scouring the
    internets (hehe) in search of similar things, but I don't even have
    the correct terminology to do an accurate search. So just knowing
    what to search for may be of great help! Thanks in advance.

    Unfortunately Flash does not keep an autosave file for you
    (actually most software doesn't do this - to my knowledge only
    Microsoft Office and a couple of others do).
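
    What the poster describes is usually called loading XML at runtime and recoloring movie clips with a ColorTransform, which should give useful search terms. A minimal ActionScript 3 frame-script sketch, with every file, element, and instance name hypothetical:

    // Sketch: load counties.xml (hypothetical) and recolor county symbols
    // already on the stage according to the loaded percentages.
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onXmlLoaded);
    loader.load(new URLRequest("counties.xml"));

    function onXmlLoaded(e:Event):void {
        var data:XML = new XML(loader.data);
        // expects entries like <county name="oxford" pct="2"/>
        for each (var c:XML in data.county) {
            var clip:MovieClip = MovieClip(getChildByName(c.@name + "County"));
            var ct:ColorTransform = clip.transform.colorTransform;
            // hypothetical mapping from the percentage to a shade of red
            ct.color = 0xFF0000 - uint(Number(c.@pct) * 0x001010);
            clip.transform.colorTransform = ct;
        }
    }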

  • SQL*Loader and data enclosed by single quotes

    Hello,
    I need to migrate some data from Sybase to Oracle. Can't use the Migration Workbench because this is Sybase's SQL Anywhere product. We are therefore trying to load via SQL*Loader.
    Sybase outputs its data into text files and encloses it with single quotes, like:
    'ABC123',
    'UR94LL',
    '7YUHII'
    We are running into a problem because we cannot figure out how to use the OPTIONALLY ENCLOSED BY parameter with single quotes. If we just use the FIELDS TERMINATED BY parameter, then the entire string, including the quotation marks, gets loaded into the Oracle table. This causes bad records to be created when the data is exactly the length of the column width - the two quotation marks make the field longer than the table allows. Also, the data shouldn't have quotes around it in normal situations.
    We can do 'Find and Replace' to replace the single quotation marks with double quotations, but some of the Sybase tables are huge, and we'd like to avoid having to open and edit them. However, if this is the only way to go, then we'll have to use it. I just wondered whether anyone had run into this before and been able to solve it.
    Thanks.
    -melissa

    We are running into a problem because we cannot figure out how to use the OPTIONALLY ENCLOSED BY parameter with single quotes
    Try
    OPTIONALLY ENCLOSED BY X'27'
    http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96652/ch06.htm#1015083
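
    A minimal control file sketch applying this (file, table, and column names are hypothetical):

    LOAD DATA
    INFILE 'sybase_export.txt'
    INTO TABLE target_tab
    -- X'27' is the hex code for the single-quote character, so the enclosing
    -- quotes are stripped before the value is loaded
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY X'27'
    (code_col)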

  • SQL*LOADER and date formatted data

    Hi there, I don't want to load complex things in my table - just a simple date....
    The control file is:
    load data
    infile 'c:\data\mydata.csv'
    into table test_table
    fields terminated by "," optionally enclosed by '"'
    ( sampledate, name )
    The mydata.csv file is:
    30-12-2003, Test1
    31-12-2003, Test2
    However, this format is not being accepted by the database... I tried other date formats (e.g., 30122003, 20-DEC-2003, 30/12/2004, etc.) but always got a 'data error'.
    Thanks for any suggestions.

    Example:
    Control File for Case Study 3
    This control file loads the same table as in case 2, but it loads three additional columns (hiredate, projno, and loadseq). The demonstration table emp does not have columns projno and loadseq. To test this control file, add these columns to the emp table with the command:
    ALTER TABLE emp ADD (projno NUMBER, loadseq NUMBER);
    The data is in a different format than in case 2. Some data is enclosed in quotation marks, some is set off by commas, and the values for deptno and projno are separated by a colon.
    -- Variable-length, delimited, and enclosed data format
    LOAD DATA
    INFILE *
    APPEND
    INTO TABLE emp
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    (empno, ename, job, mgr,
    hiredate DATE(20) "DD-Month-YYYY",
    sal, comm, deptno CHAR TERMINATED BY ':',
    projno,
    loadseq SEQUENCE(MAX,1))
    BEGINDATA
    7782, "Clark", "Manager", 7839, 09-June-1981, 2572.50,, 10:101
    7839, "King", "President", , 17-November-1981, 5500.00,, 10:102
    7934, "Miller", "Clerk", 7782, 23-January-1982, 920.00,, 10:102
    7566, "Jones", "Manager", 7839, 02-April-1981, 3123.75,, 20:101
    7499, "Allen", "Salesman", 7698, 20-February-1981, 1600.00, 300.00, 30:103
    7654, "Martin", "Salesman", 7698, 28-September-1981, 1312.50, 1400.00, 3:103
    7658, "Chan", "Analyst", 7566, 03-May-1982, 3450,, 20:101
    Joel Pèrez
    http://otn.oracle.com/experts
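
    Applied to the control file from the question, the relevant change is just an explicit date mask (a sketch, assuming the dates really are day-month-year):

    load data
    infile 'c:\data\mydata.csv'
    into table test_table
    fields terminated by "," optionally enclosed by '"'
    -- the mask tells SQL*Loader how to parse '30-12-2003'
    ( sampledate DATE "DD-MM-YYYY", name )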

  • SQL Loader and Dates

    Hi,
    I have a date field stored in a text file. Can anyone tell me what the control file entry for that field should be (assuming a format of DDMMYYYY)?
    thanks in advance,
    ED.

    Check this out.
    http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10825/ldr_cases.htm#sthref1640
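
    For the DDMMYYYY format in the question, the control file entry would be along these lines (the column name is hypothetical):

    ( mydate DATE "DDMMYYYY" )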

  • What are the difference between Oracle IMP, SQL*Loader and Data Pump

    It's hard to decide which should be used for a flat-file import. Can anybody give some suggestions, or are there any guidelines for choosing among these?
    Thanks

    You might want to take a look at this. It should answer all of your questions.
    Tom

  • Automate EIS dim builds and data loads

    I want to automate the dimension builds and data loads from my ETL tool (DTS). I have not been able to find anything about scripting EIS automation in the documentation. Is there any?

    What you can do is go into the EIS metadata outline and create a member load and a data load script. Do this by selecting the Outline menu item, then selecting Member Load. Click Next; on this screen, select only "save load script". Click the "Save scripts" button to give it a name, then click Finish. Repeat for the data load script. (If you are using ASO cubes, you must use separate scripts; you can't do both in one script.) Then create a batch file to run the member load and data loads. In DTS, use an Execute Process task to run the batch file.

  • Issue in Update routine due to Data Package

    We have this peculiar situation.
    The scenario is ..
    We have to load data from ODS1 to ODS2.
    The data package size is 9980 while transferring data from ODS1 to ODS2.
    In the update rule we have some calculations and we rank the records based on these calculations.
    The ODS key for both ODS1 and ODS2 is the same, i.e. Delivery Number, Delivery Item & Source System.
    For example, a Delivery Number has 12 Delivery Items.
    These Delivery Items are in different Data Packages, namely Data Package 1 and Data Package 4.
    So instead of having the ranks as 1 to 10, it calculates them as 1 to 5 for one set of items and 1 to 5 for the second.
    But what we require is a rank of 1 to 10.
    This is due to the fact that the items are in different Data Packages.
    In this case the ABAP routine is working fine, but the Data Package split is the problem.
    Can anybody suggest an alternative solution to this issue?
    Thanks in advance for your assistance.

    Code for inter-data-package treatment:
    PROGRAM UPDATE_ROUTINE.
    *$*$ begin of global - insert your declaration only below this line  -
    * TABLES: ...
    * DATA:   ...
    DATA: v_packet_nbr TYPE i VALUE 1.
    DATA:
      g_requnr  TYPE rsrequnr.
    DATA:
      l_is        TYPE string VALUE 'G_S_IS-RECNO',
      l_requnr    TYPE string VALUE 'G_S_MINFO-REQUNR'.
    FIELD-SYMBOLS: <g_f1> TYPE ANY,
                   <g_requnr> TYPE ANY.
    TYPES:
      BEGIN OF global_data_package.
            INCLUDE STRUCTURE /bic/cs8ydbim001.
    TYPES: recno   LIKE sy-tabix,
      END OF global_data_package.
    DATA lt_data_package_collect TYPE STANDARD TABLE OF global_data_package.
    DATA ls_datapack TYPE global_data_package.
    * data package enhancement declaration
    TYPES: BEGIN OF datapak.
            INCLUDE STRUCTURE /bic/cs8ydbim001.
    TYPES: END OF datapak.
    DATA: datapak1 TYPE STANDARD TABLE OF datapak,
          wa_datapak1 LIKE LINE OF datapak1.
    * declarations for the business rules implementation
    TYPES: BEGIN OF ty_ydbsdppx.
            INCLUDE STRUCTURE /bic/aydbsdppx00.
    TYPES: END OF ty_ydbsdppx.
    DATA: it_ydbsdppx TYPE STANDARD TABLE OF ty_ydbsdppx WITH HEADER LINE,
          wa_ydbsdppx TYPE ty_ydbsdppx,
          temp TYPE /bic/aydbim00100-price,
          lv_tabix TYPE sy-tabix.
    *$*$ end of global - insert your declaration only before this line   -
    * The following definition is new in BW 3.x.
    TYPES:
      BEGIN OF DATA_PACKAGE_STRUCTURE.
         INCLUDE STRUCTURE /BIC/CS8YDBIM001.
    TYPES:
         RECNO   LIKE sy-tabix,
      END OF DATA_PACKAGE_STRUCTURE.
    DATA:
      DATA_PACKAGE TYPE STANDARD TABLE OF DATA_PACKAGE_STRUCTURE
           WITH HEADER LINE
           WITH NON-UNIQUE DEFAULT KEY INITIAL SIZE 0.
    FORM startup
      TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
               MONITOR_RECNO STRUCTURE RSMONITORS "monitoring with record number
               DATA_PACKAGE STRUCTURE DATA_PACKAGE
      USING    RECORD_ALL LIKE SY-TABIX
               SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
      CHANGING ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
    *$*$ begin of routine - insert your code only below this line        -
    * fill the internal tables "MONITOR" and/or "MONITOR_RECNO"
    * to make monitor entries
      TABLES: rsmonfact.
      TYPES:
        BEGIN OF ls_rsmonfact,
          dp_nr TYPE rsmonfact-dp_nr,
        END OF ls_rsmonfact.
      DATA: k TYPE i,
            v_lines_1 TYPE i,
            v_lines_2 TYPE i,
            v_packet_max TYPE i.
    * declaration of internal tables
      DATA: it_rsmonfact TYPE STANDARD TABLE OF ls_rsmonfact.
    ******************* INTER-PACKAGE COLLECTION TREATMENT *******************
    * determine how many data packages belong to the current request
      ASSIGN (l_requnr) TO <g_requnr>.
      SELECT dp_nr FROM rsmonfact
        INTO TABLE it_rsmonfact
        WHERE rnr = <g_requnr>.
      DESCRIBE TABLE it_rsmonfact LINES v_packet_max.
      IF v_packet_nbr < v_packet_max.
    * not the last package yet: collect its records and pass on an empty one
        APPEND LINES OF DATA_PACKAGE[] TO lt_data_package_collect[].
        CLEAR: DATA_PACKAGE.
        REFRESH DATA_PACKAGE.
        v_packet_nbr = v_packet_nbr + 1.
        CLEAR: MONITOR[], MONITOR.
        MONITOR-msgid = '00'.
        MONITOR-msgty = 'I'.
        MONITOR-msgno = '398'.
        MONITOR-msgv1 = 'All data_packages have been gathered in one. '.
        MONITOR-msgv2 = 'The last DATA_PACKAGE contains all records.'.
        APPEND MONITOR.
      ELSE.
    * last data package => perform the business rules
        IF v_packet_max > 1.
          APPEND LINES OF DATA_PACKAGE[] TO lt_data_package_collect[].
          CLEAR: DATA_PACKAGE[], DATA_PACKAGE.
          k = 1.
    * put all collected packages back into DATA_PACKAGE, renumbering recno
          LOOP AT lt_data_package_collect INTO ls_datapack.
            ls_datapack-recno = k.
            APPEND ls_datapack TO DATA_PACKAGE.
            k = k + 1.
          ENDLOOP.
          CLEAR: lt_data_package_collect.
          REFRESH: lt_data_package_collect.
        ENDIF.
    * sort the global data package and keep only the first occurrence of each
    * record per material/plant/year
        SORT DATA_PACKAGE BY material plant calyear calmonth.
        DELETE ADJACENT DUPLICATES FROM DATA_PACKAGE
              COMPARING material plant calyear.
        SELECT * FROM /bic/aydbsdppx00
            INTO TABLE it_ydbsdppx
            FOR ALL ENTRIES IN DATA_PACKAGE
              WHERE material = DATA_PACKAGE-material
                AND plant    = DATA_PACKAGE-plant
                AND calyear  = DATA_PACKAGE-calyear.
    * enhance DATA_PACKAGE with the target's additional fields
        LOOP AT DATA_PACKAGE.
          CLEAR: wa_datapak1, wa_ydbsdppx.
          MOVE-CORRESPONDING DATA_PACKAGE TO wa_datapak1.
          READ TABLE it_ydbsdppx INTO wa_ydbsdppx
            WITH KEY material = DATA_PACKAGE-material
                        plant = DATA_PACKAGE-plant
                      calyear = DATA_PACKAGE-calyear.
          IF sy-subrc NE 0.       "new product price
            APPEND wa_datapak1 TO datapak1.
          ELSE.                   "a product price already exists
    * keep the oldest one (for each year), or overwrite the price if same month
            IF wa_ydbsdppx-calmonth GE DATA_PACKAGE-calmonth.
              APPEND wa_datapak1 TO datapak1.
            ENDIF.
          ENDIF.
        ENDLOOP.
    * copy the filtered records back into DATA_PACKAGE, renumbering recno
        REFRESH DATA_PACKAGE.
        k = 1.
        LOOP AT datapak1 INTO wa_datapak1.
          CLEAR DATA_PACKAGE.
          MOVE-CORRESPONDING wa_datapak1 TO DATA_PACKAGE.
          DATA_PACKAGE-recno = k.
          APPEND DATA_PACKAGE.
          k = k + 1.
        ENDLOOP.
      ENDIF.
    * if ABORT is not equal to zero, the update process will be canceled
      ABORT = 0.
    *$*$ end of routine - insert your code only before this line         -
    ENDFORM.

  • Maximum package size for data packages was exceeded and Process terminated

    Hello Guru,
    When I execute the process chain I get the message 'Maximum package size for data packages was exceeded and process terminated'. Can anybody help me with how to proceed in this case?
    Thanks & Regards,
    Suresh.

    Hi,
    When the load is not getting processed due to a huge volume of data, or too many records per data packet, please try the options below.
    1) Reduce the IDoc size to 8000 and the number of data packets per IDoc to 10. This can be done in the InfoPackage settings.
    2) Run the load only to the PSA.
    3) Once the load is successful, push the data to the targets.
    In this way you can overcome this issue.
    You can also try the RSCUSTV* transactions (where * is an integer) to change the data load settings. To change the data package size for extraction, use transaction RSCUSTV6. To change the data package size for uploads from an R/3 system, set the value in R/3 Customizing: transaction SBIW --> General settings --> Maintain Control Parameters for Data Transfer (source system specific).
    Hope this helps.
    Thanks,
    JituK

  • Stage tab delimited CSV file and load the data into a different table

    Hi,
    I'm pretty new to writing PL/SQL packages.
    We are using Application Express for our development. We get CSV files which are stored as BLOB content in a table. I need to write a trigger that is executed once the user uploads the file, parses through the BLOB content, and uploads or stages the data in a different table.
    I would like to see if there is a tutorial or article that explains the above process with an example or sample code. Any help in this regard will be highly appreciated.

    Hi,
    This is slightly unusual but at the same time easy to solve. You can read through a BLOB using the dbms_lob package, which is one of the Oracle-supplied packages. This is presumably the bit you are missing, as once you know how to read a LOB the rest is programming 101.
    Alternatively, you could write the lob out to a file on the server using another built in package called utl_file. This file can be parsed using an appropriately defined external table. External tables are the easiest way of reading data from flat files, including csv.
    I say unusual because why are you loading a csv file into a blob? A clob is almost understandable but if you can load into a column in a table why not skip this bit and just load the data as it comes in straight into the right table?
    All of what I have described is documented functionality, assuming you are on 9i or greater. But you didn't provide a version so I can't provide a link to the documentation ;)
    HTH
    Chris
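
    A minimal sketch of the dbms_lob route (table, column, and key values are hypothetical; the tab-splitting and INSERT are the "programming 101" part):

    DECLARE
      l_blob  BLOB;
      l_clob  CLOB;
      l_dest  INTEGER := 1;
      l_src   INTEGER := 1;
      l_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      l_warn  INTEGER;
      l_pos   INTEGER := 1;
      l_eol   INTEGER;
      l_line  VARCHAR2(32767);
    BEGIN
      -- hypothetical staging table holding the uploaded file
      SELECT file_blob INTO l_blob FROM file_stage WHERE id = 1;
      -- convert the BLOB to a CLOB so it can be handled as text
      DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
      DBMS_LOB.CONVERTTOCLOB(l_clob, l_blob, DBMS_LOB.LOBMAXSIZE,
                             l_dest, l_src, DBMS_LOB.DEFAULT_CSID,
                             l_lang, l_warn);
      -- walk the CLOB line by line
      LOOP
        l_eol := DBMS_LOB.INSTR(l_clob, CHR(10), l_pos);
        EXIT WHEN l_eol IS NULL OR l_eol = 0;
        l_line := DBMS_LOB.SUBSTR(l_clob, l_eol - l_pos, l_pos);
        -- split l_line on CHR(9) (tab) and INSERT into the target table here
        l_pos := l_eol + 1;
      END LOOP;
      DBMS_LOB.FREETEMPORARY(l_clob);
    END;
    /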

  • Update and Overwrite Option in Import Master Data Package

    Hello,
    In the package Import Master Data there is the option "Select method for loading master data". I don't quite understand the difference between the "Update" and "Overwrite" options. I have tested both and I get the same result.
    Can someone explain to me what is the difference between both options ?
    Thanks,
    Mathieu

    Option overwrite: all properties are overwritten. If they are not maintained in the source file, they will be deleted in the member sheet
    Option update: only the properties maintained in the source file are overwritten. The other ones are not changed.
    D

  • SQL loader and executing the package

    Hi all,
    I am trying to load the data from a flat file and execute a package from a .cmd / .bat file.
    Here is the script zip_load.cmd:
    sqlldr CONTROL=zip.ctl LOG=zip.log DISCARD=Zip.dsc BAD=Zip.bad userid=scott/tiger@mydb
    sqlplus scott/tiger@mydb
    --sqlplusw scott/tiger@mydb
    EXEC pkg_zip_log.prc_zip_log ('zip_load', NULL, 'address', 'begin', 'begin prc_zip');
    commit;
    EXEC pkg_zip_load.prc_zip_maint;
    pkg_zip_log.prc_zip_log ('zip_load', NULL, 'address', 'end', 'begin prc_zip');
    commit;
    exit;
    The above script stops after loading the data and logging in to the database from SQL*Plus.
    sqlldr CONTROL=zip.ctl LOG=zip.log DISCARD=Zip.dsc BAD=Zip.bad userid=scott/tiger@mydb
    sqlplus scott/tiger@mydb
    It executes the above 2 statements and then stops at the
    SQL prompt (SQL>).
    Could you guys let me know the fix for executing all the statements in one step?
    Thanks in adv.
    bye

    It happens because SQL*Plus doesn't know anything about the lines after the connect string. It's just waiting for your input. Here is an example of how to make it work.
    I've created a test package:
    create or replace package test
    as
      procedure p1;
    end;
    /
    create or replace package body test
    as
      procedure p1
      as
      begin
        dbms_output.put_line('Hello!');
      end;
    end;
    /
    Then on my Unix machine I created a file test.sh:
    sqlplus scott/tiger@prod << EOF
    set serveroutput on
    exec test.p1
    quit
    EOF
    When I run it from command line:
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Oracle Data Mining option
    JServer Release 9.2.0.1.0 - Production
    SQL> SQL> Hello!
    SQL> Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Oracle Data Mining option
    JServer Release 9.2.0.1.0 - Production
    Voila!
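
    On Windows, where this here-document syntax is not available in a .cmd file, the same idea (a sketch, with hypothetical file names) is to put the SQL statements in a script file and hand it to SQL*Plus:

    rem zip_load.cmd - run SQL*Loader, then let SQL*Plus execute a script
    sqlldr CONTROL=zip.ctl LOG=zip.log DISCARD=Zip.dsc BAD=Zip.bad userid=scott/tiger@mydb
    sqlplus scott/tiger@mydb @zip_calls.sql

    Here zip_calls.sql would contain the EXEC calls, the COMMITs, and a final EXIT, so SQL*Plus runs everything and terminates instead of waiting at the prompt.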

  • Loaded data amount into cube and data monitor amount

    Hi,
    when I load data into the cube the inserted data amount in the administrator section shows 650000 data sets. The monitor of that request shows a lot of data packages. When I sum the data packages, the sum is about 700000 data sets.
    Where is the difference coming from?
    Thanks!

    Hi,
    If it is a full load to the cube, all the records are updated in it, since in a cube data can be overwritten.
    If it is a delta load and you want to see why there is a difference between the records transferred and the records added in the cube, go to the manage tab of the DSO, open the contents tab, and click the Change Log button at the bottom. Check the number of entries in that table: those entries are the records added to the cube, since only these records are new; other records with the same key are already present in the cube.

  • Data package and data packet

    Hi,
    I want to know the difference between a data package and a data packet, and when each comes up in SAP BW.
    With regards,
    Tushar

    Hello,
    The term data package relates to DTPs, which are used to load data from the PSA to further data targets.
    Start and end routines work at package level, so the routine runs for each package one by one. By default, packages contain data sorted on keys (the non-unique keys (characteristics) of the source or target), and by setting semantic keys you can change this order. So a package with more data will take more time to process than a package with less data.
    The term data packet relates to InfoPackages, which are used to load data from the source system into BI (the PSA).
    As per the SAP standard, we prefer to have 50,000 records per data packet.
    For every data packet, a commit and save is performed --- so fewer data packets are preferable.
    If you have 100,000 records per data packet and there is an error in the last record, the entire packet fails.
    Hope it helps!

  • Data Package 000001 : sent, not arrived. No data load in BW 7

    Hi gurus... I am working with BW 7 and I already have the complete data flow from SRM to BW: some Business Content extractors and some other generic extractors. We have created transformations, InfoPackages and DTPs; in fact we have loaded master data and transactional data. But since last Monday the data load is not working any more; now every time we manually run the InfoPackage the following warning message appears:
    Data Package 000001 : sent, not arrived
    and the data load just waits for an answer from SRM (which is the source system), or SRM waits for a request from BW.
    We have reactivated the extractors in SBIW, and replicated and reactivated them in BW (we also reactivated the transformations), but the data load is still not working. Then we also regenerated the DataSource, but it does not work.
    Do you have an idea of what is happening, or what the problem could be?
    Thanks

    Check transaction SMQA in the source system. The communication might not be working. If there are any entries in red, then select 'Execute LUWs' from the menu to process them manually.
    Read the error and try to resolve it first, though.
