Loading History Data

Hi all,
I have data loaded in a BW flow from Nov 13 to date; it is a daily load. Now the client wants the data from before Nov 13 as well. One month of data is over 2 crore (20 million) records, and it is an FI-GL load. Since it is a daily load, I have only a small window of time to load the history data before the daily load happens, so I plan to load one month at a time.
How do I go about this? Do I need to do a Repair Full? I have not done one before; can anyone guide me? Is there any chance of the PSA getting corrupted? How will the delta work? Will it affect the daily load? Do I need to make the history request red in the PSA and then load it into my flow manually when the daily chain is not running, given that the FI-GL load feeds many other flows? This is in production, so if anything goes wrong, everything will be impacted.
Thanks,
Aditya

My understanding was this:
You enhanced the BW cube with 2 fields and moved it to production; everything was fine.
On the BPC side you loaded data from the enhanced BW cube using DM packages as usual, but afterwards the data in your BPC cube was doubled.
If yes, then check as suggested above.
One more major point: I assume the cube enhancement was done in BW only, and that you reloaded historical data to populate the added fields. Am I right? I think so.
So your BW cube now holds the historical data again. On the BPC side, you probably did not delete the old data from the BPC cube before loading, because that is normally not necessary. But if the BPC cube already held data from before this load, the reload doubles it: the old BPC data is also contained in the reloaded BW history, so whatever option you use in the DM package, the same records arrive twice. For example, if an account held 100 in BPC and the reloaded BW requests deliver the same 100 again, an additive load leaves 200.
Even if you run a delta package on the BPC side you will still get doubling, because the source BW cube holds the old data in the form of new requests.
For the BW enhancement: BW data was deleted and reloaded.
For BPC: nothing was deleted; data was requested from the BW cube without deletion, so the BPC data will be double.
Workaround: if my points above are right, just delete the BPC cube data and reload everything.
When enhancing BW cubes that are used in BPC models, you need to keep this data issue in mind.
Thanks.

Similar Messages

  • Load history data into BI

    Hi all,
    We are in the process of migrating from R/3 to ECC. The team has decided to import 3 years of history data into ECC from R/3; however, management wants to be able to report on data for the last 5 years.
    During the conversion the master data is consolidated for the three years, and the new BI system is different from the older system. What is the best way to achieve reporting on the 5-year history given that master data for the other 2 years will not be available?
    Anything on this?

    Hi Raj,
    I am thinking that making the required master data part of your transaction data would be the solution.
    I would create a history DSO and cube, load all required transaction data into the DSO, and add the master data objects you need for history reporting to the cube.
    While loading from the DSO to the cube, fill these master data fields either via a lookup or using the "Master data Attr of" option.
    This way your history data will have the required master data within the cube.
    Anyway, I am interested to see what others have to say.
    Thanks,
    Naveen Rao Kattela

  • ExcelToCI not loading correct history data.

    We are using ExcelToCI to load history data on the JOB_DATA component. The action used is UPDATE, as the data is loaded at Scroll Level 1. The row is intended to be added between two rows, i.e. between CURRENT and HISTORY (or between two history rows), and only a few fields need to change. The row gets inserted; the problem, however, is that the data in the newly inserted row comes from the succeeding/current row rather than the previous/history row.
    Following is the expected behaviour of the system as per the excerpt from Administer Workforce PeopleBook 9.0 page 553:
    "When you enter a new data row, the system copies the contents of the previous row into the new row—thus you do not have to retype any information that stays the same. (Ensure that you position the cursor on the data row that you want to copy before you insert the new row.) The only new information is the effective date, which is set by default to the system date (usually today's date)."
    The above is the normal system behaviour, but ExcelToCI does not conform to it. Has anyone faced the same issue?
    Faraz

    ExcelToCI is not really designed for and is certainly not guaranteed to work with large, complex CIs. They don't come much larger or more complex than JOB_DATA, so it shouldn't be too much of a surprise that it isn't working as expected.
    Look for the ExcelToCI_Troubleshooting_Guide.doc on Oracle Support, and/or read the "Using the Excel to Component Interface Utility" chapter in the PeopleTools Component Interfaces PeopleBook. The PeopleBook description of the Action setting has some specific advice on handling effective dated rows.
    Regards,
    Bob

  • Loading historical data with 0CO_PC_ACT_02

    Hello,
    I need to load historical values with InfoSource 0CO_PC_ACT_02. I can load the current period, but get no data from R/3 for historical periods. When I run RSA3 on R/3 for a historical period, I get no data, so I don't believe this is a BW issue.
        My question:
       1.  Is there a job or something on R/3 I need to run to enable this data source to pull historical data?  If so, what is it and how do I run it?
       2.  Is this data source simply not able to pull historical data?
    Thanks.
    Dave

    Hi all,
    I have the same issue. Has anyone found a workaround to load history data with this extractor (0CO_PC_ACT_02)?

  • Can I load transaction data through flat files without loading masters?

    Hi all,
    Is it possible to load history data into a sales cube using flat files without having loaded any master data?
    thanks

    Hi,
    You can load the transaction data without loading the master data; you have to check the option in the update rules (update also if no master data exists).
    In reporting you will not be able to see the attribute and text data. It is always advisable to load master data first, run the apply hierarchy/attribute change, and then load the transaction data.
    AHP: Nice to see you after a long time
    Regards,
    Rak

  • LO Plant Maintenance History Data Loading

    Hi Gurus,
    I added a couple of Z fields to one of the Plant Maintenance datasources. In the production system I already have data up to September 5th, 2007. I need to delete the data in the cube from Jan 1 2007 to today and reload it for the same range, so that the newly added fields are populated in the cube from Jan 1 2007 to today.
    What is the best procedure to do this without disturbing the delta and without asking for any downtime? Please reply.
    Thanks in advance.
    Raj

    There is no way to have zero downtime, i.e. reporting will be impacted if you want to load historical data into a newly added field.
    If you are on BI 7.0 you can use remodeling, but the limitation is that the new field's values can be derived from existing data only; I am not sure where your new field gets its data from. Even with this option you need downtime, as the whole table will be converted (you should compress all requests to minimize the table conversion time).
    If you are on BW 3.5 you have to delete all data and reload. To minimize downtime: delete all data from the PSA, bring a full load with the new field into the PSA only, delete all data in the cube, delete the initialization, load the full load from the PSA to the cube, then initialize without data transfer and you are ready to go. From the time the full load is extracted from R/3 until the delta init, there should be no postings in R/3.
    Regards
    Sanjay
    Regards
    Sanjay

  • iTunes 10.5.3, Unable to Load Class Data from Sync Services, Crashes Repeatedly

    Ever since iTunes 10.5.3 on my Windows 7 x64 box, I've been having serious problems keeping iTunes running:
    1)  Any time I connect my iPhone or iPad, or any time they attempt to sync over WiFi, I get the 'Unable to load class data from sync services' error.  It's worth mentioning that the iPhone or iPad do actually then proceed to sync correctly, and all media / data is synchronized successfully, as far as I've been able to tell.  I've reset sync history, restored the iPhone and iPad, manually moved the SyncServices directory, set both the iPhone and iPad up as new, completely uninstalled iTunes and the mobile device software, re-installed, and *nothing* I do fixes this issue or even seems to impact it.  Uninstalling iTunes 10.5.3 and reinstalling iTunes 10.5.2 resolves this error.
    2)  iTunes 10.5.3 has serious crashing problems on my machine, and I've searched through forums and found a lot of other people with the same problem.  In my case, sporadically as iTunes accesses the iTunes store (for example, I can always make it happen by subscribing to the Diggnation podcast and telling it to refresh the podcast), I get a full application crash which generates the following event in the event log:
    Faulting application name: iTunes.exe, version: 10.5.3.3, time stamp: 0x4f14cc3d
    Faulting module name: KERNELBASE.dll, version: 6.1.7601.17651, time stamp: 0x4e211319
    Exception code: 0x80000003
    Fault offset: 0x0001280c
    Faulting process id: 0x564
    Faulting application start time: 0x01cce78789dc009a
    Faulting application path: D:\Program Files (x86)\iTunes\iTunes.exe
    Faulting module path: C:\Windows\syswow64\KERNELBASE.dll
    Report Id: ecc77400-537a-11e1-b809-00248c196007
    Again, uninstalling iTunes 10.5.3 and re-installing iTunes 10.5.2 fixes this issue as well, but I really shouldn't have to do that.  Is there any actual solution to these problems other than downgrading to an older version?
    Yes, I'm running current malware protection (and my system is clean) and yes, I've already checked any other obvious culprits to make sure I don't just have a corrupt library or file somewhere that's causing it.  My Windows 7 install is completely up to date from a Windows Update perspective and also from a driver perspective (even including the motherboard and chipset drivers).  I made sure to do a system file integrity check (sfc /scannow) and it confirmed my install of Windows isn't suspect, but beyond all that I was able to reproduce these failures on a completely separate machine running Windows 7 x64, without any of my media library or data on it -- iTunes 10.5.3 just seems to be bad news on Windows 7 x64 for me, even though I know there are definitely people using it successfully.  I don't get any other application crashes for any other applications, and I've never had an issue like this with any previous version of iTunes...
    Last but not least, in case it matters, I'm running iOS 5.0.1 on both the iPhone4 and the iPad2.  The iPhone4 is the GSM AT&T model and the iPad2 is the CDMA Verizon model.

    OK, I've given up.  Uninstalled iTunes 10.5.3 permanently.  It's a shame, 10.5.2 works perfectly but 10.5.3 is a complete and total cluster for me.
    Anyone with the same problems who hasn't done this already, just uninstall iTunes and the Apple Mobile Device Service in your Add/Remove Programs, then re-install iTunes 10.5.2.  You won't lose your library or any settings or apps, it will just go back to working again.
    I wish Apple would acknowledge this so that I'd at least know it would be fixed in the future.

  • iTunes was unable to load provider data from Sync Services. Reconnect or try again later

    I just had this message for the first time when trying to sync my iPhone 4: "iTunes was unable to load provider data from Sync Services. Reconnect or try again later".
    What I did:
    - I unplugged my iPhone 4
    - clicked on "Edit" in iTunes, "Preferences", "Devices", "Reset Sync History"
    - uninstalled iTunes
    - restarted the laptop
    - reinstalled iTunes
    Then I tried it and the iPhone synced fine. I tried it a few times and it was fine.
    But I have 2 questions:
    1- Could this be caused by excessive syncing, since I sync my phone almost twice a week?
    2- Is what I did good enough as a solution, or do I need to rename folders and resort to the other solutions provided?

    OK
    Here's what I did:
    1. Copy all of the contents in C:\Program Files\Common Files\Apple\Apple Application Support to the C:\Program Files\Common Files\Apple\Mobile Device Support folder, overwriting anything if asked.
    2. Right-click on the Computer icon on your desktop (or in the start menu) and select Properties. Click on Advanced System Settings, then click on Environment Variables. If you don't have a user variable called PATH, click New... and add it, or select it and click Edit...
    Now, add to the beginning of the user PATH the following two directories:
    C:\Program Files\Common Files\Apple\Apple Application Support
    C:\Program Files\Common Files\Apple\Mobile Device Support
    Be sure to put a semi-colon (;) between the two directories.
    Click OK three times to save your settings.
    Step #2 might require a reboot.
    After these 2 steps sync works again (touch wood).
    Seems that sometimes there's an additional possibility (which I didn't perform):
    If you are still having problems after following these steps, it's possible another application uses one of the same DLLs as iTunes (but a different version) and Windows is finding it in the PATH before the Apple file paths added above. You can check this by opening a DOS command prompt window and executing the following commands:
    where libxml2.dll
    where sqlite3.dll
    where libtidy.dll
    Examine the pathnames listed by these commands. If any paths are listed before the ...\Common Files\Apple\... paths, you'll want to remove those paths from the PATH environment variable.

  • Management Portal unavailable - "cannot load management data for this resource type"

    Noticing that, with the exception of virtual machines, every resource type in the (non-preview) Azure Management Portal is showing a sad pink icon and the help text "cannot load management data for this resource type."
    So that's a near total portal outage for us. Anybody else experiencing this?
    I've attempted to open a ticket with developer support but I'm not sure if that's the right channel.
    Brian

    Hi Brian,
    Thanks for your posting!
    From the dashboard shown at http://azure.microsoft.com/en-us/status/#history, the Azure portal currently works fine in all regions.
    So I suggest you submit a ticket via this channel:
    http://www.windowsazure.com/en-us/support/contact/
    Also, you can click your account name at the top right and select the "Microsoft Support" item to create a free ticket.
    Regards,
    Will

  • How can I load my data faster?  Is there a SQL solution instead of PL/SQL?

    11.2.0.2
    Solaris 10 sparc
    I need to backfill invoices from a customer. The raw data has 3.1 million records. I have used PL/SQL to load these invoices into our system (dev); however, our issue is the amount of time it takes to run the load - effectively around 4 hours. (The raw data has been loaded into a staging table.)
    My research keeps coming back to one concept: SQL is faster than PL/SQL. Where I'm stuck is the need to programmatically load the data. The invoice table has a sequence on it (primary key = invoice_id); the invoice_header and invoice_address tables use the invoice_id as a foreign key. So my script takes advantage of knowing the primary key and uses that on the subsequent inserts into the subordinate invoice_header and invoice_address tables, respectively.
    My script is below. What I'm asking is whether there are other ideas on the quickest way to load this data - what am I not considering? I have to load the data in dev, QA, then production, so the sequences and such change between the environments. I've dummied down the code to protect the customer; syntax and correctness of the code posted here (on the forum) is moot - it's only posted to give the framework for what I currently have.
    Any advice would be greatly appreciated; how can I load the data faster knowing that I need to know sequence values for inserts into other tables?
    DECLARE
       v_inv_id        invoice.invoice_id%TYPE;
       v_inv_addr_id    invoice_address.invoice_address_id%TYPE;
       errString        invoice_errors.sqlerrmsg%TYPE;
       v_guid          VARCHAR2 (128);
       v_str           VARCHAR2 (256);
       v_err_loc       NUMBER;
       v_count         NUMBER := 0;
       l_start_time    NUMBER;
       TYPE rec IS RECORD (
          BILLING_TYPE             VARCHAR2 (256),
          CURRENCY                 VARCHAR2 (256),
          BILLING_DOCUMENT         VARCHAR2 (256),
          DROP_SHIP_IND            VARCHAR2 (256),
          TO_PO_NUMBER        VARCHAR2 (256),
          TO_PURCHASE_ORDER   VARCHAR2 (256),
          DUE_DATE                 DATE,
          BILL_DATE                DATE,
          TAX_AMT                  VARCHAR2 (256),
          PAYER_CUSTOMER           VARCHAR2 (256),
          TO_ACCT_NO          VARCHAR2 (256),
          BILL_TO_ACCT_NO          VARCHAR2 (256),
          NET_AMOUNT               VARCHAR2 (256),
          NET_AMOUNT_CURRENCY      VARCHAR2 (256),
          ORDER_DT             DATE,
          TO_CUSTOMER         VARCHAR2 (256),
          TO_NAME             VARCHAR2 (256),
          FRANCHISES       VARCHAR2 (4000),
          UPDT_DT                  DATE
       );
       TYPE tab IS TABLE OF rec
                      INDEX BY BINARY_INTEGER;
       pltab           tab;
       CURSOR c
       IS
          SELECT   billing_type,
                   currency,
                   billing_document,
                   drop_ship_ind,
                   to_po_number,
                   to_purchase_order,
                   due_date,
                   bill_date,
                   tax_amt,
                   payer_customer,
                   to_acct_no,
                   bill_to_acct_no,
                   net_amount,
                   net_amount_currency,
                   order_dt,
                   to_customer,
                   to_name,
                   franchises,
                   updt_dt
            FROM   BACKFILL_INVOICES;
    BEGIN
       l_start_time := DBMS_UTILITY.get_time;
       OPEN c;
       LOOP
          FETCH c
          BULK COLLECT INTO pltab
          LIMIT 1000;
          v_err_loc := 1;
          FOR i IN 1 .. pltab.COUNT
          LOOP
             BEGIN
                v_inv_id :=  SEQ_INVOICE_ID.NEXTVAL;
                v_guid := 'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff');
                v_str := str_parser (pltab (i).FRANCHISES); --function to string parse  - this could be done in advance, yes.
                v_err_loc := 2;
                v_count := v_count + 1;
                INSERT INTO    invoice nologging
                     VALUES   (v_inv_id,
                               pltab (i).BILL_DATE,
                               v_guid,
                               '111111',
                               'NONE',
                               TO_TIMESTAMP (pltab (i).BILL_DATE),
                               TO_TIMESTAMP (pltab (i).UPDT_DT),
                               'READ',
                               'PAPER',
                               pltab (i).payer_customer,
                               v_str,
                               '111111');
                v_err_loc := 3;
                INSERT INTO    invoice_header nologging
                     VALUES   (v_inv_id,
                               TRIM (LEADING 0 FROM pltab (i).billing_document), --invoice_num
                               NULL,
                               pltab (i).BILL_DATE,                 --invoice_date
                               pltab (i).TO_PO_NUMBER,
                               NULL,
                               pltab (i).net_amount,
                               NULL,
                               pltab (i).tax_amt,
                               NULL,
                               NULL,
                               pltab (i).due_date,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               TO_TIMESTAMP (SYSDATE),
                               TO_TIMESTAMP (SYSDATE),
                               PLTAB (I).NET_AMOUNT_CURRENCY,
                               (SELECT   i.bc_value
                                  FROM   invsvc_owner.billing_codes i
                                 WHERE   i.bc_name = PLTAB (I).BILLING_TYPE),
                               PLTAB (I).BILL_DATE);
                v_err_loc := 4;
                INSERT INTO    invoice_address nologging
                     VALUES   (invsvc_owner.SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
                               v_inv_id,
                               'BLAH INITIAL',
                               pltab (i).BILL_DATE,
                               NULL,
                               pltab (i).to_acct_no,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               SYSTIMESTAMP,
                               NULL);
                v_err_loc := 5;
                INSERT INTO    invoice_address nologging
                     VALUES   ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
                               v_inv_id,
                               'BLAH',
                               pltab (i).BILL_DATE,
                               NULL,
                               pltab (i).TO_ACCT_NO,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               SYSTIMESTAMP,
                               NULL);
                v_err_loc := 6;
                INSERT INTO    invoice_address nologging
                     VALUES   ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
                               v_inv_id,
                               'BLAH2',
                               pltab (i).BILL_DATE,
                               NULL,
                               pltab (i).TO_CUSTOMER,
                               pltab (i).to_name,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               SYSTIMESTAMP,
                               NULL);
                v_err_loc := 7;
                INSERT INTO    invoice_address nologging
                     VALUES   ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
                               v_inv_id,
                               'BLAH3',
                               pltab (i).BILL_DATE,
                               NULL,
                               'SOME PROPRIETARY DATA',
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               SYSTIMESTAMP,
                               NULL);
                v_err_loc := 8;
                INSERT
                  INTO    invoice_event nologging (id,
                                                             eid,
                                                             root_eid,
                                                             invoice_number,
                                                             event_type,
                                                             event_email_address,
                                                             event_ts)
                VALUES   ( SEQ_INVOICE_EVENT_ID.NEXTVAL,
                          '111111',
                          '222222',
                          TRIM (LEADING 0 FROM pltab (i).billing_document),
                          'READ',
                          'some_user@some_company.com',
                          SYSTIMESTAMP);
                v_err_loc := 9;
                INSERT INTO   backfill_invoice_mapping
                     VALUES   (v_inv_id,
                               v_guid,
                               pltab (i).billing_document,
                               pltab (i).payer_customer,
                               pltab (i).net_amount);
                 IF MOD (v_count, 10000) = 0   -- commit every 10,000 rows, not only once at exactly 10,000
                 THEN
                    COMMIT;
                 END IF;
             EXCEPTION
                WHEN OTHERS
                THEN
                   errString := SQLERRM;
                    INSERT INTO   backfill_invoice_errors
                         VALUES   (
                                     pltab (i).billing_document,
                                     pltab (i).payer_customer,
                                     errString || ' ' || v_err_loc);
                    COMMIT;
             END;
          END LOOP;
          v_err_loc := 10;
          INSERT INTO   backfill_invoice_timing
               VALUES   (
                           ROUND ( (DBMS_UTILITY.get_time - l_start_time) / 100,
                                  2)
                           || ' seconds.',
                           (SELECT   COUNT (1)
                              FROM   backfill_invoice_mapping),
                           (SELECT   COUNT (1)
                              FROM   backfill_invoice_errors),
                            SYSDATE);
          COMMIT;
          EXIT WHEN c%NOTFOUND;
       END LOOP;
       CLOSE c;
       COMMIT;
    EXCEPTION
       WHEN OTHERS
       THEN
          errString := SQLERRM;
          INSERT INTO   backfill_invoice_errors
               VALUES   (NULL, NULL, errString || ' ' || v_err_loc);
          COMMIT;
    END;
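    (For reference: the per-row INSERT loop above is where most of the 4 hours goes. If the load must stay in PL/SQL, one commonly suggested alternative is to bulk-bind the staged rows plus their sequence values once, then issue one array-bound FORALL per target table. This is only a minimal sketch under assumptions - hypothetical, simplified column lists, not the poster's real table definitions:)
    DECLARE
       TYPE num_tab  IS TABLE OF NUMBER;
       TYPE date_tab IS TABLE OF DATE;
       TYPE vc_tab   IS TABLE OF VARCHAR2 (256);
       v_ids    num_tab;
       v_dates  date_tab;
       v_docs   vc_tab;
    BEGIN
       -- One scan of the staging table; one sequence value per row.
       -- (For millions of rows you would fetch in LIMIT-sized batches.)
       SELECT seq_invoice_id.NEXTVAL, bill_date, billing_document
         BULK COLLECT INTO v_ids, v_dates, v_docs
         FROM backfill_invoices;
       -- One bulk statement per target table; v_ids ties parent to child rows.
       FORALL i IN 1 .. v_ids.COUNT
          INSERT INTO invoice (invoice_id, bill_date)
               VALUES (v_ids (i), v_dates (i));
       FORALL i IN 1 .. v_ids.COUNT
          INSERT INTO invoice_header (invoice_id, invoice_num)
               VALUES (v_ids (i), TRIM (LEADING '0' FROM v_docs (i)));
       COMMIT;   -- once, at the end
    END;
    /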

    Hello
    You could use insert all in your case and make use of sequence.NEXTVAL and sequence.CURRVAL like so (excuse any typos - I can't test without table definitions). I've done the first 2 tables, so it's just a matter of adding the rest in...
    INSERT ALL
         INTO      invoice nologging
                    VALUES   (     SEQ_INVOICE_ID.NEXTVAL,
                                   BILL_DATE,
                                    my_guid,
                                    '111111',
                                    'NONE',
                                    CAST(BILL_DATE AS TIMESTAMP),
                                    CAST(UPDT_DT AS TIMESTAMP),
                                    'READ',
                                    'PAPER',
                                    payer_customer,
                                     parsed_franchises,
                                     '111111')
         INTO      invoice_header
              VALUES   (      SEQ_INVOICE_ID.CURRVAL,
                        TRIM (LEADING 0 FROM billing_document), --invoice_num
                        NULL,
                        BILL_DATE,                 --invoice_date
                        TO_PO_NUMBER,
                        NULL,
                        net_amount,
                        NULL,
                        tax_amt,
                        NULL,
                        NULL,
                        due_date,
                        NULL,
                        NULL,
                        NULL,
                        NULL,
                        NULL,
                        SYSTIMESTAMP,
                        SYSTIMESTAMP,
                        NET_AMOUNT_CURRENCY,
                        bc_value,
                        BILL_DATE)
         SELECT 
         src.billing_type,
              src.currency,
              src.billing_document,
              src.drop_ship_ind,
              src.to_po_number,
              src.to_purchase_order,
              src.due_date,
              src.bill_date,
              src.tax_amt,
              src.payer_customer,
              src.to_acct_no,
              src.bill_to_acct_no,
              src.net_amount,
              src.net_amount_currency,
              src.order_dt,
              src.to_customer,
              src.to_name,
              src.franchises,
              src.updt_dt,
              str_parser (src.FRANCHISES) parsed_franchises,
              'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff') my_guid,
              i.bc_value
            FROM        BACKFILL_INVOICES src,
                 invsvc_owner.billing_codes i
          WHERE   i.bc_name = src.BILLING_TYPE;
    Some things to note:
    1. Don't commit in a loop - you only add to the run time and load on the box, ultimately reducing scalability and removing transactional integrity. Commit once at the end of the job.
    2. Make sure you specify the list of columns you are inserting into as well as the values or columns you are selecting. This is good practice as it protects your code from compilation issues in the event of new columns being added to tables. Also it makes it very clear what you are inserting where.
    3. If you use WHEN OTHERS THEN... to log something, make sure you either rollback or raise the exception. What you have done in your code is say - I don't care what the problem is, just commit whatever has been done. This is not good practice.
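    (If you move to the set-based INSERT but still want the per-row error capture the original loop had, Oracle's DML error logging covers it. A minimal sketch, assuming the thread's table names and hypothetical simplified columns:)
    -- One-time setup: generate an error-log table for the target
    -- (DBMS_ERRLOG names it ERR$_INVOICE by default).
    BEGIN
       DBMS_ERRLOG.CREATE_ERROR_LOG (dml_table_name => 'INVOICE');
    END;
    /
    -- Failing rows land in ERR$_INVOICE instead of aborting the statement.
    INSERT INTO invoice (invoice_id, bill_date)
       SELECT seq_invoice_id.NEXTVAL, src.bill_date
         FROM backfill_invoices src
          LOG ERRORS INTO err$_invoice ('backfill') REJECT LIMIT UNLIMITED;
    COMMIT;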
    HTH
    David
    Edited by: Bravid on Oct 13, 2011 4:35 PM

  • Which tables are updated while loading master data?

    Hello experts,
    Which tables are updated while loading master data? I would also request more information about master data loading and its related settings when creating InfoObjects.

    It depends upon the type of master data you are loading....
    In all master data loads, for every new master data value an SID is created in the SID table /BI*/S<INFOOBJECT NAME>, irrespective of the type of master data.
    The additional tables that get updated depend on the type of master data:
    If it is time-independent master data, the /BI*/P<INFOOBJECT NAME> table gets updated with the loaded data.
    If it is time-dependent master data, the /BI*/Q<INFOOBJECT NAME> table gets updated with the loaded data.
    If the master data consists of time-independent navigational attributes, then for every data load the SID table is updated first, and then the /BI*/X<INFOOBJECT NAME> table is updated with the SIDs created in the SID table (NOT with the master data).
    If the master data consists of time-dependent navigational attributes, then for every data load the SID table is updated first, and then the /BI*/Y<INFOOBJECT NAME> table is updated with the SIDs created in the SID table (NOT with the master data).
    NOTE: As said above, for all the data in the P, Q, T, X and Y tables, the SIDs are created in the S table /BI*/S<INFOOBJECT NAME>.
    NOTE: Irrespective of time dependency or independency, the view /BI*/M<INFOOBJECT NAME>, defined on top of the /BI*/P<INFOOBJECT NAME> and /BI*/Q<INFOOBJECT NAME> tables, gives a view of the entire master data.
    NOTE: It is just a view, not a table, so it has no physical storage of data.
    All the above tables are for ATTRIBUTES.
    When it comes to TEXTS, irrespective of time dependency, the /BI*/T<INFOOBJECT NAME> table gets updated (and of course the S table as well).
    Naming convention: /BIC/*<InfoObject Name> or /BI0/*<InfoObject Name>
    C = customer-defined characteristic
    0 = standard (SAP-delivered) characteristic
    * = P, Q, T, X, Y, S (depending on the conditions above)
    Thanks & regards
    Sasidhar
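    (To make the naming convention concrete: a minimal sketch with a hypothetical custom InfoObject ZCUSTOMER, read-only queries against the generated tables described above, e.g. directly in the database:)
    SELECT * FROM "/BIC/SZCUSTOMER";   -- S: SID table, one SID per master data value
    SELECT * FROM "/BIC/PZCUSTOMER";   -- P: time-independent attributes
    SELECT * FROM "/BIC/QZCUSTOMER";   -- Q: time-dependent attributes
    SELECT * FROM "/BIC/TZCUSTOMER";   -- T: texts
    SELECT * FROM "/BIC/MZCUSTOMER";   -- M: view over P and Q, no storage of its own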

  • I have hierarchy data on the R/3 side; how will I load that data from R/3 to BW?

    Hi all,
    I have my hierarchy data on the R/3 side; how can I load that data from R/3 into BW?
    Regards,
    Kiran Kumar

    Hi Kiran,
    Here is the procedure:
    1.      In the Data Warehousing Workbench under Modeling, select the InfoSource tree.
    2.      Select the InfoSource (with direct update) for the InfoObject, to which you want to load the hierarchy.
    3.      Choose Additional Functions → Create Transfer Rules from the context menu of the hierarchy table object for the InfoObject. The Assign Source System dialog box appears.
    4.      Select the source system from which the hierarchy is to be loaded. The InfoSource maintenance screen appears.
    ○       If the DataSource only supports the transfer method IDoc, then only the transfer structure is displayed (tab page DataSource/Transfer Structure).
    ○       If the DataSource also supports transfer method PSA, you can maintain the transfer rules (tab page Transfer Rules).
    If it is possible and useful, we recommend that you use the transfer method PSA and set the indicator Expand Leaf Values and Node InfoObjects. You can then also load hierarchies with characteristics whose node name has a length >32.
    5.      Save your entries and go back. The InfoSource tree for the Data Warehousing Workbench is displayed.
    6.      Choose Create InfoPackage from the context menu (see Maintaining InfoPackages). The Create InfoPackage dialog box appears.
    7.      Enter the description for the InfoPackage. Select the DataSource (data element Hierarchies) that you require and confirm your entries.
    8.      On the Tab Page: Hierarchy Selection, select the hierarchy that you want to load into your BI system.
    Specify if the hierarchy should be automatically activated after loading or be marked for activation.
    Select an update method (Full Update, Insert Subtree, Update Subtree).
    If you want to load a hierarchy from an external system with BAPI functionality, make BAPI-specific restrictions, if necessary.
    9.      If you want to load a hierarchy from a flat file, maintain the tab page: external data.
    10.      Maintain the tab page: processing.
    11.      Maintain the tab page: updating.
    12.      To schedule the InfoPackage, you have the following options:
    ○       (Manually) in the scheduler, see Scheduling InfoPackages
    ○       (Automatically) using a process chain (see Loading Hierarchies Using a Process Chain)
    When you upload hierarchies, the system carries out a consistency check, making sure that the hierarchy structure is correct. Error messages are logged in the Monitor. You can get technical details about the error and how to correct it in the long text for the respective message.
    For more info visit this help pages on SAP Help:
    http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a6729e07211d2acb80000e829fbfe/frameset.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/3d/320e3d89195c59e10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6729e07211d2acb80000e829fbfe/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4dae0795-0501-0010-cc96-fe3a9e8959dc
    Cheers,
    Habeeb

  • Unable to load the data from PSA to INFOCUBE

    Hi BI experts, good afternoon.
    I am loading 3 years of data (full load) from R/3 to an InfoCube, so I loaded the data month-wise and created 36 InfoPackages.
    Everything is fine, but I got an error in Jan 2005 and Mar 2005. It is the same error in both months: Caller 01 and Caller 02 errors (meaning there are invalid characters in the PSA data).
    So I deleted both the PSA and data target requests and loaded the data again, only into the PSA.
    Here I got the data into the PSA without failure.
    Then I tried to load the data from the PSA to the InfoCube MANUALLY, but it is not happening.
    This message came up:
         SID 60,758 is smaller than the compress SID of cube ZIC_C03; no request booking.
    Please give me a solution for this problem.
    Thanks & regards
    Anjali

    Hi Teja,
    Thanks for the good response.
    How can I check whether it is already compressed or not?
    Please give me a reply.
    Thanks
    Anjali

  • Unable to load the data into HFM

    Hello,
    We created a new HFM app and configured it with FDM, generated an output file through FDM, and loaded that file through HFM directly 5-6 times; there was no issue up to here.
    Then I loaded the file through FDM 4 times successfully, even for different months. But after 4 loads I started getting an error. Attached is the error log.
    Please help us at the earliest.
    ** Begin fdmFM11XG6A Runtime Error Log Entry [2013-10-30-13:44:26] **
    Error:
    Code............-2147217873
    Description.....System.Runtime.InteropServices.COMException (0x80040E2F): Exception from HRESULT: 0x80040E2F
       at HSVCDATALOADLib.HsvcDataLoadClass.Load(String bstrClientFilename, String bstrClientLogFileName)
       at fdmFM11XG6A.clsFMAdapter.fDBLoad(String strLoadFile, String strErrFile, String& strDelimiter, Int16& intMethod, Boolean& blnAccumFile, Boolean& blnHasShare, Int16& intMode)
    Procedure.......clsHPDataManipulation.fDBLoad
    Component.......E:\Opt\Shared\Apps\Hyperion\Install\Oracle\Middleware\EPMSystem11R1\products\FinancialDataQuality\SharedComponents\FM11X-G6-A_1016\AdapterComponents\fdmFM11XG6A\fdmFM11XG6A.dll
    Version.........1116
    Identification:
    User............fdmadmin
    Computer Name...EMSHALGADHYFD02
    FINANCIAL MANAGEMENT Connection:
    App Name........
    Cluster Name....
    Domain............
    Connect Status.... Connection Open
    Thanks,
    Raam

    We are working with the DB team, but they have confirmed that there is no issue with the TB. The process we have followed:
    As a standard process, while loading the data from FDM or manually to HFM, we don't write any SQL query; using the web interface, the data is loaded into the HFM application. This data can be viewed by different reporting tools (Smart View (Excel), HFR reports, etc.).
    There are no official documents on the Oracle website which talk about the INSERT SQL queries used to insert data into HFM tables. Hyperion does not provide much detail on the internal tables it uses, nor much insight into the internal structure of the HFM system.
    As per Hyperion blogs/forums on the internet, HFM stores the base-level data in so-called DCE tables (for example EMHFMFinal_DCE_1_2013, where EMHFMFinal is the application name, 1 identifies the Scenario and 2013 the Year). Each row in a DCE table contains data for all periods of a given combination of dimensions (also called an intersection).
    We are trying to load the same data file with the Replace option (it should delete the existing data before loading the data file).

  • Unable to load the data into Cube Using DTP in the quality system

    Hi,
    I am unable to load the data from the PSA to the cube using a DTP in the quality system; this is the first time.
    I am getting errors like "Data package processing terminated" and "Source TRCS 2LIS_17_NOTIF is not allowed".
    Please suggest.
    Thanks,
    Satyaprasad

    Hi,
    Some InfoObjects were missing while collecting the transport.
    I collected those objects and transported them again; now it is working fine.
    Many thanks to all.
    Regards,
    Satyaprasad
