Data Loading via InfoPackage vs. UCBATCH01

Hello Experts,
I am new to BCS and need to load data into a BCS cube. Currently we have a process chain that loads data into BCS (data stream load).
In that process chain, the load appears to be done using program UCBATCH01.
Sometimes users need to load data manually (for example, adjustment entries). We are thinking of automating this process, so I have created a process chain for it which picks up a data file from the application server and loads it into BCS.
In this process chain I am thinking of using an InfoPackage, which can load data into BCS from a file on the application server.
I am unable to decide whether I should use the UCBATCH01 program or the InfoPackage described above.
Please advise.
Thank you,
Murtuza.

Similar Messages

  • Data Loads via an Integration Interface

    In our Demantra 7.3 environment, Open Orders, Pricing and Trade Spend all get loaded via Integration Interfaces (IIs). Those IIs get called after ep_load_main runs, and there is nothing in the workflow after those II calls.
    So how does the data get transferred from the BIIO tables into sales_data and/or mdp_matrix? Since there is nothing in the workflow to do that, I would expect to see triggers on the BIIO tables, but there aren't any.

    Hi,
    Data is loaded from the BIIO tables into Sales_Data/Mdp_matrix using a 'Transfer Step' in the Demantra workflow.
    The transfer step takes the import integration interface as input and loads data from the corresponding BIIO tables into the Demantra base tables.
    Please check whether there is a transfer step in the workflow after the ep_load_main run.
    Thanks,
    Rohit

  • EIS data load via rules file

    Hi,
    Please let me know how to control the EIS data load into Essbase using a rules file. I did not find the option in EIS. Please help me.
    Thanks,
    pr

    In EIS:
    1) You have to define the logical OLAP model connecting to the relational source. It defines the joins between the fact table and the dimension tables.
    2) Based on the OLAP model, you have to create a metaoutline, which defines the rules for loading members and data into Essbase.

  • Master data load via DTP (updating attributes step taking a long time)

    Hi all,
    I am loading to a Z InfoObject. It is a master data load for attributes. Surprisingly, the PSA pulls records very quickly (2 minutes), but the DTP which updates the InfoObject takes a lot of time; it runs into hours.
    The DTP execution monitor shows the breakdown of time between extraction, filter, transformation and the updating of attributes,
    and I can see that the last step, "updating of attributes for the InfoObject", is taking most of the time.
    The master data InfoObject also has two InfoObjects compounded to it.
    In the transformation they are mapped as well.
    The number of parallel processes for the DTP is set to 3 in our system,
    with job class "C".
    Can anyone think of what the reason could be?

    Hi,
    Check transaction ST22 for any short dumps while loading this master data; there must be a short dump that occurred.
    There is also a chance that you are trying to load some invalid data (such as a ! character as the first character of a field) into the master data.
    Regards,
    Yogesh.
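
    To make the second point concrete, here is a minimal pre-check sketch in Python that scans a delimited extract for values whose first character falls outside an allowed set. The file name, delimiter and allowed-character list are assumptions for illustration; in BW the permitted characters are maintained in RSKC.

```python
# Minimal sketch: flag field values whose first character is not in an allowed set.
# File name, delimiter and ALLOWED are placeholders; align ALLOWED with your RSKC settings.
import csv

ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_")

def find_suspect_values(path, delimiter=";"):
    """Yield (line_no, column_index, value) for values with a disallowed first character."""
    with open(path, newline="", encoding="utf-8") as handle:
        for line_no, row in enumerate(csv.reader(handle, delimiter=delimiter), start=1):
            for col, value in enumerate(row):
                if value and value[0].upper() not in ALLOWED:
                    yield line_no, col, value

if __name__ == "__main__":
    for hit in find_suspect_values("master_data_extract.csv"):
        print("suspect value:", hit)
```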

  • Transferring all data from an iPod classic to a new computer with Windows 8.1

    I would like to transfer all data from my iPod classic to my new computer running Windows 8.1. My old computer's CPU died. iTunes only allows transferring albums purchased at the iTunes Store, but the CDs were originally loaded via iTunes.

    Install the disk drive from the old computer in an external enclosure.
    Then copy the complete iTunes library from that drive to the disk drive in the new computer.

  • Data loaded to Power Pivot via Power Query is not yet supported in SSAS Tabular Cube

    Hello, I'm trying to create an SSAS Tabular cube from data loaded into Power Pivot via Power Query (SAP BOBJ connector), but it looks like this is not yet supported.
    Has anyone tried this before? Is there any workaround that makes sense?
    The final goal is to pull data from SAP BW and a BO universe (using Power Query) and be able to create an SSAS Tabular cube.
    Thanks in advance
    Sebastian

    Sebastian, 
    Depending on the size of the data from Analysis Services, one workaround could be to import the data into Excel, make an Excel table, and then use the Excel table as a data source.
    Reeves
    Denver, CO

  • Need suggestions for improving data load performance via SQL*Loader

    Hi,
    Our requirement is to load 512 files (1 GB each) into an Oracle database.
    We are using SQL*Loader to load the files into the DB (a partitioned table) and have tried almost all the options that SQL*Loader offers (direct path load, parallel=true, multithreading=true, unrecoverable).
    As the table grows bigger, each file's load time increases: it started at 5 minutes per file and has now reached 2 hours per 3 files, and it keeps increasing with every batch. Note that we load 3 files concurrently into the target table using the parallel=true option of SQL*Loader.
    Question 1:
    My problem is that somehow multithreading is not working for us (we have a multi-CPU server and have enabled multithreading=true). Could a DB setting be hindering the data load from running in multiple threads?
    Question 2:
    Would gathering stats on the target table and its partitions help improve load performance? I'm not sure whether stats improve DML; they would definitely improve SQL queries. Any thoughts?
    Question 3:
    What would be the best strategy for gathering stats on this table (which will end up holding 512 GB of data)?
    Question 4:
    Do you think inserts into a partitioned table (with growing size) would perform worse than into a non-partitioned table?
    Any other suggestions to improve performance are most welcome!
    Thanks,
    Sachin
    Edited by: Sachin Tiwari on Mar 13, 2013 6:29 AM

    2 hours to load just 3 GB of data seems unreasonable regardless of the SQL Loader settings. It seems likely to me that the problem is not with SQL Loader but somewhere else.
    Have you generated a Statspack/ AWR/ ASH report to see where all that time is being spent? Are there triggers on the table? Are there bitmap indexes?
    Is your table partitioned in a way that is designed to improve the efficiency of loads, so that all the data from one file goes into one partition? Or is data from each file getting inserted into many different partitions?
    Justin
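
    For reference, here is a sketch (in Python, with hypothetical connect string, control file and data file names) of the kind of concurrent direct-path invocation described in the question: three sqlldr sessions running in parallel against the same partitioned table with direct=true and parallel=true. It is only an illustration of the setup under discussion, not a tuning recommendation.

```python
# Sketch only: launch three concurrent SQL*Loader direct-path sessions.
# Connect string, control file and data file names are placeholders.
import subprocess

CONNECT = "scott/tiger@ORCL"            # placeholder credentials
CONTROL = "load_target_table.ctl"       # one shared control file
DATA_FILES = ["file001.dat", "file002.dat", "file003.dat"]

def start_loader(data_file):
    """Start one sqlldr session in direct, parallel mode and return the process handle."""
    cmd = [
        "sqlldr",
        f"userid={CONNECT}",
        f"control={CONTROL}",
        f"data={data_file}",
        f"log={data_file}.log",
        f"bad={data_file}.bad",
        "direct=true",
        "parallel=true",
    ]
    return subprocess.Popen(cmd)

procs = [start_loader(f) for f in DATA_FILES]
print("sqlldr exit codes:", [p.wait() for p in procs])
```

    If loads still slow down over time with this setup, Justin's questions are the right next step: an AWR/ASH report, triggers, or index maintenance on the growing table are more likely culprits than the loader options themselves.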

  • Open Hub (SAP BW) to SAP HANA via DB connection: the "Delete Data from Table" option is not working

    Issue:
    I have an SAP BW system and an SAP HANA system.
    SAP BW is connected to SAP HANA through a DB connection (named HANA).
    Whenever I create an Open Hub destination of type DB table using this DB connection, the table is created at the HANA schema level (L_F50800_D).
    I executed the Open Hub service without checking the "Delete Data from Table" option.
    Data was loaded from BW to HANA: 16 records, the same on both sides.
    The second time I executed it from BW to HANA, 32 records arrived (it appends).
    Then I executed the Open Hub service with the "Delete Data from Table" option checked.
    Now I am getting the short dump DBIF_RSQL_TABLE_KNOWN.
    When checking SAP BW system to SAP BW system, it works fine.
    Is this option supported through a DB connection or not?
    Please see the attachment along with this discussion and help me resolve this.
    From
    Santhosh Kumar

    Hi Ramanjaneyulu,
    First of all, thanks for the reply.
    The issue is at the Open Hub level (definition level, Destination tab and field definition):
    there is a check box there which I have already selected, and even though it is selected,
    the deletion is still not performed at the target.
    SAP BW to SAP HANA via DB connection:
    1. First execution from BW: suppose 16 records; the DTP runs and 16 records are loaded into HANA.
    2. Second execution from BW: the HANA side is appended, so 16 + 16 = 32 records.
    3. So I selected the "Delete Data from Table" check box at the Open Hub level.
    4. Now when I execute the DTP it throws the short dump DBIF_RSQL_TABLE_KNOWN.
    Please tell me how to resolve this. Is the "Delete Data from Table" option applicable for HANA at all?
    Thanks
    Santhosh Kumar
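
    If the "Delete Data from Table" option turns out not to work over the DB connection, one possible workaround (a sketch only; host, port, credentials, schema and table name are assumptions based on the table mentioned above) is to truncate the generated target table directly in HANA before the DTP runs, for example from a small script scheduled ahead of the process chain.

```python
# Sketch: truncate the Open Hub target table in HANA before the DTP runs.
# Host, port, credentials, schema and table name are placeholders.
from hdbcli import dbapi  # SAP HANA Python client

conn = dbapi.connect(
    address="hana-host.example.com",
    port=30015,
    user="BW_LOADER",
    password="secret",
)
try:
    cursor = conn.cursor()
    cursor.execute('TRUNCATE TABLE "BW_SCHEMA"."L_F50800_D"')
finally:
    conn.close()
```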

  • Data load stuck from DSO to master data InfoObject

    Hello Experts,
    We have an issue where a data load is stuck between a DSO and a master data InfoObject.
    Data is uploaded from a standard DSO to the master data InfoObject.
    This InfoObject has display and navigational attributes which are mapped from the DSO to the InfoObject.
    We have now added a new InfoObject as an attribute to the master data InfoObject and made it a navigational attribute.
    Now when we do a full load via DTP, the load is stuck and does not process.
    Earlier the full load took only 5 minutes to complete.
    Please advise what the reason and cause behind this could be.
    Regards,
    santhosh.

    Hello guys,
    Thanks for the quick response.
    But it is not proceeding any further.
    The request is still running.
    Earlier this same data loaded in 5 minutes.
    Please find the screenshot.
    Master data for the InfoObjects is loaded as well.
    In SM50 I can see the process sitting on the P table of the InfoObject.
    Please advise.
    Please find the details:
    Updating attributes for InfoObject YCVGUID
    Start of Master Data Update
    Check Duplicate Key Values
    Check Data Values
    Process Time-Dependent Attributes - green
    No Message: Process Time-Dependent Attributes - yellow
    No Message: Generates Navigation Data - yellow
    No Message: Update Master Data Attributes - yellow
    No Message: End of Master Data Update - yellow
    and nothing goes any further in SM37.
    Thanks,
    Santhosh.

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time. (A short sketch of this buffering idea is shown after this reply.)
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..
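
    As a language-neutral illustration of tip 11 above (replacing per-record selects with a buffered lookup), here is a minimal Python sketch. The record structure and the lookup function are hypothetical stand-ins for a database table read inside a transformation routine.

```python
# Sketch of tip 11: buffer the lookup table once instead of selecting per record.
# fetch_material_texts() stands in for a single bulk database read (an array fetch).

def fetch_material_texts():
    """Pretend this reads the whole lookup table in one round trip."""
    return {"M-001": "Pump", "M-002": "Valve", "M-003": "Compressor"}

def enrich_slow(records, lookup_one):
    # Anti-pattern: one lookup call (i.e. one SELECT) per record.
    return [dict(rec, text=lookup_one(rec["material"])) for rec in records]

def enrich_fast(records):
    # Buffered variant: read the lookup table once, then resolve from memory.
    texts = fetch_material_texts()
    return [dict(rec, text=texts.get(rec["material"], "")) for rec in records]

records = [{"material": "M-001"}, {"material": "M-003"}]
print(enrich_fast(records))
```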

  • How to tune data loading time in BSO using 14 rules files?

    Hello there,
    I'm using Hyperion-Essbase-Admin-Services v11.1.1.2 and the BSO Option.
    In a nightly process using MaxL, I load new data into one Essbase cube.
    In this nightly update process, 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times via a SQL connection to the same Oracle database and the same table.
    I use this procedure because I cannot load 2 or more data fields using one rules file.
    It takes a long time to load the 14 accounts one after the other.
    Now my question: how can I minimise this data loading time?
    This is what I found on the Oracle homepage:
    What's New
    Oracle Essbase 11.1.1 Release Highlights:
    Parallel SQL Data Loads: supports up to 8 rules files via temporary load buffers.
    In an older thread John said:
    As it is version 11, why not use parallel SQL loading; you can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for the ASO option only.
    Can I also use it in my MaxL for BSO? Is there a sample?
    What else can be done to tune the nightly update time?
    Thanks in advance for every tip,
    Zeljko

    Thanks a lot for your support. I’m just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column1 --- column2 --- column3 --- column4 --- ... ---column n
    dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    Here are 3 of the 14 (load) rules files used to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table design above what GlennS referred to as a “Data” column concept, which only allows a single numeric data value?
    In that case I cannot tag two or more columns as “Data fields”. I can only tag one column as a “Data field”; the other data fields I have to tag as “ignore fields during data load”. Otherwise, when I validate the rules file, an error occurs: “only one field can contain the Data Field attribute”.
    Or may I skip this error message and just try to tag all 14 fields as “Data fields” and load the data?
    Please advise.
    Am I right that the other way is to reconstruct the table/view (and the rules files) as follows, to load all of the data in one pass:
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
    And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE).
    I just want to be sure that I understand your suggestions.
    Many thanks for awesome help,
    Zeljko
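
    The restructuring sketched above is essentially an unpivot of the 14 data columns into an Account column plus a single data column. In practice you would most likely build this in the database view itself (for example with a UNION ALL over the 14 columns), but as an illustration of the shape of the transformation, here is a small Python sketch; the file names, delimiter and header labels are assumptions.

```python
# Sketch: unpivot a wide extract (one column per account) into
# (Account, Region, ID, Product, Data) rows for a single-pass load.
# File names, delimiter and header labels are placeholders.
import csv

KEY_COLUMNS = ["Region", "ID", "Product"]

with open("wide_extract.csv", newline="") as src, \
     open("unpivoted_extract.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    account_columns = [c for c in reader.fieldnames if c not in KEY_COLUMNS]
    writer = csv.writer(dst)
    writer.writerow(["Account"] + KEY_COLUMNS + ["Data"])
    for row in reader:
        for account in account_columns:   # sales, cogs, discounts, ..., amount
            writer.writerow([account] + [row[k] for k in KEY_COLUMNS] + [row[account]])
```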

  • Multiple data loads in PSA with write-optimized DSO objects

    Dear all,
    Could someone tell me how to deal with this situation?
    We are using write-optimized DSO objects in our staging area. These DSOs are filled with full loads from a BOB SAP environment.
    The content of these DSO objects is deleted before loading, but we would like to keep the data in the PSA for error tracking and solving. This also provides the opportunity to see the differences between two data loads.
    For normal operation, the most recent package in the PSA should be loaded into these DSO objects (as in normal data staging in BW 3.5 and before).
    As far as we can see, it is not possible to load only the most recent data into the staging layer. This causes duplicate record errors when there are several data loads in the PSA.
    We already tried the "all new records" functionality in the DTP, but that only loads the oldest data package and does not process the new PSA loads.
    Does any of you have a solution for this?
    Thanks in advance.
    Harald

    Hi Ajax,
    I did think about this, but it is more of a workaround. Call me naive, but it should work as it did in BW 3.5!
    The proposed solution will require a lot of maintenance afterwards. Besides that, you also get a problem with PSA IDs after they have changed: if you use the option to delete the content of a PSA table via the process chain, it will fail when the DataSource is changed, because a new PSA table ID is generated.
    Regards,
    Harald

  • Need help on: Automation of Daily Data Load

    Hi all,
    Right now we start our Daily Data Load from DAC manually, and my client has asked us to automate the Daily Data Load.
    Starting the Daily Data Load manually (DAC process): first we have to check whether the ASCP plans have been updated or not.
    Right now we check whether the plans got updated using the following query:
    SELECT LTRIM(RTRIM(compile_designator)),
           data_completion_date,
           TO_CHAR(data_completion_date, 'DD-MON-YYYY HH24:MI:SS')
      FROM apps.msc_plans
     WHERE LTRIM(RTRIM(compile_designator)) IN ('Plan01', 'Plan02', 'Plan03', 'Plan04')
     ORDER BY 2 DESC
    From this query we can see whether all the plans got updated or not. Of the four plans, two are updated on sysdate (mm/dd/yyyy) with a timestamp (hh:mm:ss), for example Plan01 08/25/2011 11:20:08 PM and Plan02 08/25/2011 11:45:06 PM, and the other two are updated on sysdate+1, for example Plan03 08/26/2011 12:20:05 AM and Plan04 08/26/2011 12:45:08 AM.
    After checking the plans, we start the Daily Load in DAC manually.
    May I know how I should convert the above SQL query, which I use to check whether the plans are updated, at the Informatica level, so as to automate the Daily Load in Informatica?
    Need help.

    You cannot replicate what is done with DAC at Informatica level. DAC is a separate Oracle product that orchestrates and manages the ETL load (including Index management, etc). The reason Oracle developed DAC is because it allows you to manage a large scale DW load for a large ERP system. As suggested, you can invoke the DAC execution plan via a command but you cannot replicate everything the DAC does at Informatica level. If this helps, please mark as helpful.
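
    If the readiness check itself stays outside Informatica, one option is a small scheduler script that reuses the query above and only triggers the load when every plan shows a completion date of today or later. This is a sketch only: the connection string, plan names and especially the command that starts the DAC execution plan are placeholders that depend on your installation.

```python
# Sketch: run the plan-completion check and trigger the DAC load when all plans are ready.
# Connection string, plan list and the DAC start command are placeholders.
import datetime
import subprocess
import cx_Oracle

PLANS = ("Plan01", "Plan02", "Plan03", "Plan04")

QUERY = """
    SELECT LTRIM(RTRIM(compile_designator)), data_completion_date
      FROM apps.msc_plans
     WHERE LTRIM(RTRIM(compile_designator)) IN ('Plan01', 'Plan02', 'Plan03', 'Plan04')
"""

def plans_ready(connection, as_of):
    """Return True when every plan has a completion date on or after as_of."""
    cursor = connection.cursor()
    cursor.execute(QUERY)
    completion = dict(cursor.fetchall())
    return all(completion.get(p) and completion[p].date() >= as_of for p in PLANS)

conn = cx_Oracle.connect("apps_ro/secret@EBSPROD")   # placeholder credentials
if plans_ready(conn, datetime.date.today()):
    # Placeholder: replace with whatever command starts your DAC execution plan.
    subprocess.run(["start_dac_daily_load.sh"], check=True)
```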

  • Data Load behaviour in Essbase

    Hello all-
    I am loading data from a flat file using a server rules file. In the rules file I have properties for a field that replace a name in the flat file with a member name in the outline, somewhat like this:
    Replace With
    Canada 00-200-SE
    Belgium 00-300-SE
    and so on.
    Now in my flat file there was a new member, for example China, and the replacement for it was not present in the rules file. When the data was loaded, the system did not reject that record; on the contrary, it loaded the values for China into
    the region that was above it and overwrote the values for the original one.
    Is this the normal behavior of Essbase? I was thinking that record should have been rejected.
    I know that when we do a lock and send via the add-in, if a member is not present in the outline it gives you a warning when you lock that sheet, and eventually, if you do not delete that member from the template, it will load data against the member above it.
    Is there a workaround for this problem, or is this just how it is?
    I am on Hyperion Planning / Essbase version 9.3.1.
    Thanks

    "Still thinking how these properties affect the way data is being loaded right now. I have gone through the DBAG and I don't see a reason why any of these properties might be affecting the load."
    Here's what I think is happening: China is not getting mapped, but the replacement for Belgium is occurring and resolves to a valid member name. Essbase sees China and doesn't recognize it (you knew all of this already).
    When the load occurs, Essbase says (okay, I am anthropomorphizing, but you get the idea) "Eh, I have no idea what China is, but 00-300-SE is the last good Country member I have, I will load there." Essbase is picking the last valid member and loading to that. I liken it to a lock and send from Excel with nested dimensions and non-repeating members. Essbase "looks up" a row, finds the valid member, and loads there.
    And yes, this is in the DBAG: http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/ddlload.htm#ddlload1034271
    Search for "Unknown Member Fields" -- it's all the way at the bottom of the above link.
    In fact, to save you the trip, per the DBAG:
    "If you are performing a data load and Essbase encounters an unknown member name, Essbase rejects the entire record. If there is a prior record with a member name for the missing member field, Essbase continues to the next record. If there is no prior record, the data load stops."
    Regards,
    Cameron Lackpour
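
    One defensive option on the loading side (a sketch only; the file names, delimiter and replacement map simply mirror the example in the question) is to pre-validate the flat file against the rules-file replacement list and quarantine any rows whose country has no mapping, so an unmapped member such as China never reaches the load.

```python
# Sketch: split out rows whose country has no "Replace With" mapping before the Essbase load.
# File names, delimiter and the mapping are placeholders mirroring the example above.
import csv

REPLACEMENTS = {
    "Canada": "00-200-SE",
    "Belgium": "00-300-SE",
    # ... the rest of the rules-file replacement list
}

with open("load_file.txt", newline="") as src, \
     open("load_file.clean.txt", "w", newline="") as good, \
     open("load_file.rejected.txt", "w", newline="") as bad:
    reader = csv.reader(src, delimiter="\t")
    good_writer = csv.writer(good, delimiter="\t")
    bad_writer = csv.writer(bad, delimiter="\t")
    for row in reader:
        country = row[0]                     # assumes the country is the first field
        if country in REPLACEMENTS:
            good_writer.writerow([REPLACEMENTS[country]] + row[1:])
        else:
            bad_writer.writerow(row)         # e.g. the China rows land here for review
```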

  • In which table will the initial balances be stored when loading via API?

    Hi all,
    I am doing an initial balances migration.
    In which table will the initial balances be stored when loading through an API (pay_balance_upload.process)?
    First I loaded data into the pay_balance_batch_headers and pay_balance_batch_lines tables.
    Then I called the API pay_balance_upload.process, and the data was reflected correctly in the front end.
    But we need to reconcile the loaded data; for that, can anyone please tell me where the loaded balances are stored?
    I have identified two tables, pay_assignment_latest_balances
    and pay_latest_balances.
    When I try to create a balance from the front end I can see the data in the pay_latest_balances table,
    but when I load via the API I am not able to find it in any of these tables.
    In addition, can anyone please tell me when the data will be populated in the pay_assignment_latest_balances and pay_latest_balances tables?
    Awaiting your help and quick response.
    Thanks and Regards
    Kishore

    You have followed the correct process. As Vignesh said, you can use pay_balance_pkg.get_value to make sure you have the correct values uploaded, but otherwise, if you are able to see the values in the front end, I wouldn't worry about the latest-balances tables. These tables, as the name indicates, hold only the latest values, and if a payroll (run after the initial balance upload) has been rolled back, they are deleted since they are no longer available.
    You can refer to the Metalink note below for a detailed explanation:
    The Secret Life of Initial Balance Upload with Screenshots Example [ID 60057.1]
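
    For the reconciliation itself, a small spot-check script can compare uploaded figures against pay_balance_pkg.get_value. This is a sketch only: the connection string and ID values are placeholders, and the two-argument overload shown (defined balance ID, assignment action ID) is an assumption; check the package specification in your instance for the exact signature you need.

```python
# Sketch: spot-check uploaded initial balances with pay_balance_pkg.get_value.
# Connection string, IDs and expected values are placeholders; the two-argument
# overload (defined_balance_id, assignment_action_id) is assumed here.
import cx_Oracle

conn = cx_Oracle.connect("apps_ro/secret@EBSPROD")
cursor = conn.cursor()

checks = [
    # (defined_balance_id, assignment_action_id, expected_value) -- placeholder data
    (12345, 67890, 1000.00),
]

for defined_balance_id, assignment_action_id, expected in checks:
    value = cursor.callfunc(
        "pay_balance_pkg.get_value",
        cx_Oracle.NUMBER,
        [defined_balance_id, assignment_action_id],
    )
    status = "OK" if value is not None and abs(value - expected) < 0.01 else "MISMATCH"
    print(defined_balance_id, assignment_action_id, value, status)
```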
