Data load into two data targets from one DataSource

Hi,
I want to load data into two data targets from one DataSource. I did a full load, then initialization and delta settings for that DataSource. But now I want to load data into one data target using a delta InfoPackage and into the second data target using a full-load InfoPackage. Can I schedule these two InfoPackages (one delta, one full load) for the same DataSource simultaneously?
Regards,
Pradip

Hi,
In BI you can achieve this: through the InfoPackage you load data only as far as the PSA, and from there it is loaded to the other data targets.
Create two DTPs, one with full load and another one with delta load. It will work.
On the R/3 side, as Sandeep has mentioned, there is a possibility that if you run a full load after a delta, your delta may get corrupted. I am not sure about that, but the possibility exists.
- Jaimin

Similar Messages

  • Showing report data columns into two rows instead of one row using SSRS 2012

    Hi All,
I have an SSRS report with 13 columns, and I want to show those 13 columns of data in two rows instead of one row. Please send me a step-by-step process.
    id    fullname    firstname    lastname    mailingaddress    billingaddress    city    state    zipcode
    1     ABC         A            C           xyz               xyz123            york    PA       12345
    homephone     cellphone      workphone      company
    1234567890    4567890123     0123456789     ABC,Inc

    id    fullname    firstname    lastname    mailingaddress    billingaddress    city         state    zipcode
    2     XYZ         X            Z           abc               abc123            lewisburg    PA       54321
    homephone     cellphone      workphone      company
    4567890123    0123456789     1234567890     xyz,Inc

    id    fullname    firstname    lastname    mailingaddress    billingaddress    city        state    zipcode
    3     BCD         B            D           123abc            xyz123            leesburg    PA       54123
    homephone     cellphone      workphone      company
    4567895623    0783456789     1238967890     Hey,Inc
    Thanks in advance,
    RH

    Do a right mouse click on the gray row marker at the left => "Insert Row" => "Inside Group - Above or Below"; then you get a second detail row.
    Olaf Helper

  • Oracle rejecting data loads into tables (data looks fine to me)

    Oracle 10g XE rejects the data loads into the two table structures below. I can provide the two .sql files containing the data if someone is willing to take a look.
    CREATE TABLE A2NoteHeader (
         noteHeaderID number(5,0) NOT NULL,
         ParentNote number(5,0) NULL,
         dateCreated timestamp NULL,
         noteTitle nvarchar2(255),
         deleted nvarchar2(1)
    );
    CREATE TABLE A2NoteLine (
         noteHeaderID number(5,0) NOT NULL,
         noteLineID number(5,0) NOT NULL,
         noteLine nvarchar2(255),
         deleted nvarchar2(1)
    );

    How are you loading the data - via the web-based UI, or using the SQL*Loader command line?
    Do you get any error messages ?
    How is the data delimited ?
    If using SQL Loader, what does the ctl file look like ?
    Can you narrow the problem down to one or two rows that produce the problem and paste the file content here ? - more eyeballs that way.
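    If you are using SQL*Loader, a minimal control-file sketch for A2NoteHeader might look like the following (the file names, delimiter, and timestamp mask here are assumptions; adjust them to match your data files):

    -- a2noteheader.ctl - sketch only; adjust names, delimiter and mask to your data
    load data
    infile 'a2noteheader.dat'
    badfile 'a2noteheader.bad'
    append
    into table A2NoteHeader
    fields terminated by "," optionally enclosed by '"'
    trailing nullcols
    ( noteHeaderID
    , ParentNote
    , dateCreated timestamp "YYYY-MM-DD HH24:MI:SS.FF"
    , noteTitle
    , deleted
    )

    Posting a couple of failing rows together with your actual control file would make the problem much easier to spot.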

  • I want to load data in parallel into two ODS objects

    I want to load data in parallel into two ODS objects.

    Hi,
    As the question is not entirely clear: you can do this by going to the Update tab in the schedule screen of the InfoPackage.
    Regards
    Suman

  • Data load into SAP ECC from Non SAP system

    Hi Experts,
    I am very new to BODS and I want to load historical data from a non-SAP source system into SAP R/3 tables like VBAK and VBAP using BODS. Can you please provide steps/documents or guidelines on how to achieve this?
    Regards,
    Monil

    Hi
    In order to load into SAP you have the following options
    1. Use IDocs. There are several standard IDocs in ECC for specific objects (MATMAS for materials, DEBMAS for customers, etc.). You can generate and send IDocs as messages to the SAP target using BODS.
    2. Use LSMW programs to load into the SAP target. These programs will require input files in specific layouts, generated using BODS.
    3. Direct input - the direct input method is to write ABAP programs targeting specific tables. This approach is very complex, and hence a lot of thought needs to be applied.
    The OSS Notes supplied in previous messages are all excellent guidance to steer you in the right direction on the choice of load, etc.
    However, the data load into SAP needs to be object-specific. Merely targeting the sales tables will not help, as the sales document data held in the VBAK and VBAP tables you mentioned relates to articles: these tables hold sales document data for already created articles. So if you want to target these tables specifically, you may need to prepare an LSMW program for the purpose.
    To answer your question on whether it is possible to load objects like materials, customers, vendors, etc. using BODS: yes, you can.
    Below is a standard list of IDocs that you can use for this purpose to load into SAP ECC system from a non SAP system.
    Customer Master - DEBMAS
    Article Master - ARTMAS
    Material Master - MATMAS
    Vendor Master - CREMAS
    Purchase Info Records (PIR) - INFREC
    The list is endless.
    In order to achieve this, you will need the functional design consultants to provide ETL mappings from the legacy data to the IDoc target schema and fields (better to have the SAP technical table names and fields too). You should then prepare the data, putting it through the standard check-table validations for each object, along with any business-specific conversion rules and validations. Having prepared this data, you can either generate flat-file output for loading into SAP using LSMW programs, or generate IDoc messages to the target SAP system.
    If you are going to post IDocs directly into the SAP target using BODS, you will need to create a partner profile for BODS to send IDocs and define the IDocs you need as inbound IDocs. There are a few more settings, like RFC connectivity, authorizations, etc., needed for BODS to successfully send IDocs into the SAP target.
    Do let me know if you need more info on any specific queries or issues you may encounter.
    kind regards
    Raghu

  • Step by step Data loading to BI 7.0 from SAP R/3 not from flat files

    Dear Guru's
    As I am new to BI 7.0, I need to extract data from R/3 to BI. Could anyone help me in this regard?
    Thanks
    C.S.Ramesh

    If you are still not clear, go through this:
    Hi,
    Check the following links.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/84bf4d68-0601-0010-13b5-b062adbb3e33
    check the following
    http://help.sap.com/bp_biv235/BI_EN/html/bw.htm
    business content
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/g-i/how%20to%20co-pa%20extraction%203.0x
    https://websmp203.sap-ag.de/co
    http://help.sap.com/saphelp_nw04/helpdata/en/37/5fb13cd0500255e10000000a114084/frameset.htm
    (navigate with expand left nodes)
    also co-pa
    http://help.sap.com/saphelp_nw04/helpdata/en/53/c1143c26b8bc00e10000000a114084/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/fb07ab90-0201-0010-c489-d527d39cc0c6
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1910ab90-0201-0010-eea3-c4ac84080806
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ff61152b-0301-0010-849f-839fec3771f3
    LO Extraction
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Check these links:
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    /people/sap.user72/blog/2005/02/14/logistic-cockpit--when-you-need-more--first-option-enhance-it
    /people/sap.user72/blog/2005/04/19/logistic-cockpit-a-new-deal-overshadowed-by-the-old-fashioned-lis
    Re: LO-Cockpit  V1 and V2 update
    Also Refer this link:
    http://www.sap-img.com/business/lo-cockpit-step-by-step.htm
    FI-CO 'Data Extraction -Line Item Level-FI-CO
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/a7f2f294-0501-0010-11bb-80e0d67c3e4a
    FI-GL
    http://help.sap.com/saphelp_nw04/helpdata/en/c9/fe943b2bcbd11ee10000000a114084/frameset.htm
    http://help.sap.com/saphelp_470/helpdata/en/e1/8e51341a06084de10000009b38f83b/frameset.htm
    http://www.sapgenie.com/sapfunc/fi.htm
    FI-SL
    http://help.sap.com/saphelp_nw2004s/helpdata/en/28/5ccfbb45b01140a3b59298c267604f/frameset.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/41/65be27836d300ae10000000a114b54/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/ee/cd143c5db89b00e10000000a114084/frameset.htm
    How to do basic LO extraction for SAP-R3-BW
    1. Go to transaction RSA3 and see if any data is available for your DataSource. If data is there in RSA3, go to transaction LBWG (Delete Setup Data) and delete the data by entering the application name.
    2. Go to transaction SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling the Setup Table --> Application-Specific Setup of Statistical Data --> perform setup (relevant application).
    3. In OLI*** (for example OLI7BW for the statistical setup of old documents: orders), give the name of the run and execute. Now all the available records from R/3 will be loaded into the setup tables.
    4. Go to transaction RSA3 and check the data.
    5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update.
    6. Go to the BW system and create an InfoPackage; under the Update tab select 'Initialize delta process' and schedule the package. Now all the data available in the setup tables is loaded into the data target.
    7. Now, for the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to direct/queued delta. By doing this, records bypass SM13 and go directly to RSA7. Go to transaction RSA7; there you can see a green light. Once new records are added, you can see them in RSA7 immediately.
    8. Go to the BW system and create a new InfoPackage for delta loads. Double-click the new InfoPackage; under the Update tab you can see the 'Delta update' radio button.
    9. Now you can go to your data target and see the delta records.
    Find your scenario and determine which DataSources you need on the R/3 side, and ensure they are active as well:
    http://help.sap.com/saphelp_nw04/helpdata/en/37/5fb13cd0500255e10000000a114084/frameset.htm
    Find your scenario -> data sources -> go to R/3 -> SBIW and activate the required DataSources.
    Replicate the DataSources in BW:
    RSA1 -> Source Systems -> right-click on your source system -> Replicate.
    Then activate your Business Content:
    service.sap.com/bi -> BI Business Content -> General Information -> SAP BW Business Content - Activation
    and execute the InfoPackage (it should be delivered with Business Content).
    Useful links:
    http://help.sap.com/saphelp_nw04/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm
    Please check these cookbooks for step-by-step guidance:
    http://www.erpgenie.com/sapgenie/docs/MySAP%20BW%20Cookbook%20Vol%201.pdf
    http://www.erpgenie.com/sapgenie/docs/MySAP%20BW%20Cookbook%20Vol%202.pdf
    Step by step Generic extraction:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d3219af2-0c01-0010-71ac-dbb4356cf4bf
    How To doc link
    https://www.sdn.sap.com/irj/sdn/howtoguides
    Hope this helps

  • Reg data loading into essbase using text files

    Can we load data in parallel from two files into the same cube using two different rules files, or do we have to load one file at a time?
    Could someone clarify this?

    I do not believe that by selecting two data files and two load rules in AAS you are getting parallel data loading. If you look at the log, you will find them to be sequential. For ASO cubes, AAS loads the data into a buffer and then applies it. The only real parallel data loading is using multiple threads for one file. Other than that, it is sequential.

  • Adding leading zeros before data loaded into DSO

    Hi
    In the PROD_ID below, some IDs are missing their leading zeros before the data is loaded into BI from SRM. The data type is character. If leading zeros are missing, DSO activation fails, and I have to add them manually in the PSA table. I want to add the leading zeros, if they're missing, before the data is loaded into the DSO. The total character length is 40, so e.g. if the value is 1502 there should be 36 zeros before it, and if it is 265721 there should be 34 zeros. Only two lengths occur (4 or 6 characters), so 34 or 36 zeros are always needed in front of them when zeros are missing.
    Can we use the CONVERSION_EXIT_ALPHA_INPUT function module? As this is a character field, I'm not sure how to use it in that case. Do I need to convert it to an integer first?
    Can someone please give me sample code? We're using the BW 3.5 data flow to load data into the DSO. Please also say where the code needs to go, in the rule type or in the start routine.

    Hi,
    Can you check at the InfoObject level which conversion routine it uses?
    Use transaction RSD1, enter your InfoObject and display it.
    At the DataSource level you can also see the external/internal format that is maintained.
    If your InfoObject uses the ALPHA conversion routine, it will get the leading zeros automatically.
    Also check in RSA3 how the data comes from the source.
    If you are receiving this issue for some records only, then you need to check those records.
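    For the sample code you asked for: below is a minimal sketch of a BW 3.5 transfer-rule routine body (rule type 'routine' on PROD_ID), assuming ALPHA-style padding is what you want; the source field name /BIC/PROD_ID is a placeholder for your actual field. No conversion to integer is needed - CONVERSION_EXIT_ALPHA_INPUT right-aligns a purely numeric character value and pads it with zeros to the length of the output field:

    * Pad PROD_ID with leading zeros up to 40 characters.
    * TRAN_STRUCTURE-/BIC/PROD_ID is a placeholder for the real source field.
      DATA: l_prod_id(40) TYPE c.
      l_prod_id = TRAN_STRUCTURE-/BIC/PROD_ID.
    * Numeric values are right-aligned and zero-padded to the
    * length of the output field (here 40 characters).
      CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
        EXPORTING
          input  = l_prod_id
        IMPORTING
          output = l_prod_id.
      RESULT     = l_prod_id.
      RETURNCODE = 0.

    If the InfoObject itself is set to use the ALPHA conversion routine, maintaining that setting is usually preferable to custom routine code.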
    Thanks

  • Sql Loader loading separate date parts into a date

    Hello,
    I have a question on SQL*Loader.
    Suppose I have a flat file with a format:
    "John Jackson", 2007, 12, 31, 23, 59, 01
    "William Smith", 2007, 12, 31, 23, 59, 02
    "Tracy vanDijk", 2007, 12, 31, 23, 59, 03
    And I'd like to put it in a table TEST with (name varchar(20), mydate date).
    The problem is: how do I create a control file for SQL*Loader that converts the separate date elements into a date field?
    load data
    infile 'mydata.csv'
    badfile 'badfile.log'
    discardfile 'discardfile.log'
    append
    into table TEST
    fields terminated by "," optionally enclosed by '"'          
    ( name, mydate ????)
    I'd like to put it into a date field in one step, without using separate tables, stored procedures, triggers and such. I think it must be possible, but I wasn't able to find a working example thus far.
    Hope you can help me out.

    A colleague came up with the BOUNDFILLER option. Searching the forum again, I put together the solution, which I'd like to share with you.
    SQL> create table TEST1 (naam varchar(30), meetdatum date);
    Table created.
    mydata.csv:
    "Scott Tiger", 2006, 02, 18, 23, 59, 4
    "Frank Naude", 2007, 12, 31, 18, 00, 59
    "Fou TeDatum", 2007, 02, 31, 01, 10, 5
    "Johnny Smith", 2007, 1, 1, 12, 00 , 00
    laden.ctl:
    load data
    infile 'mydata.csv'
    badfile 'badfile.log'
    discardfile 'discardfile.log'
    into table test1
    fields terminated by "," optionally enclosed by '"'     
    trailing nullcols     
    ( naam
    ,b_yyyy boundfiller char
    ,b_mm boundfiller char
    ,b_dd boundfiller char
    ,b_hh24 boundfiller char
    ,b_mi boundfiller char
    ,b_ss boundfiller char
    ,meetdatum "to_date(:b_yyyy || :b_mm || :b_dd || :b_hh24 || :b_mi || :b_ss , 'YYYYMMDDHH24MISS')"
    )
    SQL> alter session set nls_date_format = 'YYYY-MM-DD:HH24:MI:SS' ;
    SQL> select * from test1;
    NAAM MEETDATUM
    Scott Tiger 2006-02-18:23:59:04
    Frank Naude 2007-12-31:18:00:59
    Johnny Smith 2007-11-12:00:00:00
    The wrong date, Feb 31st, is spooled to the badfile.

  • How can I add dimensions and load data into Planning applications?

    Please let me know how I can add the dimensions and load the data into a Planning application without doing it manually.

    You can use tools like ODI, DIM or HAL to load metadata and data into Planning applications.
    The data load can be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through any of the above-mentioned tools; there are also many other ways to achieve the same.
    - Krish

  • Aggregating data loaded into different hierarchy levels

    I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE, dimensioned by a time dimension (parent-child type).
    I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
    When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
    DEFINE all_but_q4 VALUESET time
    LIMIT all_but_q4 TO ALL
    LIMIT all_but_q4 REMOVE 'Q4'
    Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement:
    RELATION time.r PRECOMPUTE (all_but_q4)
    How do I do this for more than one dimension?
    Below is my case:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005     NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 4,00 ---> here it's right!! but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00 ---> here it must be 30,00, not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
    There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
    1. Data is loaded symmetrically at uniform mixed levels. An example would be loading data at "quarter" in historical years but at "month" in the current year; it does /not/ include data loaded at both quarter and month within the same calendar period.
    = solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
    2. Data is loaded at both a detail level and its ancestor, as in your example case.
    = the AGGREGATE command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each added as one of the children of its aggregate node. This enables repeatable calculation as well as auditability of the resulting value.
    Also note the difference in behavior between the AGGREGATE command and the AGGREGATE function. In your example the AGGREGATE function looks at '2005', finds a value and returns it, for a result of 10; the AGGREGATE command would recalculate based on January and February, for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level that depends on another dimension; for example, product data loaded at 'UPC' in EMEA but at 'BRAND' in APAC.
    = this can currently only be solved by issuing multiple AGGREGATE commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = often requires the use of ALLOCATE to push the data down to the leaves so that the aggregate values are calculated correctly during aggregation.
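    To make the 'self node' approach from point 2 concrete, here is a hedged OLAP DML sketch using the objects from your example (it assumes the hierarchy has already been extended with the 2005_SELF member as shown above):

    " Move the value loaded directly at 2005 into its SELF child,
    " then clear 2005 so AGGREGATE can recompute it from all children.
    PRUEBA2_IMPORTE_STORED(T_TIME '2005_SELF') = PRUEBA2_IMPORTE_STORED(T_TIME '2005')
    PRUEBA2_IMPORTE_STORED(T_TIME '2005') = NA
    AGGREGATE PRUEBA2_IMPORTE_STORED USING OBJ262568349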

  • How to delete the data loaded into MySQL target table using Scripts

    Hi Experts
    I created a job with a validation transform: the data that passes validation is loaded into the Pass table and the data that fails is loaded into the Fail table.
    My requirement is that if any data was loaded into the Fail table, then I have to delete the data loaded into the Pass table using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the SQL query execution raises an exception for this query.
    How can I delete the data loaded into the MySQL target table using scripts?
    Please guide me on this error.
    Thanks in Advance
    PrasannaKumar

    Hi Dirk Venken,
    I got the solution; the mistake I made was that the query was not correct for MySQL.
    Working query:
    sql('MySQL', 'truncate world.customer_salesfact_details')
    Failing query:
    sql('MySQL', 'delete table world.customer_salesfact_details')
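    For reference: the second statement fails because SQL's DELETE takes FROM, not TABLE. Assuming the same datastore and table, the row-level equivalent would be
    sql('MySQL', 'delete from world.customer_salesfact_details');
    TRUNCATE is faster, but in MySQL it commits implicitly, while DELETE FROM removes rows one at a time and can be rolled back inside a transaction (on a transactional engine such as InnoDB).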
    Thanks for your concern
    PrasannaKumar

  • How to make data loaded into cube NOT ready for reporting

    Hi Gurus: Is there a way by which data loaded into a cube can be made NOT available for reporting?
    Please suggest.
    Thanks

    See, by default a request that has been loaded into a cube is available for reporting. But if you have an aggregate, the system needs this new request to be rolled up into the aggregate as well before it becomes available for reporting. The reason: queries are written against the cube, not against the aggregate, so you only know at runtime whether a query will hit a particular aggregate. This means that whether a query gets its data from the aggregate or from the cube, it should ultimately get the same data in both cases. Now, if a request is added to the cube but not to the aggregate, these two objects will contain different data. The system takes the safer route of not making the 'unrolled-up' data visible at all, rather than presenting inconsistent data.
    Hope this helps...

  • AWM Newbie Question: How to filter data loaded into cubes/dimensions?

    Hi,
    I am trying to filter the amount of data loaded into my dimensions in AWM (e.g., I only want to load 1-2 years' worth of data for development purposes). I can't seem to find a place in AWM where you can specify a WHERE clause... is there something else I must do to filter the data?
    Thanks

    Hi there,
    Which release of Oracle OLAP are you using? 10g? 11g?
    You can use database views to filter your dimension and cube data and then map these views in AWM.
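    For illustration, a minimal sketch of such a filtering view (the table name, column, and year range below are hypothetical); you would then map the view instead of the base table in AWM:

    -- Hypothetical filtering view: restrict the dimension source
    -- (and, via the same technique, the cube's fact source) to two years.
    CREATE OR REPLACE VIEW time_dim_dev_v AS
    SELECT *
    FROM time_dim
    WHERE calendar_year IN (2007, 2008);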
    Thanks,
    Stuart Bunby
    OLAP Blog: http://oracleOLAP.blogspot.com
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • Simple basic question on attribute data load for Master Data InfoObject....

    Hi
    I am a bit confused and wanted some clarification on the following:
    1) For loading attribute data to a master data InfoObject, we load it through a DataSource of type Master Data Attributes.
    But what I did: I created a DataSource of type Transaction Data and, using the same Excel file, I was able to create a transformation to the master data attributes and load the data.
    This suggests I can use a DataSource of type either Master Data Attributes or Transaction Data while loading into master data attributes. IS THIS TRUE?
    2) I have two master data InfoObjects (Test1 and Test2). I made Test2 an attribute of Test1.
    My question is: while loading master data for Test1, will it also load the data into Test2 if it doesn't find the respective data in Test2?
    (I did all the transformations as needed.)
    Waiting for your reply...
    Thanks...

    Hi Harpal,
    1. Did you create a transformation between an attribute DataSource (from a source system other than flat file) and a DSO? I think the PC File source system is flexible in this respect.
    2. While loading Test1, the InfoObject Test2 will get loaded only if it is a navigational attribute (you can see this by referring to the SID table).
    Please also note that it does not load Test2's own attributes.
    regards,
    Rakesh.
