Selective data load and transformations

Hi,
Can you please clarify this for me?
1. Selective data load and transformations can be done in:
    A.     Data package
    B.     Source system
    C.     Routine
    D.     Transformation Library-formulas
    E.     BI7 rule details
    F.     Anywhere else?
If the above is correct, what is the order of these options, performance-wise?
2. Can anyone tell me why not all the fields appear in the data package's data selection tab, even though many are included in the DataSource and the data target?
Thanks in advance
Suneth

Hi Wijey,
1. If you are talking about selective data load, you can write an ABAP routine in the InfoPackage for the field on which you want to select. The other way is to write a start routine in the transformation and delete all the records you do not want; with this second method you extract all the data but delete the unwanted records, so that you process only what is required (a minimal sketch follows this reply). Performance-wise, you need to observe: if the selection logic is complicated and takes a lot of time, the second option is better. Try both and decide for yourself which one wins.
2. Only the fields that are marked as available for selection in the DataSource are available as selections in the data package. That is how the system works.
Thanks and Regards
Subray Hegde
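
As an illustration of the second option, here is a minimal sketch of a BI 7 transformation start routine that drops unwanted records before further processing. The field DOCTYPE and the filter value 'A' are hypothetical placeholders for your own field and selection logic.

    * Start routine sketch: keep only document type 'A' so that the
    * rest of the transformation processes just the required records.
    * DOCTYPE and 'A' are hypothetical; substitute your own logic.
    METHOD start_routine.
      DELETE SOURCE_PACKAGE WHERE doctype <> 'A'.
    ENDMETHOD.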

Similar Messages

  • Selective data load using DTP

    Hi,
    We have created a data flow from one cube to another cube using transformations. Now we would like to do a selective data load from the source cube to the target cube. The problem is that in the DTP we are not able to enter the selected weeks in the filter area, because filter conditions can only be entered in change mode, and in the production system we can't switch the DTP to change mode. So we are stuck there. Can any one of you tell me how to do a selective data load in this scenario?
    Thanks in advance

    Hi,
    As a try, create a new DTP and try to get into change mode; it might be allowed that way.
    Otherwise, you can go the way Manisha explained in the previous post:
    do the load and then do a selective deletion. You can do the selective deletion using a program (a sketch follows this reply).
    Cheers,
    Srinath.
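
    For the selective deletion by program, a rough sketch is below. It assumes the standard function module RSDRD_SEL_DELETION and its usual parameters; the cube name ZSALES_C1 and the week value are hypothetical, and the exact interface should be verified on your release.

        * Sketch: selective deletion of one calendar week from a cube.
        * ZSALES_C1 and the week '200542' are hypothetical placeholders.
        DATA: lt_msg   TYPE rs_t_msg,
              ls_sel   TYPE rsdrd_sx_sel,
              ls_range TYPE rsdrd_s_range,
              lt_sel   TYPE rsdrd_thx_sel.

        ls_range-sign   = 'I'.
        ls_range-option = 'EQ'.
        ls_range-low    = '200542'.
        APPEND ls_range TO ls_sel-t_range.
        ls_sel-iobjnm = '0CALWEEK'.
        INSERT ls_sel INTO TABLE lt_sel.

        CALL FUNCTION 'RSDRD_SEL_DELETION'
          EXPORTING
            i_datatarget = 'ZSALES_C1'
            i_thx_sel    = lt_sel
            i_mode       = 'C'       " assumed: delete without dialog
          CHANGING
            c_t_msg      = lt_msg.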

  • Help with the XSLT Data Mapper and Transformations

    Hi guys,
    I need help with Oracle ESB (XSLT Data Mapper and transformations). I need to use the XSLT Data Mapper to transform a response XML into a request XML.
    Thanks
    Vyas

    The concept is the same as BPEL. Without going into too much detail, have a look at the following tutorial.
    http://download.oracle.com/docs/cd/B31017_01/integrate.1013/b28212/buildendtoend.htm#BEICEFJD
    Near the bottom they show how to do transformations.
    cheers
    James

  • Increase the number of background processes during data load, and how to bypass the DTP in a process chain

    Hello  All,
    We want to improve the performance of our loads. Currently we load the data from an external database through a DB link; just to mention, we are on a BI 7 system. We are bypassing the PSA to load the data as quickly as possible. Unfortunately we cannot use the PSA, because load times are higher with it, so we access views on the external database directly. The external database is also indexed as per our requirements.
    Currently our DTP is set to run with 10 parallel processes (in the DTP settings for batch processing, Batch Manager, with job class A). Even though it is set to 10, we can see the loads running on only 3 or 4 parallel background processes, and we are not sure why. Does anyone know why it behaves like that and how to increase them?
    I also want to split the load into three DTPs with different selections, all three loading into the same InfoProvider in parallel. We have a routine in the selection that looks up a table to get the respective selection conditions, and all three DTPs will kick off in parallel as part of the process chain.
    But in some cases we only get data for one or two of the DTPs (depending on the selection conditions). In this case, is there any way in a routine or in the process chain to say that if there is no selection for a DTP, that DTP should be skipped or set to successful, so that the process chain continues?
    Really appreciate your help.

    Hi
    Sounds like a nice problem...
    Here is a response to your questions.
    Before I start, I just want to mention that I do not understand how you are bypassing the PSA if you are using a DTP. Be that as it may, I will respond regardless.
    When looking at performance, you need to identify where your problem is.
    First, execute your view directly on the database; ask the DBA if you do not have access. If possible, perform a database explain on the view (this can also be done from within SAP, I think). This step is required to ensure that the view is not the cause of your performance problem. If it is, you need to take steps to resolve that first.
    If the view performs well, consider the following SAP BI ETL design changes:
    1. Are you loading deltas or full loads? When you have performance problems, the first thing to consider is making use of the delta queue (or changing the extraction to send only deltas to BI).
    2. Drop indexes before the load and re-create them after the load.
    3. Make use of the BI 7.0 write-optimized DSO. This allows for much faster loads.
    4. Check whether you do ABAP lookups during the load. If you do, consider loading the DSO that you select from into memory, and change the lookup to read the in-memory table instead. This saves tremendous time in terms of DB I/O (see the sketch after this reply).
    5. This has cost implications, but the BI Accelerator will allow for much faster loads.
    Good luck!
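
    To illustrate point 4, here is a minimal start routine sketch that buffers the lookup DSO in a hashed internal table once per data package, instead of issuing one SELECT per record. The active table /BIC/AZSALES00 and the fields DOCNR and PRICE are hypothetical placeholders.

        * Buffer the lookup DSO in memory once, then read it per record.
        * /BIC/AZSALES00, DOCNR and PRICE are hypothetical names; DOCNR
        * is assumed to be the key of the lookup DSO.
        TYPES: BEGIN OF ty_lookup,
                 docnr TYPE char10,
                 price TYPE p LENGTH 9 DECIMALS 2,
               END OF ty_lookup.
        DATA: lt_lookup TYPE HASHED TABLE OF ty_lookup
                        WITH UNIQUE KEY docnr,
              ls_lookup TYPE ty_lookup.
        FIELD-SYMBOLS: <ls_src> LIKE LINE OF SOURCE_PACKAGE.

        * One database hit for the whole package (in real code, restrict
        * this with a WHERE clause or FOR ALL ENTRIES).
        SELECT docnr price FROM /bic/azsales00 INTO TABLE lt_lookup.

        * The per-record lookup now happens in memory, not on the database.
        LOOP AT SOURCE_PACKAGE ASSIGNING <ls_src>.
          READ TABLE lt_lookup INTO ls_lookup
               WITH TABLE KEY docnr = <ls_src>-docnr.
          IF sy-subrc = 0.
            <ls_src>-price = ls_lookup-price.
          ENDIF.
        ENDLOOP.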

  • Flash XML data loading and unloading specs

    Hi, I am trying to get specification information that I cannot find anywhere else.
    I am working on a large Flash project, and I would like to load XML data into the same SWF object/movieclip repeatedly.
    As I do not want the previously loaded items to unload, I need to know whether doing this will unload the items from the SWF, or just keep them in the library so they can be redisplayed without reloading.
    I cannot find any supporting documentation either way that tells me whether loading new content into a clip (I am aware levels overwrite) will or will not unload the existing content.
    Thanks in advance.
    mk

    This is awful for me: I can't even get the clip to duplicate, and I thought this would be the simplest solution to keeping everything cached for one page before and one page after the current one in the project.
    I have used a simpler clip to test the code and see if I am insane.
    // AS2: duplicate the clip _root.circle into a new clip "prv" at depth 5
    duplicateMovieClip(_root.circle, "prv", 5);
    prv._x = 300;          // position the duplicate on stage
    prv._y = 300;
    prv._visible = true;   // make sure it is visible
    prv.startDrag();       // attach it to the mouse to prove it exists
    This ALWAYS works when I use the _root.circle clip, a simple green circle.
    BUT when I change it to my main movie clip (which is loaded AND on screen), it just doesn't duplicate at all! I've even triggered it to go play frame 2 JUST IN CASE, and set visibility to true JUST IN CASE.
    I.e., all I do is change _root.circle to _root.cur, and... nada.
    AND _root.cur IS DEFINITELY on the screen, and all the XML components have been loaded into it (it is a slide with a dynamic picture and dynamic type, and it 100% works).
    Has anyone had this insanity happen before?
    Is this an error where Flash cannot attach or duplicate a clip that has dynamic contents?

  • Select Data Source and Microsoft Security Issue

    Hi,
    Tool- Xcelsius 2008, QAaWS
    When I open the dashboard, it gives the message "Microsoft Office has identified a potential security concern. Data connections have been blocked. If you choose to enable data connections, your computer may no longer be secure. Do not enable this content unless you trust the source of this file." with <Enable> and <Disable> buttons.
    If Enable is chosen, it leads to the "Select Data Source" screen and asks for DSN details.
    It shows the same messages every time the dashboard is opened.
    Please, help if anyone knows or faced this issue.
    Regards,
    Ashish

    Hi,
    This is a really old post.
    Could you please specify your exact workflow?
    Which connectors is your dashboard using?
    Also, which version, SP and patch of the Xcelsius client are you using?
    I.e., are you up to date with the latest compatibility updates?
    regards,
    H

  • Master Data cleansing and transformation from non-SAP source systems

    Hi all,
    Our client (media) wants to cleanse and transform their master data from a non-SAP source system to be uploaded into BW (no R/3 yet). If anybody has a document on this topic that I could use, I would appreciate it if you sent it to me.
    thanks.

    Hi,
    https://websmp203.sap-ag.de/~sapidb/011000358700001965262003
    https://websmp203.sap-ag.de/~sapidb/011000358700006591612001
    https://websmp203.sap-ag.de/~sapidb/011000358700001971392004
    https://websmp203.sap-ag.de/~form/sapnet?_SHORTKEY=01100035870000471477&_OBJECT=011000358700008927932002E
    /manfred

  • Optimizing data load and calculation

    Hi,
    I have a cube that takes more than 2 hours to load and more than 3 hours to calculate (at its fastest build). There are times when my cube loads and calculates for more than 8 hours. My calculation only uses CALC ALL. I am very new to Essbase and couldn't find a way to minimize the build time of my cube.
    Can anybody help? Here are some stats about my cube. I hope this helps.
    Dimension Name Type Declared Size Actual Size
    ===================================================================
    ALL_ACCOUNTS DENSE 7038 6141 Accounts <5> (Dynamic Calc)
    ALL_LEDGERS SPARSE 4 3 <1> (Label Only)
    ALL_YEARS SPARSE 3 1 <1> (Label Only)
    ALL_MONTHS SPARSE 22 22 Time <7> (Active Dynamic Time Series Members: Y-T-D, Q-T-D)
    ALL_FUNCTIONS SPARSE 55 54 <9>
    ALL_AFFILIATES SPARSE 715 696 <4>
    ALL_BUSINESS_UNITS SPARSE 452 440 <3>
    ALL_MCC SPARSE 1557 1536 <3>
    Any suggestions would be greatly appreciated.
    Thanks!
    Joe

    Joe,
    There are too many potential optimizations to list and not enough detail to make any one or two suggestions. I can see some potential areas for improvement, but your best bet is to bring in a knowledgeable consultant for a couple of days to review the cube and make changes. For example, at one client I made changes that brought a calculation down from 4+ hours to 5 minutes. It took changes to load rules, calc scripts and how they loaded their data, so it was not one thing but multiple changes.
    If you look at Jason's Hyperion blog, http://www.jasonwjones.com/?m=200908 , he describes taking a calculation down from 20 minutes to a few seconds. Again, not a single change, but a combination.

  • Selective data load to InfoCube

    Dear All,
    I am facing the following problem :
    I have created staging DSOs for the billing item (D1) and the order item (D2). I have also created an InfoCube (C1) which requires the combined order and billing data, so we have a direct transformation from the billing DSO (D1 --> C1), and in the transformation routines we look up the order item DSO (D2).
    All the deltas are running fine. But in today's delta a particular order, say 123, was not retrieved, while the corresponding billing document, say 456, was retrieved through the delta.
    So when the DTP ran for cube C1, it did not load that particular billing document (456) and the corresponding order details (123).
    I thought of loading this particular data by creating new Full DTP to Cube C1. Is this approach ok?
    Please help on the same.
    Regards,
    SS

    Hi,
    Yes, you can do a full load. Just make sure the selection condition in your DTP is EXACTLY THE SAME as in the selective deletion on C1.
    I'd also suggest putting a consolidation DSO (D3) in the position of C1; you can then always delta-update C1 from D3. In my company there are similar cases and we love the consolidation DSO.
    Regards,
    Frank

  • Data load and output problem

    Hi Experts,
    I am going through a peculiar problem. I have data flowing from R/3 to the master data InfoProvider 0REQUI. I have validated the data between R/3 and the master data InfoProvider, and it is good. This data is then loaded from 0REQUI to a staging write-optimized DSO; there are routines in the transformations for the DSO. When I check and compare the data, I find that out of 625 records in the source only 599 are available in the target, and of those 599 records, 17 are duplicates and 29 have not been populated from source to target.
    Any help to solve the issue will be highly appreciated and thanked with suitable points.
    Thanks and Regards
    SHL

    Thank you very much, Jen. Full points to you.
    There was nothing in the error stack; sy-subrc in the routine was causing the problem (the classic shape of this bug is sketched below). It has been rectified and the data is now loading fine in the development system.
    Now I am in another peculiar situation.
    The routines work well in the development system after debugging, but after being transported to the quality system for testing, they fail there, and I am facing the same old problem again. The transports were checked and done properly, and the ABAPer is satisfied with the transported code. If you can, please guide me. I am closing this thread and opening a new one with the subject "In different behavior of Routines".
    Thank you once again, Jen. Full points assigned to you.
    Kind Regards
    SHL
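
    The thread does not show the routine itself, but a very common shape of this sy-subrc bug, which produces exactly this mix of duplicated and missing values, is a lookup whose work area is not cleared on a failed read. A minimal, hypothetical sketch (lt_orders, ls_order and the fields are placeholders):

        * After READ TABLE, sy-subrc must be checked immediately and the
        * work area cleared on failure; otherwise the previous record's
        * values leak into the current one (duplicates / wrong values).
        READ TABLE lt_orders INTO ls_order
             WITH KEY docnr = SOURCE_FIELDS-docnr.
        IF sy-subrc <> 0.
          CLEAR ls_order.   " without this, stale values persist
        ENDIF.
        RESULT = ls_order-status.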

  • Help to do data mining and transformation...

    I have a specific task to accomplish and I am wondering if Oracle Data Mining is the correct tool to use, and if not what possibly might be. Here is brief description:
    I have a table with about 500 million rows of data per day: transactional internet traffic data with about 20 columns/dimensions. The requirement is to transform this flat data into a new table that contains (as one column each) every unique variation of the dimension values recorded.
    So, for example, if we have 3 dimensions, say gender, age and zip code, we would determine each unique combination of those found in the actual data, write out x number of columns to identify them, and store a count value for each one. The count just tells us how many of that combination were found in the data, and the end result is an aggregated table for fast querying on all observed dimensions.
    For performance reasons we want to pass through the data only once.
    We tried cubes, but this takes too long (because it also tries to build out all the non-observed combinations), and we know we could try a code approach but fear this may take too long as well. The problem is mostly a performance one, of course, with that many rows and possible combinations to consider.
    Any ideas?
    Thanks in advance.

    After doing some research I realize what I need is a cube, but one that does not contain every single dimension combination, only those that actually occur (to speed up creation time and reduce storage space). Is this something Oracle supports? Anyone?

  • Essbase Studio data load and other

    Hi There,
    I got my cube built from my data mart, with one fact table and a bunch of dimensions, and deployed it successfully to the Essbase server. The questions I have are:
    1. I have another fact table with the same dimensions, and I need to load its data into the cube I built. How do I load the data from Essbase Studio; should I add the new fact table to my schemas? I know I can load the data through EAS, but that seems to defeat the purpose of Essbase Studio.
    2. Is there any way I can specify, from Essbase Studio, certain members (for example at account level) as TB Last or Average? It seems you can only apply TB Last etc. to a whole level from the Essbase Model Properties.
    Thanks

    Donny wrote:
    "1. I have another fact table with the same dimensions, and I need to load its data into the cube I built. How do I load the data from Essbase Studio; should I add the new fact table to my schemas? I know I can load the data through EAS, but that seems to defeat the purpose of Essbase Studio."
    Add the second fact table to your minischema with the proper joins.
    "2. Is there any way I can specify, from Essbase Studio, certain members (for example at account level) as TB Last or Average? It seems you can only apply TB Last etc. to a whole level from the Essbase Model Properties."
    You should have columns in your account table for the property values (the time balance values are F, L and A for first, last and average respectively). Then, in the Essbase properties, you specify that an external source is used and give it the proper column name. The same goes for the skip values, variance reporting, consolidation, etc.

  • Data load and calc script

    Hi friends,
    In my cube I have one dimension where:
    1) all level 0 members have the consolidation operator ~
    2) level 1 members also have the consolidation operator ~
    3) but level 3 members have the consolidation operator +
    I am using the following calc script (Product is another dimension in the cube):
    SET UPDATECALC OFF;
    SET AGGMISSG ON;
    CALC DIM (Product);
    My questions are:
    1) If I am using SET AGGMISSG ON, will it affect other dimensions which are not calculated in that calc script?
    2) I am loading data at both the upper and lower levels of the led dimension, so does the above calc script have any impact on the led dimension?

    Hi,
    Have you tried it? What happened?
    My opinion is that AGGMISSG doesn't have anything in common with the (~) consolidation. When (~) is used, the first child value is set at the upper level. So no matter whether the consolidation is (~) or (+), the upper value will be overwritten if AGGMISSG is used.
    Also bear in mind that if, in the same dimension, a combination is used for a child and an upper-level parent, the parent value will also be overwritten if AGGMISSG is used.
    I suggest you create a simple sample with only 2 dimensions and try it out!
    By the way, I always use, as the first command in a calc script:
    SET UPDATECALC OFF;
    to turn off intelligent calculation.
    Hope this helps,
    Grofaty

  • Create Document from Data Load and Link to Transaction Record for Long Text

    Hi,
    I have a DBConnect Oracle datasource which contains a large text field.  I would like to build a process that will, as part of the load, create a text file from the content of this large field, upload the file into BW and create the document association with the transaction record.
    Is anyone aware of a HOW-TO to create the BW document entries and upload the files using ABAP?  I thought that I had seen a HOW-TO or instructions approx a year ago, but cannot locate them now.
    Thanks in advance,
    Mel W.

    Hi,
    I hope this is the how to document you were looking for:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/8046aa90-0201-0010-5e99-962948c83331
    -Vikram

  • Dynamic selection data load in DTP

    Hi all,
    I need to load data one calendar week at a time, from 2000 to 2012, through a DTP.
    I can write a routine in the DTP for the calendar week field,
    but can anyone guide me on how to run this DTP multiple times through a process chain?
    thank You.
    Regards
    Bala

    Hi,
    I think he wants to load calweek 01.2000 as a full load, then 02.2000, 03.2000, ...
    => you would have to create 53 * 12 = 636 DTPs.
    Other solution:
    - Create a Z-table in which you save the last loaded calweek.
    - Create a Z-program that triggers an event.
    - Insert a filter routine in your DTP: read the Z-table, add 1 week, and save the new week back to the Z-table (a sketch follows this reply).
    - Create a process chain; insert the DTP and the Z-program.
    => This way you have a loop: the process chain restarts itself.
    => Each run of the DTP loads the next week.
    => You need a break condition for this load; for example, produce a short dump when you reach 01.2013.
    Sven
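
    A minimal sketch of such a looping filter routine on the calendar week is below. The one-row control table ZCALWK_CTRL and its field LASTWEEK are hypothetical, and the handling of l_t_range follows the generated routine template; verify both in your system. The week arithmetic is simplified and ignores the year rollover at week 52/53, which real code must handle.

        * DTP filter routine sketch: read the last loaded week from a
        * Z-table, advance it by one, persist it, and use it as the
        * single-value selection for this DTP run.
        DATA: lv_week(6) TYPE n,        " format YYYYWW
              l_idx      LIKE sy-tabix.

        SELECT SINGLE lastweek FROM zcalwk_ctrl INTO lv_week.
        lv_week = lv_week + 1.          " simplified: no year rollover
        UPDATE zcalwk_ctrl SET lastweek = lv_week.

        * Restrict the selection to exactly this week.
        READ TABLE l_t_range WITH KEY fieldname = 'CALWEEK'.
        l_idx = sy-tabix.
        l_t_range-fieldname = 'CALWEEK'.
        l_t_range-sign      = 'I'.
        l_t_range-option    = 'EQ'.
        l_t_range-low       = lv_week.
        IF sy-subrc = 0.
          MODIFY l_t_range INDEX l_idx.
        ELSE.
          APPEND l_t_range.
        ENDIF.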
