Best approach for Data Modelling.

Hello Experts
I am building a Customer Scorecard involving SD and Marketing in BI 7.0.
There are a couple of existing DSOs; some push their data into InfoCubes and some don't. All reporting happens from a MultiProvider sitting on top of these data targets.
The team has a preliminary design which says that additional DSOs should be created to extract data from the above-mentioned DSOs, based only on the objects needed for Customer Scorecard reporting.
This means I would be creating a couple of new DSOs as per the design currently in place.
When I suggested instead creating only a Customer Scorecard MultiProvider on top of the already existing data targets (avoiding the creation of additional DSOs, the hassle of loading and activating them, and then loading the data into InfoCubes) and building the BEx queries on top of that, the lead expressed concerns about the impact this could have on the existing data model and on the subsequent transports once the model is complete.
What is the best practice for handling a situation like this? I see three ways to go ahead:
1. Do as the lead said: create additional DSOs extracting data from the required existing DSOs, push this data into one InfoCube, create a MultiProvider on top of it (be aware that there is another, similar data model I need to create which will also be embedded into this MultiProvider), and create BEx reports from there.
2. Create only the InfoCubes, which will extract data from the already existing DSOs (avoiding the creation of additional DSOs), and then create a MultiProvider from which the BEx reports are built.
3. Create only a MultiProvider on top of all the required, already existing DSOs and InfoCubes (after checking whether reporting needs aggregated data or not) and create the BEx reports from there, avoiding the creation of additional DSOs and InfoCubes.
Note: we use Rev-Track to do the transports.
Which one do you think is the best way to go, and what could the implications be? Ultimately, the reporting is done in WAD.
Thanks for your time in advance.
Cheers,
Chandu

Hi,
Cases 1 and 2 are similar, but it purely depends on the user needs.
I think you already know the difference between a DSO and a cube:
DSO - holds detailed-level data.
Cube - holds aggregated data.
As per your needs, use only one target; there is no need to add a DSO -> cube flow on top of the existing flows.
You can decide which one you want to use, DSO or cube only.
Case 3: if your requirement can be met with the existing DSOs and you can manage to get the required output at reporting level, then you can go with it. But my guess is that the existing targets alone may not meet your needs.
About transports:
You can create one Rev-Track request and assign multiple transports to it.
You can add and release the transports one by one rather than all at once.
If you release them all at once you may get inconsistency issues and the transport requests won't be released.
Thanks

Similar Messages

  • Best approach for data cleansing

    Hi Experts,
    I have a database with more than 600 tables. Some of the tables are very big, ranging from 50 to 60 GB in production. The tables are related to each other using foreign keys (no ON DELETE CASCADE). I have been asked to remove data from all these tables where
    PCode=10. Most of the tables have multiple indexes on them. I know inner-join delete operations are going to be extremely resource-intensive, especially on our bigger tables. Guys, please let me know the best practical approach (as per industry
    standards) to remove this data from all the tables in the least possible time.
    Thanks in advance.
    Regards,
    Naveen J V

    Hi,
    This may be useful:
    http://stackoverflow.com/questions/159038/can-foreign-key-constraints-be-temporarily-disabled-using-t-sql
    -- For each table that has a PCode column: disable its FK constraints,
    -- delete the matching rows, then re-enable the constraints.
    -- Note: CHECK CONSTRAINT all re-enables without re-validating existing rows,
    -- and deleting with constraints disabled can leave orphaned child rows,
    -- so review the printed statements before uncommenting the EXEC.
    Declare @Sql NVarchar(Max), @Tables_Name NVarchar(100), @obj_id bigint
    DECLARE tables_c CURSOR FOR
    SELECT t.Name, t.Object_id FROM sys.Tables t
    INNER JOIN sys.columns c on t.object_id = c.object_id
    where c.name = 'PCode'
    OPEN tables_c
    FETCH NEXT FROM tables_c INTO @Tables_Name, @obj_id
    IF @@FETCH_STATUS <> 0
    PRINT ' <<None>>'
    WHILE @@FETCH_STATUS = 0
    BEGIN
    SET @Sql = 'ALTER TABLE ' + @Tables_Name + ' NOCHECK CONSTRAINT all' + CHAR(10)
    SET @Sql = @Sql + 'DELETE FROM ' + @Tables_Name + ' WHERE PCode = 10' + CHAR(10)
    SET @Sql = @Sql + 'ALTER TABLE ' + @Tables_Name + ' CHECK CONSTRAINT all' + CHAR(10)
    Print @Sql
    --EXEC (@Sql)
    FETCH NEXT FROM tables_c INTO @Tables_Name, @obj_id
    END
    CLOSE tables_c
    DEALLOCATE tables_c

  • SDO_PC, multiple SRIDs - best practise for data model?

    Hi,
    I'm using UTM, and I am getting data covering two zones.
    All my existing data is from zone A.
    Tables:
    pointcloud
    pointcloud_blk
    Now I'm getting data with very few points from zone A and most points from zone B. It was agreed that the data delivery will be in the SRID for zone B.
    So I tested whether this would work. I had two point clouds, one with SRID A and another with SRID B. As soon as I put the SRID B point cloud inside, I could NO LONGER QUERY the point cloud with SRID A.
    So it seems to be necessary to use at least another pointcloud_blk, e.g. pointcloud_blk_[srid].
    Question: does another pointcloud_blk for each SRID suffice, or do I also need a pointcloud table per SRID? The pointcloud table seems interesting only because of its EXTENT column, but on the other hand this could be queried by a function, since there are only 10 or so records (point clouds) inside.
    Please share your best practices: what works and what doesn't.

    It is necessary to have one pointcloud_blk table for each SRID since there is a spatial index on that table.
    As for the PointCloud table itself, it is up to you. You can have pointclouds with different SRIDs in that table.
    But if you want to create a spatial index on it, you have to use a function-based index so that the index sees one SRID for the table.
    Since this table usually does not have many rows, this should work fine with one table for different SRIDs.
    siva

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but we still have to maintain all the data in another schema. We would then need to update two schemas in a given session, because we maintain a separate schema for each user and another schema for all the data, and there may be update problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best option?
    Please give your valuable ideas.

    That is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.
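    For what it's worth, the usual way a Java application serves that many users against a single schema (option 2) is a shared connection pool rather than one database session per user. Below is a minimal sketch, assuming a modern pooling library (HikariCP) and a hypothetical CUSTOMERS table; the JDBC URL, credentials and pool size are placeholders, not values from this thread.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PooledAccessSketch {
        // One shared pool for the whole application: 1300 logical users
        // do not need 1300 physical database sessions.
        private static final HikariDataSource DATA_SOURCE = createDataSource();

        private static HikariDataSource createDataSource() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/APPDB"); // placeholder URL
            config.setUsername("app_user");                             // placeholder credentials
            config.setPassword("secret");
            config.setMaximumPoolSize(50); // far fewer physical connections than users
            return new HikariDataSource(config);
        }

        public static String loadCustomerName(long customerId) throws Exception {
            // CUSTOMERS is a hypothetical table in the single shared schema.
            try (Connection conn = DATA_SOURCE.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT name FROM customers WHERE id = ?")) {
                ps.setLong(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }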

  • Best approach for IDOC - JDBC scenario

    Hi,
    In my scenario I am creating a sales order (ORDERS04) in the R/3 system, and it needs to be replicated to a SQL Server system. I am sending the order to XI as an IDoc and want to use JDBC for sending the data to SQL Server. I need to insert data into two tables (header and details). Is this possible without BPM? Or what is the best approach for this?
    Thanks,
    Sri.

    Yes, this is possible without BPM.
    Just create the corresponding data type for the insertion.
    If the records to be inserted are different, then there will be two different data types (one for the header and one for the details).
    Do a multi-mapping, where your source is mapped into the header and details data types, and then send it using the JDBC receiver adapter.
    For the structure of your data type for the insertion, check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    To access any database from XI, you will have to install the corresponding JDBC driver on your XI server.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
    Regards,
    Bhavesh
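    For orientation, a rough sketch of how the receiver-side JDBC message could be structured with one statement block per table; the message type, table and field names below are made up for illustration (the help link above has the authoritative format):
    <OrderInsert_MT>
      <StatementHeader>
        <ORDER_HEADER action="INSERT">
          <table>ORDER_HEADER</table>
          <access>
            <ORDER_ID>4711</ORDER_ID>
            <CUSTOMER>1000</CUSTOMER>
          </access>
        </ORDER_HEADER>
      </StatementHeader>
      <StatementItem>
        <ORDER_ITEM action="INSERT">
          <table>ORDER_ITEM</table>
          <access>
            <ORDER_ID>4711</ORDER_ID>
            <ITEM_NO>10</ITEM_NO>
            <MATERIAL>MAT-01</MATERIAL>
          </access>
        </ORDER_ITEM>
      </StatementItem>
    </OrderInsert_MT>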

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I would like to know the best approaches to incorporate restart options in our process, i.e. to handle a failure of a mapping in a process flow.
    How do we recycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us build a restart process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How have other forum members handled such situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow? Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the option Repeat.
    On restart, will it run m1 again and then m2 and so on, or will it restart at row 1 of m2? You can specify the restart point. "At row 1 of m2" - I don't understand what you mean: all mappings run in set-based mode, so in case of error all table updates are rolled back
    (there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in a post-mapping - where you must carefully analyze the results of the error).
    What will happen if m3 fails? The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you reduce the recycling of failed rows to zero (0). These settings guarantee only two possible results of a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data? In my opinion, for large volumes, set-based mode is the preferred data processing mode.
    With this mode you get the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • Best approach for RFC call from Adapter module

    What is the best approach for making an RFC call from a receiver file adapter module?
    1. JCo
    2. Is it possible to make use of the Mapping Lookup API classes to achieve this, or do those run in the mapping runtime environment only?
    3. Any other way?
    Has anybody ever tried this? Any pointers?
    Regards,
    Amol

    Hi ,
    The JCo lookup is internally the same as a JCo call; the only difference is that you are not hardcoding the system-related data in the code, so it is easier to maintain during transports.
    The JCo lookup code is also more readable.
    Regards
    Vijaya
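    To make option 1 concrete, here is a minimal plain JCo 3.x sketch of what an RFC call from module code could look like; the destination name and function module are placeholders, and this does not show the lookup variant Vijaya mentions.
    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;

    public class RfcCallSketch {
        public static String readMaterialText(String materialId) throws JCoException {
            // "BACKEND_DEST" is a placeholder destination maintained outside the code,
            // so no system details are hardcoded here.
            JCoDestination destination = JCoDestinationManager.getDestination("BACKEND_DEST");
            // "Z_GET_MATERIAL_TEXT" is a hypothetical remote-enabled function module.
            JCoFunction function = destination.getRepository().getFunction("Z_GET_MATERIAL_TEXT");
            if (function == null) {
                throw new IllegalStateException("Function module not found in the backend");
            }
            function.getImportParameterList().setValue("MATERIAL_ID", materialId);
            function.execute(destination);
            return function.getExportParameterList().getString("MATERIAL_TEXT");
        }
    }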

  • Best Approach for Reporting on SAP HANA Views

    Hi,
    Kindly provide information regarding the best approach for reporting on HANA views for the architecture displayed below:
    We are on a lookout for information mainly around the following points:
    There are two reporting options which are known to us and listed below namely:
    Reporting on HANA views through SAP BW  (View > VirtualProvider > BEx > BI 4.1)
    Reporting on HANA views in ECC using BI 4.1 tools
    Which is the best option for reporting (please provide supporting reasons, i.e. advantages and limitations)?
    In case a better approach exists, please let us know.
    What is the best approach for reporting in a mixed scenario where data from BW and HANA views is to be used together?

    Hi Alston,
    To be honest, I did not understand the architecture that you described in your message.
    As far as I understood, you have one HANA instance with ERP and BW running on HANA - or there might be two HANA instances, with ERP and BW running independently.
    Anyway, if you have HANA you have many options for presenting data using analytic views. You also have BW on HANA as the EDW. So for both you can use BO, and Lumira as well, for presenting data.
    Check this document as well: http://scn.sap.com/docs/DOC-34403

  • Best approach for building dialogs based on Java Beans

    I have a large number of Java Beans with several properties each. These represent all the "data" in our system. We will now build a new GUI for the system and I intend to reuse the beans as far as possible. My idea is to automatically generate the configuration dialogs for each bean using the java.beans package.
    What is the best approach for achieving this? Should I use PropertyEditors, or should I make my own dialog generator using the Introspector class, or are there other suitable solutions?
    All suggestions and tips are very welcome.
    Thanks!
    Erik

    Definitely, it would be better for you to use a JTable. Why not try it?
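    Going back to the introspection idea from the question, here is a minimal sketch of enumerating a bean's properties with java.beans.Introspector; CustomerBean is a made-up stand-in for one of the existing data beans, and a real dialog generator would map each property type to an editor component instead of printing it.
    import java.beans.BeanInfo;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;

    public class BeanDialogSketch {
        // Hypothetical bean standing in for one of the system's data beans.
        public static class CustomerBean {
            private String name;
            private int creditLimit;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public int getCreditLimit() { return creditLimit; }
            public void setCreditLimit(int creditLimit) { this.creditLimit = creditLimit; }
        }

        public static void main(String[] args) throws Exception {
            BeanInfo info = Introspector.getBeanInfo(CustomerBean.class, Object.class);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                // Only read/write properties matter for an editable dialog;
                // a generator would pick a text field, spinner, etc. per property type.
                if (pd.getReadMethod() != null && pd.getWriteMethod() != null) {
                    System.out.println(pd.getName() + " : " + pd.getPropertyType().getSimpleName());
                }
            }
        }
    }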

  • Best approach for uploading document using custom web part-Client OM or REST API

    Hi,
    I am using my custom upload Visual Web Part for uploading documents into my document library with a lot of metadata.
    The columns include single line of text, drop-down list, lookup columns and managed metadata (taxonomy) columns.
    So I would like to know which is the best approach for uploading.
    Currently I am trying to use the traditional SSOM (server object model). I would like to know which is the best approach for uploading files into document libraries.
    I have hundreds of sub-sites with 30+ document libraries within those sub-sites. Currently it takes a few minutes to upload the files in my dev environment; I am just wondering what would happen when the number of sub-sites reaches hundreds.
    I am looking at this from the performance perspective.
    My thought process is:
    1) Implement Client OM
    2) REST API
    Has anyone tried these approaches before, and which approach provides better performance?
    If anyone has sample source code or links, please provide them.
    Also, are there any restrictions on the size of the uploaded file?
    Any suggestions are appreciated!

    Try below:
    http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
    http://stackoverflow.com/questions/9847935/upload-a-document-to-a-sharepoint-list-from-client-side-object-model
    http://www.codeproject.com/Articles/103503/How-to-upload-download-a-document-in-SharePoint
    // Client object model upload (assumes a using directive for Microsoft.SharePoint.Client;
    // braces added so the snippet reads as a complete method).
    public void UploadDocument(string siteURL, string documentListName,
        string documentListURL, string documentName, byte[] documentStream)
    {
        using (ClientContext clientContext = new ClientContext(siteURL))
        {
            // Get the document library
            List documentsList = clientContext.Web.Lists.GetByTitle(documentListName);
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content byte[], i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = siteURL + documentListURL + documentName;
            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            // Update the metadata for a field named "DocType"
            uploadFile.ListItemAllFields["DocType"] = "Favourites";
            uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }
    }
    If this helped you resolve your issue, please mark it Answered

  • Do u know the best approach with data....?

    I am considering the best approach for returning a result set from an EJB to my JSP page, but I don't know which approach is best. Please comment. (Since a ResultSet cannot be serialized, returning it directly is not an option.)
    Approach A - Make a custom class with get/set methods to represent each column value in the result set, and use that class in the JSP. However, I find this tedious because whenever I add to the SELECT statement, I have to add class variables too.
    Approach B - Manually copy the data from the result set into a Vector and return the Vector to the JSP.
    Approach C - Use RowSets instead and return the RowSet to the JSP.
    Many thanks to you all.

    Hello,
    Approach A is not recommended - you would have to leave the ResultSet open and so leave the connection to the database open.
    Approach B is better
    Approach C - well RowSets are a new thing in 1.4 which I have not tried yet. They look useful, but is your app running on 1.4?
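    As a rough illustration of Approach B (written against a current JDK, with a List standing in for the Vector mentioned above), the EJB would run the query, copy the rows into a serializable collection like this, close the statement and connection, and return the collection to the JSP:
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ResultSetCopySketch {
        // Copies every row into a list of column-name -> value maps, which is
        // serializable and safe to use after the connection has been closed.
        public static List<Map<String, Object>> copy(ResultSet rs) throws Exception {
            ResultSetMetaData meta = rs.getMetaData();
            int columnCount = meta.getColumnCount();
            List<Map<String, Object>> rows = new ArrayList<>();
            while (rs.next()) {
                Map<String, Object> row = new HashMap<>();
                for (int i = 1; i <= columnCount; i++) {
                    row.put(meta.getColumnLabel(i), rs.getObject(i));
                }
                rows.add(row);
            }
            return rows;
        }
    }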

  • Best approach for IDs mapping..

    Hello,
    I'd like to ask you about your experiences with a classical integration problem: the mapping of IDs (materials, partners, ...).
    What is the best approach for integration between SAP and other systems? Can you give me some hints?
    Thanks, Peter

    Hi Peter,
    you have 4 ways to do it:
    1. You can do it inside an integration process:
    an RFC call checking a table with ID -> ID mappings
    (not so good, as you have to use an integration process,
    but very easy to build as this is standard).
    2. A table in R/3, changing the values in a user exit
    (you maintain the data in a table in R/3).
    This is the fastest way (no calls to other programs),
    but you have to create user exits, and
    this is not why you (your client) bought XI.
    3. You can use this new RFC API:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/uuid/801376c6-0501-0010-af8c-cb69aa29941c
    which seems to be the best approach,
    as you don't need BPM for this and it is standard.
    4. Value mapping tables in XI...
    Regards,
    michal
    Message was edited by: Michal Krawczyk

  • Design Patterns, best approach for this app

    Hi all,
    I am starting with design patterns, and I would like to hear your opinion on what would be the best approach for this app.
    This is basically an app for data monitoring, analysis and logging (voltage, temperature and vibration).
    I am using three devices for N channels (NI 9211A, NI 9215A, NI PXI 4472), all running at different rates, asynchronously.
    Signals are processed and monitored for logging at a rate specified by the user, and in real time as well.
    Individual devices can be initialized or stopped at any time.
    Basically I'm using five loops:
    1. GUI: Stop App, Reload Plot Names (event handling)
    2. Chart & Log: monitors data and starts/stops logging at a time specified in the GUI (state machine)
    3. Temperature DAQ monitoring @ 3 S/s (state machine) - NI 9211A
    4. Voltage DAQ monitoring and scaling @ 1K kS/s (state machine) - NI 9215A
    5. Vibration DAQ monitoring and analysis @ 25.6 kS/s (state machine) - NI PXI 4472
    I have attached the files for review; thanks in advance for taking the time.
    Attachments:
    V-T-G Monitor_Logger.llb ‏355 KB

    mundo wrote:
    Thanks Will for your response.
    So basically I could apply a producer/consumer architecture for just the vibration analysis loop, or for all the data being collected by the Monitor/Logger loop? Is it OK to have individual loops for every DAQ device, as shown?
    Thanks.
    You could use the producer/consumer architecture to split the areas where you are doing both the data collection and the analysis in the same state machine. If one of these processes is not time-critical, or the data rate is slow enough, you could leave it in a single state machine. I admit that I didn't look through your code, but based purely on the descriptions above I would imagine that you could change the three collection state machines to use a producer/consumer architecture. I would leave your UI processing in its own loop, as well as the logging process. If the logging is time-critical you may want to split that as well.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
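    The original is LabVIEW, but as a language-neutral sketch of the producer/consumer split Mark describes (acquisition pushing sample blocks into a queue, a separate loop draining it for analysis and logging), here is a minimal Java version; the sample type and block size are made up:
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProducerConsumerSketch {
        public static void main(String[] args) {
            // A bounded queue decouples the acquisition rate from the analysis rate.
            BlockingQueue<double[]> samples = new ArrayBlockingQueue<>(1024);

            Thread producer = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        samples.put(acquireBlock()); // blocks if the consumer falls behind
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        analyze(samples.take());     // blocks until data is available
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
        }

        private static double[] acquireBlock() { return new double[256]; } // stand-in for a DAQ read
        private static void analyze(double[] block) { /* analysis/logging goes here */ }
    }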

  • Best practice for data migration install v1.40 - Error 2732 Directory manag

    Hi
    I'm attempting to install SAP Best Practice for Data migration 1.40 on Win Server 2008 R2 (64 bit).
    Prerequisite error
    The installation program stops with a missing-file error:
    The following file was not found
    ... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
    The file is necessary for successful installation. Please connect to internet or refer to Quick Guide (available on SAP note 1527151) for information regarding the above file.
    The Windows Installer log displays:
    Error 2732 Directory Manager not initialized
    SAP note 1527151 does not exist or is internal.
    Any help is appreciated on the root cause of the error, as the file does not exist in that folder in the installation zip file.
    The other prerequisite, .NET 3.5.1, is already met.
    The patch has been released since 20.11.2011, so I presume it is a good installation set.
    Thanks,
    Alan

    Hi Alan,
    There are details on Data Migration V1.4 installations on the SAP website and Marketplace. The link below should guide you to the right place; it has a PowerPoint presentation and other useful links as well.
    http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
    Arun

  • Using CVS in SQL Developer for Data Modeler changes.

    Hi,
    I am fairly new to SQL Developer Data Modeler and associated version control mechanisms.
    I am prototyping the storage of database designs and version control for the same, using the Data Modeler within SQL Developer. I have SQL Developer version 3.1.07.42 and I have also installed the CVS extension.
    I can connect to our CVS server through the sspi protocol and an external CVS executable, and I am able to check out modules.
    Below is the scenario where I am facing an issue:
    I open the design from the checked-out module, make changes and save it. In the File navigator, I look for the files that have been modified or newly added.
    This behaves rather inconsistently, in the sense that it sometimes does not refresh even after I click the refresh button. Next I try to look for the changes in the Pending Changes (CVS) window. According to other posts, I am supposed to look at the View - Data Modeler - Pending Changes window for Data Modeler changes, but that always shows up empty (I am not sure if it is tied only to Subversion). I do, however, see the modified files / files to be added to CVS under the Versioning - CVS - Pending Changes window. The issue is that when I click the refresh button in that window, all the files just vanish and all the counts show 0. Strangely, if I go to Tools - Preferences - Versioning - CVS and just click OK, the Pending Changes window gets populated again (the counts are inconsistent at times).
    I believe this issue was fixed and should work correctly in 3.1.07.42, but that does not seem to be the case.
    Also, I am not sure whether I can use the CVS functionality available in SQL Developer for the Data Modeler, or whether I should be using an external client such as WinCVS for check-in/check-out.
    Please help.
    Thanks

    Hi Joop,
    I think you will find that in Data Modeler's Physical Model tree the same icons are used for temporary Tables and Materialized Views as in SQL Developer.
    David
