What are the BIW setup, benefits and challenges? It's very urgent

Dear Gurus,
I need to prepare a report for my client describing the BIW setup, benefits, and challenges to date. Can anyone help me put this together?
thanks and regards
C.S.Ramesh

Hi,
Please check these for details:
http://www.asug.com/client_files/Calendar/Upload/hpsapbi_burke.ppt
http://www.citrix.co.uk/site/resources/dynamic/partnerDocs/SAP_Market_Pitch_Final.ppt
http://www.jhu.edu/hopkinsone/Secure_Private/ProjectAreas/Sponsored/documents/ToBeProcessSPAwardSet-UpOnly.pdf#search=%22sap%20bw%20presentation%20slides%22
Hope this helps,
Regards
CSM Reddy

Similar Messages

  • What are the differences between Cairngorm 2 and Parsley (Cairngorm 3)? Very urgent... please help me out.

    Hi all,
    I am familiar with Cairngorm 2 and I am new to Parsley. Can anyone give the differences between Cairngorm 2 and Parsley (Cairngorm 3)?
    Also, please:
    1) How do I create a BeanConfig.mxml configuration file in Parsley? In how many ways can beans be injected in BeanConfig.mxml?
    2) How is an event dispatched and handled in Parsley, step by step?
    3) Please explain with a small example that inserts a username and password into a database using LCDS.
    thanks
    -Balu

    Hi
    You can refer to the following links for your question.
    Difference between AET and EEWB
    What is the use of AET? What are the differences between AET and EEWB?
    Difference between EEWB - UI Configuration Tool - AET
    http://senthilsapcrm.wordpress.com/2010/02/04/adding-custom-fields-in-sap-crm-7-0-using-aet/
    What is the main difference between eewb and aet tool ?
    Hope it is useful.
    Thanks and regards
    Preeti Viswanath

  • What are the different "Setup tables" in OF?

    Hi all,
    what are the different "Setup tables" in OF?
    SOMASI

    The setup depends on various tables; for example, the key flexfield structure data is stored in fnd_flex_values and the GL segments data is stored in gl_code_combinations. You have to go through each setup area in turn.
    Srini C
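    The reply above names two of the underlying tables. As a purely illustrative sketch (not from the original post), here is how you might peek at them from Python through the python-oracledb driver; the connection details and the selected columns are assumptions, and the real setup spans many more tables.

        # Hedged sketch: inspect the two setup tables named above.
        # Credentials/DSN and the selected columns are illustrative assumptions.
        import oracledb  # python-oracledb (any Oracle DB-API 2.0 driver would do)

        def peek_setup_tables(user, password, dsn):
            with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
                cur = conn.cursor()
                # Key flexfield values are stored in fnd_flex_values
                cur.execute("SELECT flex_value, enabled_flag FROM fnd_flex_values WHERE ROWNUM <= 10")
                for row in cur:
                    print("flex value:", row)
                # GL segment combinations are stored in gl_code_combinations
                cur.execute("SELECT code_combination_id, enabled_flag FROM gl_code_combinations WHERE ROWNUM <= 10")
                for row in cur:
                    print("code combination:", row)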

  • What are the versions of BW and what is the difference between them

    what are the versions of BW and what is the difference between them

    Hi Reddy,
    SAP BIW 2.0a, 2.0b, 3.0a, 3.0b, 3.1c, 3.5, and now BI 7 are some of the versions.
    Major differences between BW 3.5 and BI 7.0:
    1. InfoSets can now include InfoCubes as well.
    2. The remodeling transaction helps you add new key figures and characteristics and handles historical data without much hassle. This is only for InfoCubes.
    3. The BI Accelerator (for now, only for InfoCubes) helps reduce query runtime by roughly a factor of 10 to 100. The BI Accelerator is a separate box and costs extra.
    4. Monitoring has been improved with a new portal-based cockpit, which means you will need an Enterprise Portal resource on your project to implement the portal.
    5. Search functionality has improved: you can search for any object, unlike in 3.5.
    6. Transformations are in and routines are passé, although you can always revert to the old transaction codes.
    7. The Data Warehousing Workbench replaces the Administrator Workbench.
    8. Functional enhancements have been made to the DataStore object: a new type of DataStore object and enhanced settings for performance optimization of DataStore objects.
    9. The transformation replaces the transfer and update rules.
    10. New authorization objects have been added.
    11. Remodeling of InfoProviders supports you in information lifecycle management.
    12. There is a new object concept for the DataSource; options for direct access to data have been enhanced, and DataSources in SAP source systems can be activated remotely from BI.
    13. There are functional changes to the Persistent Staging Area (PSA).
    14. BI supports real-time data acquisition.
    15. SAP BW is now formally known as BI (part of NetWeaver 2004s) and implements Enterprise Data Warehousing (EDW). The new features/major differences include:
    a) The ODS has been renamed the DataStore.
    b) Inclusion of the write-optimized DataStore, which has no change log and whose requests need no activation.
    c) Unification of transfer and update rules.
    d) Introduction of the end routine and expert routine.
    e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
    f) Introduction of the BI Accelerator, which significantly improves performance.
    g) Loading through the PSA has become a must: InfoPackages load data only up to the PSA, and you need to create a DTP to update data from the PSA to the data target.
    Regards,
    Ram.

  • What are the advantages of Compressor and is it even necessary

    What are the advantages of Compressor, and is it even necessary?

    Necessary for some and not for others (probably a large majority) who can get by with the presets available in FCP X.
    The users who need Compressor are those who want to control the parameters of the encodes to get the best possible trade-off between file size and quality, or those who want to do things like standards conversions, complex frame-rate changes, better re-scaling, de-interlacing and re-interlacing, output formats beyond those available in FCP X, chapter markers for DVD and Blu-ray authoring, batch conversions for multiple purposes through droplets, and access to clusters for faster rendering.
    Russ

  • What are the units of "Width" and "Height" of a Shape?

    What are the units of the "Width" and "Height" properties of a Shape when programming?
    Something odd like points or twips or tweedles or nibbles?
    http://www.ransen.com Cad and Graphics software

    Width and Height are properties of type Single; they represent the dimensions of the shape in points, where 72 points = 1 inch.
    Regards, Hans Vogelaar (http://www.eileenslounge.com)
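    Since the answer gives the conversion factor (72 points to the inch), here is a tiny illustrative helper, not part of the original answer, that turns the point-based Width/Height values into inches and centimetres:

        # Hedged sketch: convert the point-based shape dimensions described above.
        POINTS_PER_INCH = 72.0
        CM_PER_INCH = 2.54

        def points_to_inches(points: float) -> float:
            return points / POINTS_PER_INCH

        def points_to_cm(points: float) -> float:
            return points_to_inches(points) * CM_PER_INCH

        # Example: a shape reported as 144 pt wide is 2.0 in (5.08 cm) wide.
        print(points_to_inches(144), points_to_cm(144))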

  • What are the settings for datasource and infopackage for flat file loading

    Hi,
    I'm trying to load data from a flat file into a DSO. Can anyone tell me what the settings for the DataSource and InfoPackage are for flat file loading?
    Please let me know.
    regards
    kumar

    Loading of transaction data in BI 7.0: a step-by-step guide on how to load data from a flat file into the BI 7 system.
    Uploading of transaction data
    Log on to your SAP system.
    Transaction code RSA1 takes you to Modeling.
    1. Creation of InfoObjects
    • In the left panel, select InfoObjects.
    • Create an InfoArea.
    • Create InfoObject catalogs (characteristics and key figures) by right-clicking the created InfoArea.
    • Create new characteristics and key figures under the respective catalogs according to the project requirements.
    • Create the required InfoObjects and activate them.
    2. Creation of the DataSource
    • In the left panel, select DataSources.
    • Create an application component (AC).
    • Right-click the AC and create a DataSource.
    • Specify the DataSource name, source system, and data type (transaction data).
    • In the General tab, give short, medium, and long descriptions.
    • In the Extraction tab, specify the file path, the number of header rows to be ignored, the data format (CSV), and the data separator (,). A sample flat file sketch appears at the end of this post.
    • In the Proposal tab, load example data and verify it.
    • In the Fields tab you can give the technical names of the InfoObjects in the template, so you do not have to map them during the transformation; the server will map them automatically. If you do not map them here, you have to map them manually during the transformation in the InfoProvider.
    • Activate the DataSource and read the preview data under the Preview tab.
    • Create an InfoPackage by right-clicking the DataSource, and in the Schedule tab click Start to load the data to the PSA (make sure the flat file is closed during loading).
    3. Creation of data targets
    • In the left panel, select InfoProviders.
    • Select the created InfoArea and right-click to create an ODS (DataStore object) or a cube.
    • Specify a name for the ODS or cube and click Create.
    • From the template window, select the required characteristics and key figures and drag and drop them into the data fields and key fields.
    • Click Activate.
    • Right-click the ODS or cube and select Create Transformation.
    • In the source of the transformation, select the object type (DataSource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate the created transformation.
    • Create a data transfer process (DTP) by right-clicking the data target.
    • In the Extraction tab, specify the extraction mode (full).
    • In the Update tab, specify the error handling (request green).
    • Activate the DTP, and in the Execute tab click the Execute button to load the data into the data targets.
    4. Monitor
    Right-click the data target, select Manage, and in the Contents tab choose Contents to view the loaded data. The ODS has two tables, a new table and an active table; to move data from the new table to the active table you have to activate the loaded request. Alternatively, the monitor icon can be used.
    Loading of master data in BI 7.0:
    For uploading master data in BI 7.0:
    Log on to your SAP system.
    Transaction code RSA1 takes you to Modeling.
    1. Creation of InfoObjects
    • In the left panel, select InfoObjects.
    • Create an InfoArea.
    • Create InfoObject catalogs (characteristics and key figures) by right-clicking the created InfoArea.
    • Create new characteristics and key figures under the respective catalogs according to the project requirements.
    • Create the required InfoObjects and activate them.
    2. Creation of the DataSource
    • In the left panel, select DataSources.
    • Create an application component (AC).
    • Right-click the AC and create a DataSource.
    • Specify the DataSource name, source system, and data type (master data attributes, text, or hierarchies).
    • In the General tab, give short, medium, and long descriptions.
    • In the Extraction tab, specify the file path, the number of header rows to be ignored, the data format (CSV), and the data separator (,).
    • In the Proposal tab, load example data and verify it.
    • In the Fields tab you can give the technical names of the InfoObjects in the template, so you do not have to map them during the transformation; the server will map them automatically. If you do not map them here, you have to map them manually during the transformation in the InfoProvider.
    • Activate the DataSource and read the preview data under the Preview tab.
    • Create an InfoPackage by right-clicking the DataSource, and in the Schedule tab click Start to load the data to the PSA (make sure the flat file is closed during loading).
    3. Creation of data targets
    • In the left panel, select InfoProviders.
    • Select the created InfoArea and right-click to choose Insert Characteristic as InfoProvider.
    • Select the required InfoObject (for example, Employee ID).
    • Under that InfoObject, select the attributes.
    • Right-click the attributes and select Create Transformation.
    • In the source of the transformation, select the object type (DataSource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate the created transformation.
    • Create a data transfer process (DTP) by right-clicking the master data attributes.
    • In the Extraction tab, specify the extraction mode (full).
    • In the Update tab, specify the error handling (request green).
    • Activate the DTP, and in the Execute tab click the Execute button to load the data into the data targets.
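    Since the steps above configure the DataSource for a comma-separated file with a header row to be ignored, here is a small illustrative sketch (field names and values are hypothetical, not from the post) that generates such a flat file for testing:

        # Hedged sketch: write a tiny CSV flat file of the kind the DataSource
        # above would read. Field names and values are illustrative assumptions.
        import csv

        rows = [
            ("C001", "M100", "20240101", 5, 250.00),
            ("C002", "M200", "20240102", 2, 99.50),
        ]

        with open("transaction_data.csv", "w", newline="") as f:
            writer = csv.writer(f)  # comma separator, matching the DataSource setting
            writer.writerow(["CUSTOMER", "MATERIAL", "CALDAY", "QUANTITY", "REVENUE"])  # header row to be ignored
            writer.writerows(rows)
        # Close the file (as the post notes) before scheduling the InfoPackage.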

  • What are the prerequisites to install and configure Oracle Coherence 3.4

    What are the prerequisites to install and configure Oracle Coherence (3.4 grid version) on a RHEL 5.0 system?
    I want to build an Oracle Coherence grid data system for testing purposes using 3 machines.
    What kind of network configuration is OK?
    What software should be installed in advance?
    Thank you

    Hi,
    I would read through the Testing and Tuning section of the following page.
    http://coherence.oracle.com/display/COH34UG/Usage+(Full)
    Even though you are not about to go into production, I think the Production Checklist has some very useful information.
    http://coherence.oracle.com/display/COH34UG/Production+Checklist
    -Dave

  • What are the differences between Logos and LogosXT?

    What are the differences between Logos and LogosXT?

    Logos XT is a networking middle layer maintained by the LabVIEW Network Technologies and Security group. Logos XT provides a thin layer on top of TCP/IP to simplify some common network tasks.
    The underlying foundation for NI networking is called Logos.
    I believe the basic idea is that Logos is what runs behind the scenes at the base level, while Logos XT lets you build your own networking protocols on top of Logos. Logos XT would be used if you want to make your own networking protocol instead of using TCP/IP or UDP.
    Scott A
    SSP Product Manager
    National Instruments

  • What are the differences between jdk and sdk

    What are the differences between JDK and SDK? Thanks.

    Just marketing whims.
    SDK = software development kit
    JDK = Java Development Kit
    Sun has used both names to refer to its SDK for Java. I forget which one they're using now, but they've switched a couple of times.

  • What are the differences between Essbase and Planning?

    What are the differences between Essbase and Planning?

    Planning is an enterprise application built around the Essbase OLAP engine.
    You can create planning applications with Essbase alone, but Planning uses best practices and has built-in enterprise features.
    Brian Chow

  • What are the differences between triggers and constraints?

    What are the differences between triggers and constraints?

    Try the documentation; this would be a good starting point: How Oracle Enforces Data Integrity.
    C.
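    To make the distinction concrete, here is a purely illustrative sketch (table, column, and object names are assumptions, not from the linked documentation): the same business rule enforced declaratively by a CHECK constraint and procedurally by a trigger, issued through any Python DB-API connection.

        # Hedged sketch: one rule ("salary must be non-negative"), two mechanisms.
        # Names are hypothetical; a constraint is declarative and checked by the
        # database on every DML, while a trigger runs procedural code per row.
        CONSTRAINT_DDL = """
            ALTER TABLE employees
              ADD CONSTRAINT emp_salary_min CHECK (salary >= 0)
        """

        TRIGGER_DDL = """
            CREATE OR REPLACE TRIGGER emp_salary_min_trg
            BEFORE INSERT OR UPDATE OF salary ON employees
            FOR EACH ROW
            BEGIN
              IF :NEW.salary < 0 THEN
                RAISE_APPLICATION_ERROR(-20001, 'salary must be non-negative');
              END IF;
            END;
        """

        def apply_rules(conn):
            cur = conn.cursor()
            cur.execute(CONSTRAINT_DDL)  # declarative rule, no code to maintain
            cur.execute(TRIGGER_DDL)     # procedural rule, can hold arbitrary logic
            conn.commit()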

  • What are the differences between the iPhone 5c and iPhone 5s?

    What are the differences between the iPhone 5c and iPhone 5s?

    Read here:
    http://www.apple.com/iphone/compare/
    The iPhone 5c is basically the original iPhone 5 in different colors, while the 5s gets the improved hardware.
    Read the link above for a feature comparison.

  • HT5295 What are the basics of creating and distributing Podcasts?

    What are the basics of creating and distributing Podcasts?

    Search the web for "creating podcasts" and you'll find a plethora of information.
    Regards.

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
    I have about 20 source tables with large amount of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables receive inserts every few seconds, with hundreds to thousands of rows each time.
    There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the roles of 'Journalizing CDC' and 'IKM - Incremental Update',
    and how I can use them in my scenario.
    In general, what are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or is it better to delete and insert into the target tables?
    What exactly is the role of 'Journalizing CDC'?
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need the tables to have a PK?
    What should I do if I can't put a PK on a table (there can be multiple identical rows)?
    Thanks in advance, Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only the changed data in your source system. If your source tables had a date_modified column you could always use that as a filter when scanning for changes rather than using CDC. Log-based CDC (asynchronous in ODI: LogMiner/Streams or GoldenGate, for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it does not fully remove the need to scan the source tables.
    In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>), and this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (inserts, updates and deletes), the source database performs a join back to the source table. You can negate this by specifying ALL source table columns as your PK in ODI, which forces all columns into the unconditional log group, the journal table, and so on. You will then need to tweak the JKM to change the syntax sent to the database when starting the journal. I have done this in the past, using a flexfield in the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM setup (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, in the right order, to apply to your target tables; otherwise you might process the update before the insert, for example, and end up out of sync.
    So JKMs provide a mechanism for only changed data to be provided to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you do not capture the delete with a normal LKM/IKM setup).
    IKM Incremental Update can be used with or without JKMs; it is for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get changed rows in the load into the target.
    user604062 wrote:
    "I want to understand the roles of 'Journalizing CDC' and 'IKM - Incremental Update', and how I can use them in my scenario."
    Hopefully I have explained it above. It is the kind of thing you really need to play around with, and thoroughly review the Operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html).
    "In general, what are the relations between 'Journalizing' and 'IKM'?"
    A JKM simply presents (only) changed data to ODI. It removes the need for you to decide how to get the updates, and removes the need for costly scans on the source table (full source-to-target comparisons, scanning for updates based on a last-update date, etc.).
    "Should I use both of them? Or is it better to delete and insert into the target tables?"
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table minus source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
    "What is the role of 'Journalizing CDC'?"
    It is the ODI mechanism for configuring, starting, and stopping the change data capture process in the source systems. There are different KMs for separate technologies and a few to choose from for Oracle (triggers (synchronous), Streams/LogMiner (asynchronous), GoldenGate, etc.).
    "Can 'IKM - Incremental Update' work without 'Journalizing'?"
    Yes, of course. Without CDC your process would look something like:
    Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like:
    Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see it is the same process after the source table (there is an option in the interface to enable the J$ source, and the IKM step changes with CDC as you can use 'Synchronise Journal Deletes').
    "Does 'Journalizing' need the tables to have a PK?"
    Yes, at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.).
    "What should I do if I can't put a PK on a table (there can be multiple identical rows)?"
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables and one row gets updated, how can you identify which row in the target table to update?
    "Thanks in advance, Yael"
    A lot to take in. As I advised, I would recommend you get a little test area set up and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair
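    The answer above hinges on having a stable primary key so that journalized changes can be classified and applied. As a purely illustrative sketch (plain Python, not ODI; names and rows are made up), this is the classification an incremental update effectively performs:

        # Hedged sketch: classify source rows as inserts, updates, or deletes
        # relative to the target, keyed by a primary key. Without a stable key
        # there is no way to say which target row an update or delete applies to.
        from typing import Dict, Tuple

        Row = Dict[str, object]

        def diff_by_pk(source: Dict[Tuple, Row], target: Dict[Tuple, Row]):
            inserts = {k: v for k, v in source.items() if k not in target}
            updates = {k: v for k, v in source.items() if k in target and v != target[k]}
            deletes = [k for k in target if k not in source]  # spotting deletes needs a full compare, as noted above
            return inserts, updates, deletes

        # Illustrative usage with a single-column PK:
        src = {("C001",): {"city": "Oslo"}, ("C002",): {"city": "Rome"}}
        tgt = {("C001",): {"city": "Bergen"}, ("C003",): {"city": "Pisa"}}
        print(diff_by_pk(src, tgt))  # C001 is an update, C002 an insert, C003 a delete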

Maybe you are looking for

  • Can not find location of method in Flash example

    The issue that I am having is that I have search high and low to find where the method "flashmo_graphic()" is defined in the example that can be found here: http://www.flashmo.com/preview/flashmo_158_heart_effect I also attached folder.  There is a "

  • Unable to change a event to a different Google calendar on iCal 5.0.3

    I have multiple shared Google Calenders on iCal 5.0.3 When I create a event and want to chage it to a different Google Calender, it changes for a few seconds and then divert back to its original state. If I follow the exact same procedure on my iPad,

  • Web positioning - no text in iweb

    I used iweb template, did not use their menus and organised my website that way. It is for my sister's restaurant and postioning is a real issue. does anyone know how I can add text that would be searched by spiders, how I can create metatags and add

  • How do I unforget a device that I want to pair with?

    I've accidentally deleted a device that I want to pair with, can I reverse this? And if so, how?

  • ECC Client switch to SCM 7.0

    Hello, We are planning to switch our ECC Development client from 120 to 130 which is already connected to SCM system and has data related to ECC 120. When switch the ECC client from 120 to 130 and run the Integration Models again, what are possibilit