Compress Indexes

I have used the DBA Cockpit to enable compression on the biggest table of the SAP Bank Analyzer (reducing the table from 360 GB to 100 GB). This OFFLINE process took 23 hours.
Please advise us how to also compress the INDEXES of the tables.

Hello Mr. Arun,
Thanks for contacting us.
Kindly use the command below to enable compression on a particular table. Before enabling compression, make sure that you have executed the query given in note 886231 to check whether compression is required for this table or not.
ALTER TABLE <tabname> ACTIVATE VALUE COMPRESSION
Kindly refer to the notes below:
#1942183 - DB6: When to consider a table or index reorganization
#975352 - DB6: Configuring DB2 Auto Reorg for Space Reclamation and Index Cleanup
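For the indexes themselves, index compression is available as of DB2 9.7 and is set per index; it only takes effect after an index reorganization. A minimal sketch, assuming a DB2 9.7 or later system and hypothetical schema and object names:

  ALTER INDEX SAPSR3."/BA1/BIGTAB~0" COMPRESS YES;
  -- Rebuild the indexes so the new setting takes effect (CLP command):
  REORG INDEXES ALL FOR TABLE SAPSR3."/BA1/BIGTAB";
  -- Verify the flag in the catalog:
  SELECT INDNAME, COMPRESSION
    FROM SYSCAT.INDEXES
   WHERE TABSCHEMA = 'SAPSR3' AND TABNAME = '/BA1/BIGTAB';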
Hope the above information helps!
Best Regards,
Gunjan

Similar Messages

  • Loss of space after reorg of compressed index

    Hi Oracle experts,
    In 2008 we compressed the indexes of our ECC system as per SAP note 1109743 and
    obtained a high compression ratio.
    Now we have upgraded to Oracle 11.2.0.2 and BRTOOLS 7.20 (10).
    While using BRTOOLS to do table reorg/compression as per note 1431296,
    the indexes lose their compression factor.
                                      before         after    compression
    GLFUNCA      PSAPGLFUNCA      41,574,400    11,468,800    enabled
    GLFUNCA~0    PSAPSTABI           699,712     5,555,648    enabled
    GLFUNCA~1    PSAPSTABI           263,872     2,112,384    enabled
    GLFUNCA~2    PSAPSTABI           446,528     3,531,072    enabled
    GLFUNCA~3    PSAPSTABI         1,051,904     5,047,872    enabled
    I have tried uncompressing and compressing the indexes again but cannot compress to the
    same factor; the results are the same using BRTOOLS or the Oracle command.
    I have tested on a copy of the system where table GLFUNCA is uncompressed and the indexes compressed;
    a reorg of the indexes results in the same bad compression factor.
    The key used for compression is the same.
    Any advice on how to regain the index compression factor, or has anyone had a similar experience?

    Hello Daljit,
    This is the first time I have heard about this kind of behavior. I have already seen an index end up the same size after compression, but that happened because the number of columns was wrongly defined.
    My suggestion (sketched below):
    1. Uncompress one index (rebuild without compression);
    2. Identify the number of columns that should be compressed;
    3. Rebuild with compression and NOLOGGING;
    4. Reactivate logging.
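    A minimal sketch of those steps, using the GLFUNCA~0 index above and assuming a two-column compression prefix (the right prefix length has to be determined per index, e.g. with ANALYZE INDEX ... VALIDATE STRUCTURE):

      -- 1. Rebuild without compression to clear the old setting.
      ALTER INDEX "GLFUNCA~0" REBUILD NOCOMPRESS;
      -- 3. Rebuild with a two-column compression prefix, without redo logging.
      ALTER INDEX "GLFUNCA~0" REBUILD COMPRESS 2 NOLOGGING;
      -- 4. Re-enable redo logging afterwards.
      ALTER INDEX "GLFUNCA~0" LOGGING;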
    Be aware of all the related notes below:
    1289494 - FAQ Oracle compression
    1109743 - Use of Index Key Compression for Oracle Databases
    Regards,
    Jairo Pedroza

  • Index Compression in SAP - system/basis tables?

    Hi!
    In the thread "Oracle compression in SAP environments" the Oracle 10g feature index compression was discussed. We are now going to implement it as well. SAP and Oracle say this can be done for any index.
    So we selected the biggest and most frequently used indexes and analyzed them. We could save about 100 GB of disk space.
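    Such an analysis is typically done along the lines of SAP note 1109743: validate each candidate index and read the optimizer's compression estimate from INDEX_STATS. A sketch, using BALHDR~1 from the list below:

      -- Populates INDEX_STATS for exactly one index at a time.
      ANALYZE INDEX "BALHDR~1" VALIDATE STRUCTURE;
      -- OPT_CMPR_COUNT is the suggested compression prefix length,
      -- OPT_CMPR_PCTSAVE the estimated space saving in percent.
      SELECT name, opt_cmpr_count, opt_cmpr_pctsave FROM index_stats;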
    But here comes my question:
    In the hitlist of our most frequently used and biggest Indexes there are also some basis table indexes.
    A few samples:
    BALHDR~0
    BALHDR~1
    BALHDR~2
    BALHDR~3
    BDCP~0
    BDCP~1
    BDCP~POS
    BDCPS~0
    BDCPS~1
    CDCLS~0
    CDHDR~0
    D010INC~0
    D010INC~1
    D010TAB~0
    D010TAB~1
    DD01L~0
    DD03L~5
    DD07L~0
    E071K~0
    E071K~ULI
    GVD_LATCHCHILDS~0
    GVD_OBJECT_DEPEN~0
    GVD_SEGSTAT~0
    QRFCTRACE~0
    QRFCTRACE~001
    QRFCTRACE~002
    REPOSRC~0
    SCPRSVALS~0
    SEOCOMPODF~0
    SMSELKRIT~0
    SRRELROLES~0
    SRRELROLES~002
    STXH~0
    STXH~REF
    STXL~0
    SWW_CONT~0
    TBTCS~1
    TODIR~0
    TRFCQOUT~5
    USR02~0
    UST04~0
    VBDATA~0
    VBMOD~0
    WBCROSSGT~0
    Is it really recommended to also compress the indexes of SAP Basis tables - especially in the area of Repository/Dictionary, t/qRFC and/or update processing ("Verbuchung", the VB... tables)?
    Thanks for any hint and/or comment!
    Regards,
    Volker

    Hi Volker,
    I have successfully tested Oracle index compression in an ECC5 sandbox environment for the following tables:
    ppoix
    pcl2
    pcl4
    In total I saved around 60GB in the tablespaces.
    Before compression I started a payroll run to see how long it would take without compression.
    After compressing the indexes I re-executed the payroll, which took exactly the same time as without compression (2 hours). So there was no impact on performance.
    Also did an update of statistics in DB13 -> no impact.
    With BRTOOLS: forced update for a specific table -> no impact.
    So we are seriously thinking about taking this into production.
    I have also looked at BI environment but concluded that there was nothing to gain.
    Unfortunately our InfoCubes are well built, meaning that the fact tables contain the actual data and the corresponding dimension tables hold only the surrogate IDs (SIDs).
    Those dimension tables are actually very small (64k) and not suitable for index compression.
    Next step will be some Workflow tables.
    For example:
    SWW_CONT~0                   INDEX        PSAPFIN           26.583.040
    SWPNODELOG~0                 INDEX        PSAPFIN           15.589.376
    SWWLOGHIST~0                 INDEX        PSAPFIN           13.353.984
    SWWLOGHIST~1                 INDEX        PSAPFIN            8.642.560
    SWW_CONTOB~0                 INDEX        PSAPFIN            8.488.960
    SWPSTEPLOG~0                 INDEX        PSAPFIN            6.808.576
    SWW_CONTOB~A                 INDEX        PSAPFIN            6.707.200
    SWWLOGHIST~2                 INDEX        PSAPFIN            6.507.520
    SWW_WI2OBJ~Z01               INDEX        PSAPFIN            2.777.088
    SWW_WI2OBJ~0                 INDEX        PSAPFIN            2.399.232
    SWWWIHEAD~E                  INDEX        PSAPFIN            2.352.128
    SWP_NODEWI~0                 INDEX        PSAPFIN            2.304.000
    SWW_WI2OBJ~001               INDEX        PSAPFIN            2.289.664
    SWWWIHEAD~A                  INDEX        PSAPFIN            2.144.256
    SWPNODE~0                    INDEX        PSAPFIN            2.007.040
    SWWWIRET~0                   INDEX        PSAPFIN            2.004.992
    SWW_WI2OBJ~002               INDEX        PSAPFIN            1.907.712
    If you would like to know, I can post the results for the workflow table indexes on an ECC6 environment.
    Please reward some points if you like.
    Regards,
    Stephan van Loon

  • Fast index creation suggestions wanted

    Hi:
    I've loaded a table with a little over 100,000,000 records. The table has several indexes which I must now create. Need to do this as fast as possible.
    I've read the excellent article by Don Burleson (http://www.dba-oracle.com/oracle_tips_index_speed.htm) but still have a few questions.
    1) If the table is not partitioned, does it still make sense to use "parallel"?
    2) The digit(s) following "compress" indicate the number of consecutive columns at the head of the index that have duplicates. True?
    3) How will the compressed index affect query performance (vs. not compressed) down the line?
    4) In the future I will be doing lots and lots of updates on the indexed columns, as well as lots of record deletes and inserts into/out of the table. Will these updates/inserts/deletes run faster or slower given that the indexes are compressed vs. if they were not?
    5) In order to speed up the sorting, is it better to add datafiles to the TEMP tablespace or to create more TEMP tablespaces (remember, running "parallel")?
    Thanks in Advance

    There are people who would argue that excellent and Mr. Burleson do not belong in the same sentence.
    1) Yes, you can still use parallel (and nologging) to create the index, but don't expect 20 - 30 times faster index creation.
    2) It is the number of columns to compress by; they do not necessarily have to have duplicates. For a unique index the default is the number of columns minus 1; for a non-unique index the default is the number of columns.
    3) If you do a lot of range scans or fast full index scans on that index, then you may see some performance benefit from reading fewer blocks. If the index is mostly used in equality predicates, then the performance benefit will likely be minimal.
    4) It really depends on too many factors to predict. The performance of inserts, updates and deletes will be either
    A) Slower
    B) The same
    C) Faster
    5) If you are on 10g, then I would look at temporary tablespace groups, which can be beneficial for parallel operations. If not, then allocate as much memory as possible to sort_area_size to minimize disk sorts, and add space to your temporary tablespace to avoid "unable to extend" errors. Adding additional temporary tablespaces will not help, because a user can only use one temporary tablespace at a time, and a parallel index creation is only one user.
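    A sketch pulling points 1 and 2 together (table, index and column names as well as the parallel degree are hypothetical; the attributes are reset after the build so they do not surprise the optimizer or recovery later):

      -- Build the index in parallel, without redo logging,
      -- compressing on the first two key columns.
      CREATE INDEX big_tab_ix1 ON big_tab (col1, col2, col3)
        COMPRESS 2 NOLOGGING PARALLEL 8;
      -- Reset the attributes once the build has finished.
      ALTER INDEX big_tab_ix1 NOPARALLEL LOGGING;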
    You might want to do some searching at Tom Kyte's site http://asktom.oracle.com for some more responsible answers. Tom and Don have had their disagreements in the past, and in most of them my money would be on Tom to be correct.
    HTH
    John

Advanced compression in Oracle 11g

    Hi,
    We are migrating databases from Oracle 10g to 11g and we are using Advanced Compression. I have a few questions; please help me understand:
    1. If I enable compression on tables, do the indexes also get compressed? If not, how can I enable compression on the indexes?
    2. For table compression I will take the DDL of the tables from the Oracle 10g databases and create the tables in Oracle 11g with COMPRESS FOR ALL OPERATIONS. Is this the right approach?
    Appreciate the inputs.
    thanks

    Hi,
    I checked one of the tables with ALTER TABLE ... MOVE COMPRESS FOR ALL OPERATIONS after upgrading to 11g from 10g, and rebuilt the index:
    SQL> select index_name,COMPRESSION,STATUS from dba_indexes where table_name='POSITION_CUBE';
    INDEX_NAME                     COMPRESS STATUS
    TEST                           DISABLED VALID
    The COMPRESSION column in dba_indexes still shows DISABLED,
    so I need to compress the index as well. How can I achieve this?
    Thanks
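    Index compression in Oracle is a per-index attribute and is not inherited from the table's compression setting; it is enabled by rebuilding the index with a compression prefix. A minimal sketch for the TEST index shown above (the prefix length of 1 is an assumption):

      -- Rebuild the index with a one-column compression prefix.
      ALTER INDEX test REBUILD COMPRESS 1;
      -- Verify:
      SELECT index_name, compression FROM dba_indexes
       WHERE table_name = 'POSITION_CUBE';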

Why Sort operation on clustered columnstore index insert?

    Looking at the execution plan for a clustered columnstore index insert I noticed a Sort operation. My T-SQL has no sort and I understand that the clustered columnstore is not a sorted index. Why would there be a Sort operation in the execution plan?
    This is running on:
    Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
     Feb 20 2014 20:04:26
     Copyright (c) Microsoft Corporation
     Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)

    Hello,
    It's because of how a columnstore index works: the index is created and compressed at the column level, not at the row level. SQL Server sorts the data so that equal values end up next to each other, which lets it build better-compressed column segments.
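    A minimal illustration of the scenario (table and staging names are hypothetical):

      -- SQL Server 2014: rows inserted into a clustered columnstore index
      -- are compressed into per-column segments; the engine may inject a
      -- Sort so that similar values land in the same segments.
      CREATE TABLE dbo.sales_cs (
        sale_id   INT   NOT NULL,
        sale_date DATE  NOT NULL,
        amount    MONEY NOT NULL
      );
      CREATE CLUSTERED COLUMNSTORE INDEX cci_sales_cs ON dbo.sales_cs;
      INSERT INTO dbo.sales_cs (sale_id, sale_date, amount)
      SELECT sale_id, sale_date, amount FROM dbo.sales_staging;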
    Olaf Helper

  • BAM HA and Data Consistency

    Hi,
    I need to preserve consistency of data in BAM. BAM itself does not require HA, but I need to guarantee there is no loss of data (even if the BAM server is down). I have a single BAM server and a cluster of WLS where the JMS queues/topics are located (data is read from the JMS queues/topics using EMS).
    Using any kind of topic means loss of data once the BAM server is down.
    BAM does not reconnect after migration, so using a queue on a migratable JMS server also does not seem to work.
    Using a distributed queue requires a queue for each EMS instead of a single queue with a message selector. And anyway there is no reconnection in this case either.
    What would be the preferred approach, then?
    Any suggestions are highly appreciated.
    Regards.

    Hi
    There are a couple of performance-related measures that you can take:
    - Create aggregates on 3-4 objects which are very popular in terms of reporting. Make sure that you don't go overboard with this, as maintaining too many aggregates could create a memory issue.
    - If you are using BI 7.0, then go for 'repartitioning'. It would help you in loading data as well as during reporting.
    - Compressing / indexing are always available and good options.
    - I don't think compressing could create an issue even if your InfoCube gets populated through multiple data sources.
    Cheers
    Umesh

  • Interview help

    Hello everybody,
    I am having an interview for a BI/BW support consultant role. The interview spec consists of data management techniques, improving and maintaining SAP BI monitoring capabilities, solutions to support issues, and an understanding of the BCC SAP solution and how its BW/BI configuration supports the business, plus knowledge of WAD. Please send me the expected questions and answers; I am also searching SDN using the spec.
    Regards
    Priya

    Hi priya
    Here are some Q&A.
    Normally the production support activities include
    Scheduling
    R/3 Job Monitoring
    B/W Job Monitoring
    Taking corrective action for failed data loads.
    Working on some tickets with small changes in reports or in AWB objects.
    The activities in a typical Production Support would be as follows:
    1. Data Loading - could be using process chains or manual loads.
    2. Resolving urgent user issues - helpline activities
    3. Modifying BW reports as per the need of the user.
    4. Creating aggregates in Prod system
    5. Regression testing when version/patch upgrade is done.
    6. Creating adhoc hierarchies.
    We can perform the following daily activities in production:
    1. Monitoring Data load failures thru RSMO
    2. Monitoring Process Chains Daily/weekly/monthly
    3. Perform Change run Hierarchy
    4. Check Aggr's Rollup
    To add to the above:
    1) Check that the data targets are ready for reporting.
    2) No failed or cancelled jobs in SM37 and the BW monitor.
    3) All requests are loaded for the day, month and year.
    4) Also note down the time taken for loading the critical InfoCubes which are used for reporting.
    5) Check whether there is any break in the schedules of your process chains.
    Why are there frequent load failures during extractions, and how do you analyze them?
    If the failures are related to data, there might be data inconsistency in the source system, even though it is handled properly in the transfer rules. We can monitor these issues in transaction RSMO and in the PSA (failed records) and update from there.
    If we are talking about the whole extraction process, there might be issues with work process scheduling and IDoc transfer from the source system to the target system. Such loads can be re-initiated by canceling the specific data load (usually by changing the request color from yellow to red in RSMO) and restarting the extraction.
    What are the daily tasks we do in production support? How many times do we extract data, and at what times?
    It depends. Data load timings are in the range of 30 minutes to 8 hours. The time depends on the number of records and the kind of transfer rules you have provided. If the transfer rules contain roundabout logic and the update rules have calculations for customized key figures, long runtimes are to be expected.
    Usually you need to work in RSMO, see which records are failing, and update from the PSA.
    What are some of the frequent failures and errors?
    As for the frequent failures and errors: there is no fixed reason for a load to fail. From an interview perspective I would answer it this way:
    a) Loads can fail due to invalid characters
    b) because of a deadlock in the system
    c) because of a previous load failure, if the load is dependent on other loads
    d) because of erroneous records
    e) because of RFC connection problems
    These are some of the reasons for load failures.
    For RFC connections:
    We use SM59 for creating RFC destinations.
    Some questions:
    1) RFC connection lost.
    A) Check it in the SM59 transaction:
    RFC Destinations
    + R/3 connections
    CRD client (our R/3 client)
    double-click -> Test Connection in the menu
    2) Invalid characters while loading.
    A) Change them in the PSA and load again.
    3) ALEREMOTE user is locked.
    A) Ask your Basis team to release the user (it is mostly ALEREMOTE). Common causes: the password was changed, or too many incorrect attempts to log in as ALEREMOTE. Use the SM12 transaction to find out whether there are any locks.
    4) Lowercase letters not allowed.
    A) Uncheck the lowercase letters checkbox under the "General" tab in the InfoObject.
    5) While loading the data I am getting a message that 'Record
    A) The field mentioned in the error message is not mapped to any InfoObject in the transfer rules.
    6) Object locked.
    A) It might be locked by some other process or a user. Also check for authorizations.
    7) "Non-updated IDocs found in source system".
    8) While loading master data, one of the data packages has a red-light error message:
    Master data/text of characteristic ZCUSTSAL already deleted.
    9) Extraction job aborted in R/3.
    A) It might have been cancelled due to running for longer than expected, or may have been cancelled by R/3 users if it was hampering performance.
    10) Request could not be activated because there is another request in the PSA with a smaller SID.
    A)
    11) Repeat of last delta not possible.
    12) DataSource not replicated.
    A) Replicate the DataSource from R/3 through the source system in the AWB, assign it to the InfoSource, and activate it again.
    13) DataSource/transfer structure not active.
    A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it.
    14) ODS activation error.
    A) ODS activation errors can occur mainly for the following reasons:
    1. Invalid characters (#-like characters)
    2. Invalid data values for units/currencies etc.
    3. Invalid values for the data types of characteristics and key figures
    4. Errors generating SID values for some data
    15) Conversion routine error.
    A) Check the data format in the source.
    16) Object cannot be activated, or error when activating an object.
    A) Check the consistency of the object.
    17) No data found (in a query).
    A) Check whether the InfoProvider contains data, and delete the unsuccessful request.
    18) Error generating or activating update rules.
    1. What are the extractor types?
    • Application Specific
    o BW Content FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-Generated Extractors
    LIS, FI-SL, CO-PA
    • Cross Application (Generic Extractors)
    o DB View, InfoSet, Function Module
    2. What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5 Select the DataSources
    o LBWE Maintain DataSources and Activate Extract Structures
    o LBWG Delete Setup Tables
    o OLI*BW Fill Setup Tables
    o RSA3 Check extraction and the data in Setup tables
    o LBWQ Check the extraction queue
    o LBWF Log for LO Extract Structures
    o RSA7 BW Delta Queue Monitor
    3. How to create a connection with LIS InfoStructures?
    • LBW0 Connecting LIS InfoStructures to BW
    4. What is the difference between ODS and InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drilldown and RRI.
    • CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
    • MultiProvider: Does not have physical data. It allows access to data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.
    5. What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.
    6. What is the difference between start routine and update routine, when, how and why are they called?
    • Start routine can be used to access InfoPackage while update routines are used while updating the Data Targets.
    7. What is the table that is used in start routines?
    • Always the table structure will be the structure of an ODS or InfoCube. For example if it is an ODS then active table structure will be the table.
    8. Explain how you used Start routines in your project?
    • Start routines are used for mass processing of records. In the start routine all the records of the DataPackage are available for processing, so we can process all these records together. In one scenario, we wanted to apply a size % to the forecast data. For example, if material M1 is forecast at, say, 100 in May, then after applying the size % (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted to have 4 records against the one single record coming in from the InfoPackage. This was achieved in the start routine.
    9. What are Return Tables?
    • When we want to return multiple records, instead of single value, we use the return table in the Update Routine. Example: If we have total telephone expense for a Cost Center, using a return table we can get expense per employee.
    10. How do start routine and return table synchronize with each other?
    • The return table is used to return the values following the execution of the start routine.
    11. What is the difference between V1, V2 and V3 updates?
    • V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
    • V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
    o V1 & V2 don’t need scheduling.
    • Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into the BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 update only processes the update data that is successfully processed with the V2 update.
    12. What is compression?
    • It is a process used to delete the Request IDs and this saves space.
    13. What is Rollup?
    • This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.
    14. What is table partitioning and what are the benefits of partitioning in an InfoCube?
    • It is the method of dividing a table, which enables quick access. SAP uses fact table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster as data is stored in the relevant partitions, and table maintenance becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.
    15. How many extra partitions are created and why?
    • Two extra partitions are created: one for dates before the begin date and one for dates after the end date.
    16. What are the options available in transfer rule?
    • InfoObject
    • Constant
    • Routine
    • Formula
    17. How would you optimize the dimensions?
    • We should define as many dimensions as possible and we have to take care that no single dimension crosses more than 20% of the fact table size.
    18. What are Conversion Routines for units and currencies in the update rule?
    • Using this option we can write ABAP code for Units / Currencies conversion. If we enable this flag then unit of Key Figure appears in the ABAP code as an additional parameter. For example, we can convert units in Pounds to Kilos.
    19. Can an InfoObject be an InfoProvider, how and why?
    • Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER as an InfoProvider and report on it.
    20. What is Open Hub Service?
    • The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution using several systems. The central object for exporting data is the InfoSpoke. We can define the source and the target object for the data. BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
    21. How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination requirement.
    22. What is ODS?
    • The Operational Data Store is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.
    23. What are BW Statistics and what is its use?
    • They are group of Business Content InfoCubes which are used to measure performance for Query and Load Monitoring. It also shows the usage of aggregates, OLAP and Warehouse management.
    24. What are the steps to extract data from R/3?
    • Replicate DataSources
    • Assign InfoSources
    • Maintain Communication Structure and Transfer rules
    • Create an InfoPackage
    • Load Data
    25. What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)
    SAP BW Interview Questions 2
    1) What is a process chain? How many types are there? How many do we use in a real-time scenario? Can we define interdependent processes with tasks like data loading, cube compression, index maintenance, master data & ODS activation with the best possible performance and data integrity?
    2) What is data integrity and how can we achieve it?
    3) What is index maintenance and what is its purpose in a real-time scenario?
    4) When and why do we use InfoCube compression in a real-time scenario?
    5) What is meant by data modeling and what does the consultant do in data modeling?
    6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate it)?
    7) What is fine-tuning, how many types are there, and for what purpose do we tune in a real-time scenario? Can tuning only be done with InfoCube partitions and aggregates, or by other means?
    8) What is meant by MultiProvider and for what purpose do we use a MultiProvider?
    9) What are scheduled and monitored data loads, and for what purpose?
    Ans # 1:
    Process chains exist in the Administrator Workbench. Using these we can automate ETL processes. They allow BW administrators to schedule all activities and monitor them (transaction RSPC).
    PROCESS CHAIN - Before defining PROCESS CHAIN, let us define PROCESS in any given process chain: it is a procedure, either within SAP or external to it, with a start and an end. This process runs in the background.
    A PROCESS CHAIN is a set of such processes that are linked together in a chain. In other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.
    This is normally done in order to automate a job or task that has to execute more than one process in order to complete.
    1. Check the source system for that particular PC.
    2. Select the request ID (it will be in the Header tab) of the PC.
    3. Go to SM37 in the source system.
    4. Double-click on the job.
    5. You will navigate to a screen.
    6. There, click the "Job Details" button.
    7. A small pop-up window comes up.
    8. In the pop-up screen, take a note of:
    a) Executing server
    b) WP number/PID
    9. Open a new SM37 session (/oSM37).
    10. In it, click the "Application Servers" button.
    11. You can see the different application servers.
    12. Go to the executing server and double-click it (point 8 (a)).
    13. Go to the PID (point 8 (b)).
    14. On the far left you can see a checkbox.
    15. Check the checkbox.
    16. On the menu bar you can see "Process".
    17. Under "Process" you have the option "Cancel with Core".
    18. Click on that option. * -- Ramkumar K
    Ans # 2:
    Data integrity is about eliminating duplicate entries in the database and achieving normalization.
    Ans # 4:
    InfoCube compression eliminates duplicate entries by rolling the requests up into the compressed fact table. Compressed InfoCubes require less storage space and are faster for retrieval of information. Here is the catch: once you compress, you can no longer delete the data by request. You are safe as long as you don't have any errors in your modeling.
    This compression can be done through Process Chain and also manually.
    Tips by: Anand
    Ans#3
    Indexing is a process where data is stored with an index over it. E.g. a phone book: when we write down somebody's number, Prasad's number would be under "P" and Rajesh's number under "R". The phone book is an index; similarly, storing data while creating indexes on it is called indexing.
    Ans#5
    Data modeling is a process where you collect the facts, the attributes associated with the facts, navigational attributes etc., and after you collect all these you decide which ones you will be using. This collection is done by interviewing the end users, the power users, the stakeholders etc.; it is generally done by the team lead, the project manager or sometimes a senior consultant (4-5 years of experience). So if you are new you don't have to worry about it, but do remember that it is an important aspect of any data warehousing solution, so make sure that you have read about data modeling before attending any interview or even starting to work.
    Ans#6
    We can enhance Business Content by adding fields to it. Since BC is delivered by SAP it may not contain all the InfoObjects, InfoCubes etc. that you want to use according to your company's data model. E.g. you have a customer InfoCube (in BC) but your company uses an attribute for, say, apartment number; then instead of constructing the whole InfoCube you can add that field to the existing BC InfoCube and get going.
    Ans#7
    Tuning is the most important process in BW. Tuning is done to increase efficiency: lowering the time for loading data into a cube, lowering the time for accessing a query, lowering the time for doing a drilldown etc. Fine-tuning = lowering time (for everything possible). Tuning can be done by many things, not only partitions and aggregates; there are various other options, e.g. compression, etc.
    Ans#8
    A MultiProvider can combine various InfoProviders for reporting purposes. E.g. you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or InfoCubes, ODS objects and master data, etc. You can refer to help.sap.com for more info.
    Ans#9
    A scheduled data load means you have scheduled the loading of data for a particular date and time; you can do this in the scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or other loads, using the monitor transaction (RSMON).
    1. Procedure for a repeat delta?
    You need to set the request status to red in the monitor screen and then delete it from the ODS/cube. Then, when you open the InfoPackage again, the system will prompt you for a repeat delta.
    Also:
    Go to RSA7 -> F2 -> Update Mode -> Delta Repetition.
    Whether a delta can be repeated depends on the type of upload you are carrying out.
    1. If you are loading master data, most of the time you will change the QM status to red and then repeat the delta; the repeat is only allowed once you have made that change. Sometimes you need to investigate further if the repeat of the delta is not allowed even after the QM status has been set to red.
    If this is not the case, the source system (and therefore also the extractor) has not yet received any information regarding the last delta, and you must set the request to GREEN in the monitor using a QM action.
    The system then requests a delta again, since the last delta request has not yet occurred for the extractor.
    Afterwards, you must reset the old request that you previously set to GREEN back to RED, since it was incorrect and it would otherwise be requested by an ODS as a data target.
    Caution: if the terminated request was itself a REPEAT request, always set it to RED so that the system tries to carry out a repeat again.
    To determine whether a delta or a repeat is to be requested, the system uses ONLY the status in the monitor.
    It is irrelevant whether the request is updated in a data target somewhere.
    When activating requests in an ODS, the system checks delta repeat requests for completeness and the correct sequence.
    Each green delta/repeat request in the monitor that came from the same DataSource/source system combination must be updated in the ODS before activation, which means that in this case, you must set them back to RED in the monitor using a QM action when using the solution described above.
    If the source of the data is a DataMart, it is not just the DELTARNR field that is relevant (in the ROOSPRMSC table in the system in which the source DataMart is, which is usually your BW system, since it is a Myself extraction in this case); the status on the request tab strip is relevant as well.
    Therefore, after the last delta request has terminated, go to the administration of your data source and check whether the DataMart indicator is set for the request that you wanted to update last.
    If this is NOT the case, you must NOT request a repeat since the system would also retransfer the data of the last delta but one.
    This means, you must NOT start a delta InfoPackage which then would request a repeat because the monitor is still RED. For information about how to correct this problem, refer to the following section.
    For more information about this, see also Note 873401.
    Proceed as follows:
    Delete the rest of this request from ALL updated data targets, set the terminated request to GREEN IN THE MONITOR and request a new DELTA.
    Only if the DataMart indicator is set does the system carry out a repeat correctly and transfers only this data again.
    This means, that only in this case can you leave the monitor status as it is and restart the delta InfoPackage. Then this creates a repeat request
    In addition, you can generally also reset the DATAMART indicator and then work using a delta request after you have set the incorrect request to GREEN in the monitor.
    Simply start the delta InfoPackage after you have reset the DATAMART indicator AND after you have set the last request that was terminated to GREEN in the monitor.
    After the delta request has been carried out successfully, remember to reset the old incorrect request to RED since otherwise the problems mentioned above will occur when you activate the data in a target ODS.
    What is a process chain and how did you use it?
    A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
    B) In one of our scenarios we wanted to upload a wholesale price InfoObject holding the wholesale price for all materials, and then load the transaction data. While loading the transaction data, the update rule looked up the wholesale price in this InfoObject's master data table. This dependency of first uploading the master data and then the transaction data was handled through the process chain.
    What is a process chain and how did you use it?
    A) We used process chains to automate the delta loading process. Once you are finished with your design and testing you can automate the processes listed in RSPC. I have a real-time example in the attachment.
    1. What is a process chain and how did you use it?
    Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
    2. What is the transaction for creating process chains?
    RSPC.
    3. Explain collector processes.
    Collector processes are used to manage multiple predecessor processes that feed into the same subsequent process. The collector processes available for BW are:
    AND:
    All of the direct predecessor processes must raise an event in order for the subsequent process to be executed.
    OR:
    At least one predecessor process must send an event. The first predecessor process that sends an event triggers the subsequent process. Any additional predecessor processes that send an event will trigger the subsequent process again (only if the chain is planned as "periodic").
    EXOR (exclusive "OR"):
    Similar to the regular "OR", but there is only ONE execution of the successor processes, even if several predecessor processes raise an event.
    4. What are application processes?
    Application processes represent BW activities that are typically performed as part of BW operations. Examples include:
    Data load
    Attribute/hierarchy change run
    Aggregate rollup
    Reporting Agent settings
    5. Tell us some facts about process chains.
    Process chains are transportable: there is a button for writing to a change request when maintaining a process chain in RSPC.
    Process chains are available in the transport connection wizard (Administrator Workbench).
    If a process "dumps", it is treated in the same manner as a failed process.
    Graphical display of process chain maintenance requires the 620 SAP GUI and the SAP BW 3.0B frontend GUI.
    A special control background job runs to facilitate the execution of the other batch jobs of the process chain.
    Note your BTC process distribution, and make sure that an extra BTC process is available so the supporting control job can run immediately.
    6. What happens when a chain is activated?
    When a chain gets activated, it is copied into an active version. The processes are planned in batch as program RSPROCESS, with type and variant given as parameters and job name BI_PROCESS_<TYPE>, waiting for an event - except the trigger. The trigger is planned as specified in its variant; if it is "start via meta-chain" it is not planned in batch.
    7. Steps in process chains?
    Go to transaction RSPC.
    Follow the basic flow of a process chain:
    1. Start chain
    2. Delete BasicCube indexes
    3. Load data from the source system into the PSA
    4. Load data from the PSA into the ODS object
    5. Activate data in the ODS object
    6. Load data from the ODS object in the BasicCube
    7. Create indexes after loading for the BasicCube
    Also check out these links:
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/19683495-0501-0010-4381-b31db6ece1e9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/36693695-0501-0010-698a-a015c6aac9e1
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/9936e790-0201-0010-f185-89d0377639db
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/263de690-0201-0010-bc9f-b65b3e7ba11c
    For common data load errors check this link:
    /people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
    Assign points if useful
    Regards,
    Hari Reddy

  • Please explain plan with 'BITMAP CONVERSION TO ROWIDS'

    Hi,
    in my 9.2.0.8 I've got a plan like:
    Plan
    SELECT STATEMENT  CHOOSE  Cost: 26,104
         7 TABLE ACCESS BY INDEX ROWID UMOWY  Cost: 26,105  Bytes: 41  Cardinality: 1
              6 BITMAP CONVERSION TO ROWIDS
                   5 BITMAP AND
                        2 BITMAP CONVERSION FROM ROWIDS
                             1 INDEX RANGE SCAN UMW_PRD_KPD_KOD  Cost: 406  Cardinality: 111,930
                        4 BITMAP CONVERSION FROM ROWIDS
                             3 INDEX RANGE SCAN UMW_PRD_KPR_KOD  Cost: 13,191  Cardinality: 111,930
    As far as I know Oracle is trying to combine two indexes, so if I create a multicolumn index the plan should be better, right?
    Generally, all bitmap conversions related to b-tree indexes are trying to combine multiple indexes to deal with OR / index-combine operations, right?
    And finally, what about the AND_EQUAL hint - is that a kind of alternative to those bitmap conversion steps?
    Regards
    Greg

    As far as I know Oracle is trying to combine two indexes, so if I create a multicolumn index the plan should be better, right?
    Only you can really tell - but if this is supposed to be a "precision" query, the optimizer thinks you don't have a good index into the target data. Don't forget to consider the benefits of compressed indexes if you do follow this route.
    Generally all bitmap conversions related to b-tree indexes are trying to combine multiple indexes to deal with OR / index-combine operations, right?
    Bitmap conversions, when there are no real bitmap indexes involved, are always about combining multiple b-tree index range scans to minimise the number of reads from the table.
    And finally, what about the AND_EQUAL hint - is that a kind of alternative to those bitmap conversion steps?
    AND_EQUAL was an older mechanism for combining index range scans to minimise visits to the table. It was restricted to a maximum of 5 indexes per table; the indexes had to be single-column and non-unique, and the predicates had to be equality. The access method is deprecated in 10g. (See the following note, and the comments in particular, for more details: http://jonathanlewis.wordpress.com/2009/05/08/ioug-day-4/ )
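    A sketch of the multicolumn alternative (the column names are hypothetical, inferred from the index names in the plan, and the compression prefix would need to be validated, e.g. via ANALYZE INDEX ... VALIDATE STRUCTURE):

      -- One composite index can replace the two single-column range scans
      -- that the bitmap conversion is currently combining.
      CREATE INDEX umw_prd_kpd_kpr ON umowy (prd_kpd_kod, prd_kpr_kod)
        COMPRESS 1;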
    Regards
    Jonathan Lewis

  • Database options under UCM

    With regard to this thread (Few interesting facts about database under UCM 10g): what Oracle database options can be effectively used under UCM? A comprehensive overview of the options can be obtained here: http://www.oracle.com/us/products/database/options/index.html
    Real Application Clusters* - this option can be used to increase the database performance and availability. It is fully transparent to applications.
    Partitioning* - this option can improve performance, enable hierarchical storage management (using cheaper hardware to store large amounts of data) and help with disaster recovery (backup/restore). I believe that if documents are stored in the database, this option is a must. Even if a project does not use HSM, partitioning of large tables such as FILESTORAGE will enable: a) faster backups - once a partition is "closed", it will not change, so future backups can work only with "open" partitions and unpartitioned data; b) faster restores - large tables can be only partially restored, e.g. a few "last months", and the system can be running whilst the remaining data is restored. Watch out for partitioning of the metadata tables, though (DOCMETA, REVISIONS, DOCUMENTS)! At the least, there are no clear criteria for how these tables should be partitioned, and various checks and validations may actually require those tables to be fully restored before you can perform such basic operations as check-in.
    Advanced Security* and Database Vault* - these options may increase security when content is stored in the database (no one, not even administrators, may be able to reach the content unless authorized). The only drawback is that even if content is stored in the database, in the initial phases it is stored in the filesystem (vault) anyway, too, and the minimum retention period there is 1 day.
    I will also mention two options that might look appetizing, but UCM probably does not benefit from them too much:
    Advanced Compression* - compresses data in the database. This, and the Hybrid Columnar Compression used in Exadata, can do real magic when working with structured data (just read the report from Turkcell, who compressed 600 TB down to 50 TB - a factor of 12). For unstructured data, such as PDF or JPEG, the effect might be very small, though. Still, if you have a chance, give it a try.
    Active Data Guard* - Data Guard is a technology for disaster recovery. Advantage of Active Data Guard is that it allows using of the secondary location for read only operations, rather than leaving it idle (stand-by); this means, you might decrease sizing of both locations. With UCM, also do not forget about CONTENT TRACKER (which might require a "write" operation even for otherwise read only ones, such as DOC_INFO, GET_SEARCH_RESULTS, or retrieving a content item), but db gurus know how to handle even that. Unfortunately, Active Data Guard cannot be used with UCM at the moment, because not all the data is stored in the database and the secondary location might not be fully synchronized.
    In my opinion, other options are not so relevant for a UCM solution.

    Compression and Deduplication of SecureFiles LOBs, which is part of the Advanced Compression Option, can potentially deliver huge space savings, and performance benefits. If the content is primarily Office documents, or XML documents, or character-based (email?), then it will likely compress very well. Also, if the same file is stored multiple times, deduplication will cause the Oracle database to store only one copy, rather than storing the same document multiple times. There's more info on Advanced Compression here: http://www.oracle.com/us/products/database/options/advanced-compression/index.html
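    A sketch of what enabling those SecureFiles features looks like at table creation time (table and column names are hypothetical; COMPRESS and DEDUPLICATE require the Advanced Compression option):

      -- Store document content as a SecureFiles LOB with compression
      -- and deduplication enabled.
      CREATE TABLE doc_store (
        doc_id  NUMBER PRIMARY KEY,
        content BLOB
      )
      LOB (content) STORE AS SECUREFILE (
        COMPRESS MEDIUM
        DEDUPLICATE
      );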

  • Aggregates with Fixed values

    Hi,
    We have a requirement to create an aggregate on one of our InfoCubes. Fiscal period is going to be included as part of this aggregate. I would like to know if it is possible for this aggregate to be populated for a fixed range of periods, e.g. 008/2009 - 012/2009.
    I know that we can set this to one fixed value like 008/2009, but is a range of periods possible? If so, how can I do that?
    Thanks.

    Hi,
    To the best of my knowledge, it is not possible to give a range as a fixed value for an aggregate.
    However, I recommend going for other possible options like partitioning, compression, indexing, the OLAP cache, DB statistics etc. to improve the performance of the query.
    Regards,
    Sekhar.

  • SAP BI issues.

    Hi Experts,
    I am a novice candidate pursuing SAP BW/BI opportunities and appearing in different interviews for SAP BW/BI. Following are some of the questions that the interviewers fired at me. I appreciate your time and consideration in answering my queries.
    1) How do you initialize the setup tables to fill just the past three years of data?
    And after a full load from the setup tables and executing the delta, suppose there is later a requirement to add more application tables to the DataSource in the LO Cockpit. How should we go about this without getting duplicate records from the source system, while retaining the original delta?
    2) What are the common transformation issues and the methods to solve them?
    3) How do you handle DTP issues, e.g. if the number of records in the DSO object was 1000 and the InfoCube received just 900, bad-character problems, etc.?
    4) What is the RDA background process flow?
    5) How do you set up plan/actual comparisons, let's say in Profitability Analysis?
       Which data does the plan InfoCube contain with respect to the actual InfoCube? Can you explain with an example?
    6) In Business Content activation, how do you exclude the objects that have already been activated?
    7) Does a change run affect all aggregates, or only the aggregates containing the master data that is undergoing the change run?
    8) Can somebody send me sample functional requirements design documents/blueprints/detail design docs?
    9) What are functional support issues in SAP BI implementations?
    10) What is the fastest and best method to improve query performance?
        If I am right, is it cache settings?
    11) I am also preparing for the BI 7.0 certification exam, which is on March 7th. I need some sample questions.
    12) I need more information to prepare for SAP BI functional analyst and developer interviews.
    Thanks
    Mujtaba.
    Lots of points will be given for urgent replies.
    Edited by: Nazeeruddin Mujtaba Mohammed on Feb 16, 2008 7:20 PM

    Hi Jacky
    6) In Business Content activation, how to exclude the objects that have already been activated:
         You just have to select the particular object, open the context menu, and select "Do not install below".
    12) Need more information to prepare for SAP BI functional analyst and developer interviews.
    Normally the production support activities include
    Scheduling
    R/3 Job Monitoring
    B/W Job Monitoring
    Taking corrective action for failed data loads.
    Working on some tickets with small changes in reports or in AWB objects.
    The activities in a typical Production Support would be as follows:
    1. Data Loading - could be using process chains or manual loads.
    2. Resolving urgent user issues - helpline activities
    3. Modifying BW reports as per the need of the user.
    4. Creating aggregates in Prod system
    5. Regression testing when version/patch upgrade is done.
    6. Creating adhoc hierarchies.
    we can perform the daily activities in Production
    1. Monitoring Data load failures thru RSMO
    2. Monitoring Process Chains Daily/weekly/monthly
    3. Perform Change run Hierarchy
    4. Check Aggr's Rollup
    To add to the above
    1)check data targets are ready for reporting,
    2) No failed or cancelled jobs in sm37 monitors and Bw Monitor.
    3) All requests are loaded for day, monthly and yearly also.
    4) Also to note down time taken for loading of critical info cubes which are used for reporting.
    5) Is there any break in any schedules from your process chains.
    As the frequent failures and errors , there is no fixed reason for the load to be fail , if you want it for the interview perspective I would answer it in this way.
    a) Loads can be failed due to the invalid characters
    b) Can be because of the deadlock in the system
    c) Can be because of previous load failure , if the load is dependant on other loads
    d) Can be because of erroneous records
    e) Can be because of RFC connections
    These are some of the reasons for the load failures.
    Why there is frequent load failures during extractions? and how to analyse them?
    If these failures are related to Data, there might be data inconsistency in source system. Though we are handling properly in transfer rules. We can monitor these issues in T-code -> RSMO and PSA (failed records) and update.
    If we are talking about whole extraction process, there might be issues of work process scheduling and IDoc transfer to target system from source system. These issues can be re-initiated by canceling that specific data load and ( usually by changing Request color from Yellow - > Red in RSMO). and restart the extraction.
    What is the daily task we do in production support.How many times we will extract the data at what times.
    It depends... Data load timings are in the range of 30 mins to 8 hrs. This time is depends in number of records and kind of transfer rules you have provided. If transfer rules have some kind of round about transfer rules and updates rules has calculations for customized key figures... long times are expected..
    Usually You need to work on RSMO and see what records are failing.. and update from PSA.
    What are some of the frequent failures and errors?
    As the frequent failures and errors , there is no fixed reason for the load to be fail , if you want it for the interview perspective I would answer it in this way.
    a) Loads can be failed due to the invalid characters
    b) Can be because of the deadlock in the system
    c) Can be because of previous load failure , if the load is dependant on other loads
    d) Can be because of erroneous records
    e) Can be because of RFC connections
    These are some of the reasons for the load failures.
    for Rfc connections:
    We use SM59 for creating RFC destinations
    Some questions
    1)     RFC connection lost.
    A) We can check out in the SM59 t-code
    RFC Des
    + R/3 conn
    CRD client (our r/3 client)
    double click..test connection in menu
    2) Invalid characters while loading.
    A) Change them in the PSA & load them.
    3) ALEREMOTE user is locked.
    A) Ask your Basis team to release the user. It is mostly ALEREMOTE.
    2) Password Changed
    3) Number of incorrect attempts to login into ALEREMOTE.
    4) USE SM12 t-code to find out are there any locks.
    4) Lower case letters not allowed.
    A) Uncheck the lower case letters check box under "general" tab in the info object.
    5) While loading the data i am getting messeage that 'Record
    the field mentioned in the errror message is not mapped to any infoboject in the transfer rule.
    6) object locked.
    A) It might be locked by some other process or a user. Also check for authorizations
    7) "Non-updated Idocs found in Source System".
    8) While loading master data, one of the datapackage has a red light error message:
    Master data/text of characteristic ZCUSTSAL already deleted .
    9) extraction job aborted in r3
    A) It might have got cancelled due to running for more than the expected time, or may be cancelled by R/3 users if it is hampering the performance.
    10) request couldnt be activated because there is another request in the psa with a smaller sid
    A)
    11) repeat of last delta not possible
    12) datasource not replicated
    A) Replicate the datasource from R/3 through source system in the AWB & assign it to the infosource and activate it again.
    13) datasource/transfer structure not active.
    A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it
    14) ODS activation error.
    A) ODS activation errors can occur mainly due to following reasons-
    1.Invalid characters (# like characters)
    2.Invalid data values for units/currencies etc
    3.Invalid values for data types of char & key figures.
    4.Error in generating SID values for some data.
15) Conversion routine error.
A) Check the data format in the source.
16) Object cannot be activated, or error when activating an object.
A) Check the consistency of the object.
17) No data found (in a query).
A) Check whether the InfoProvider actually contains data, and delete any unsuccessful requests.
18) Error generating or activating update rules.
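For the recurring invalid-character problem (see item 2 above and reason 1 under item 14), a preventive fix is a transfer routine that blanks out every character BW rejects. A minimal sketch, assuming the BW 3.x routine template (RESULT is the routine's changing parameter); the allowed character set below is an assumption and must be aligned with what RSKC permits in your system:

  DATA: l_len TYPE i,
        l_off TYPE i.
* Blank out every character that is not in the permitted set.
* The allowed set below is an assumption; align it with RSKC.
  l_len = strlen( RESULT ).
  DO l_len TIMES.
    l_off = sy-index - 1.
    IF RESULT+l_off(1) CN
         ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.
      RESULT+l_off(1) = ' '.
    ENDIF.
  ENDDO.
  RETURNCODE = 0.

This way bad characters never reach the ODS/cube in the first place, instead of being corrected in the PSA after every failure.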
    1. What are the extractor types?
    • Application Specific
    o BW Content FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-Generated Extractors
    LIS, FI-SL, CO-PA
    • Cross Application (Generic Extractors)
    o DB View, InfoSet, Function Module
    2. What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5 Select the DataSources
    o LBWE Maintain DataSources and Activate Extract Structures
    o LBWG Delete Setup Tables
o OLI*BW Fill the setup tables (e.g. OLI7BW for SD sales documents)
    o RSA3 Check extraction and the data in Setup tables
    o LBWQ Check the extraction queue
    o LBWF Log for LO Extract Structures
    o RSA7 BW Delta Queue Monitor
    3. How to create a connection with LIS InfoStructures?
    • LBW0 Connecting LIS InfoStructures to BW
    4. What is the difference between ODS and InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drilldown and RRI.
    • CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
• MultiProvider: Does not have physical data. It allows access to data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.
    5. What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.
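To make the "preliminary calculations stored in global DataStructures" concrete, here is a minimal sketch of a BW 3.x start routine that buffers master data once per data package. It assumes the generated template (the package is exposed as the internal table DATAPAK); the table /BIC/PZWSPRICE and the MATERIAL field are illustrative names, not from a real system:

* Global part (declared once, visible to the other routines):
  DATA: gt_price TYPE STANDARD TABLE OF /bic/pzwsprice.

* Start routine body: buffer the prices for exactly the materials
* contained in this data package.
  REFRESH gt_price.
  IF NOT DATAPAK[] IS INITIAL.
    SELECT * FROM /bic/pzwsprice INTO TABLE gt_price
      FOR ALL ENTRIES IN DATAPAK
      WHERE material = DATAPAK-material.
    SORT gt_price BY material.
  ENDIF.

The per-record transfer/update routines can then use READ TABLE gt_price ... BINARY SEARCH instead of issuing one SELECT per record.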
    6. What is the difference between start routine and update routine, when, how and why are they called?
• The start routine runs once per data package, before the individual records are processed, while update routines run per record when the data targets are updated.
    7. What is the table that is used in start routines?
• The table always has the structure of the data target: for example, if the target is an ODS, the structure of its active table is used.
    8. Explain how you used Start routines in your project?
• Start routines are used for mass processing of records: in the start routine, all records of the data package are available and can be processed together. In one scenario we wanted to apply size percentages to forecast data. For example, material M1 is forecast at 100 in May; after applying the size percentages (Small 20%, Medium 40%, Large 20%, Extra Large 20%), we wanted four records in place of the single record coming in through the InfoPackage. This was achieved in the start routine, as sketched below.
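A hedged sketch of that size-split start routine, again assuming the BW 3.x template with DATAPAK; the fields /BIC/ZSIZE and /BIC/ZQTY and the hard-coded percentage table are illustrative assumptions:

  TYPES: BEGIN OF ty_size,
           size(2) TYPE c,
           pct     TYPE p DECIMALS 2,
         END OF ty_size.
  DATA: lt_size TYPE STANDARD TABLE OF ty_size,
        ls_size TYPE ty_size,
        ls_in   LIKE LINE OF DATAPAK,
        lt_out  LIKE TABLE OF ls_in,
        l_qty   TYPE p DECIMALS 3.
* Build the size/percentage table (hard-coded here for brevity).
  ls_size-size = 'S'.  ls_size-pct = '0.2'. APPEND ls_size TO lt_size.
  ls_size-size = 'M'.  ls_size-pct = '0.4'. APPEND ls_size TO lt_size.
  ls_size-size = 'L'.  ls_size-pct = '0.2'. APPEND ls_size TO lt_size.
  ls_size-size = 'XL'. ls_size-pct = '0.2'. APPEND ls_size TO lt_size.
* Expand each incoming record into one record per size.
  LOOP AT DATAPAK INTO ls_in.
    l_qty = ls_in-/bic/zqty.
    LOOP AT lt_size INTO ls_size.
      ls_in-/bic/zsize = ls_size-size.
      ls_in-/bic/zqty  = l_qty * ls_size-pct.
      APPEND ls_in TO lt_out.
    ENDLOOP.
  ENDLOOP.
  DATAPAK[] = lt_out.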
    9. What are Return Tables?
• When we want to return multiple records instead of a single value, we use the return table in the update routine. Example: if we have the total telephone expense for a cost center, using a return table we can output the expense per employee (see the sketch below).
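A minimal sketch of such an update routine, assuming the BW 3.x generated template with the return table enabled (COMM_STRUCTURE, RESULT_TABLE and RETURNCODE come from that template); the employee table and all field names are illustrative:

  TYPES: BEGIN OF ty_emp,
           costcenter(10) TYPE c,
           employee(8)    TYPE c,
         END OF ty_emp.
  DATA: lt_emp    TYPE STANDARD TABLE OF ty_emp,
        ls_emp    TYPE ty_emp,
        ls_result LIKE LINE OF RESULT_TABLE,
        l_count   TYPE i.
* lt_emp is assumed to be filled elsewhere (e.g. in the start routine)
* with the employees per cost center.
* First count the employees of this record's cost center...
  LOOP AT lt_emp INTO ls_emp
       WHERE costcenter = COMM_STRUCTURE-costcenter.
    l_count = l_count + 1.
  ENDLOOP.
  CHECK l_count > 0.
* ...then emit one result record per employee with his share.
  LOOP AT lt_emp INTO ls_emp
       WHERE costcenter = COMM_STRUCTURE-costcenter.
    MOVE-CORRESPONDING COMM_STRUCTURE TO ls_result.
    ls_result-employee = ls_emp-employee.
    ls_result-expense  = COMM_STRUCTURE-expense / l_count.
    APPEND ls_result TO RESULT_TABLE.
  ENDLOOP.
  RETURNCODE = 0.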
    10. How do start routine and return table synchronize with each other?
• They are independent steps: the start routine pre-processes the whole data package first; afterwards the update routine runs per record and, with a return table, can emit several result records per input record.
    11. What is the difference between V1, V2 and V3 updates?
    • V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
    • V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
    o V1 & V2 don’t need scheduling.
    • Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into the BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 update only processes the update data that is successfully processed with the V2 update.
    12. What is compression?
• Compression moves the data of an InfoCube's requests from the F fact table into the E fact table: the request IDs are deleted and duplicate records are aggregated, which saves space. After compression, data can no longer be deleted by request.
    13. What is Rollup?
    • This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.
    14. What is table partitioning and what are the benefits of partitioning in an InfoCube?
• It is the method of dividing a table to enable quick access. SAP uses fact table partitioning to improve performance. We can partition only by 0CALMONTH or 0FISCPER. Partitioning helps reports run faster, as only the relevant partitions are read, and table maintenance becomes easier. Oracle, Informix, and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server, and IBM DB2/400 do not.
    15. How many extra partitions are created and why?
• Two extra partitions are created: one for dates before the partitioning range begins and one for dates after it ends.
    16. What are the options available in transfer rule?
    • InfoObject
    • Constant
    • Routine
    • Formula
    17. How would you optimize the dimensions?
• We should spread the characteristics over the dimensions so that related ones share a dimension, taking care that no single dimension table grows beyond about 20% of the fact table size.
    18. What are Conversion Routines for units and currencies in the update rule?
• Using this option we can write ABAP code for unit/currency conversion. If we enable this flag, the unit of the key figure appears in the ABAP code as an additional parameter. For example, we can convert units in pounds to kilos, as sketched below.
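A hedged sketch of such a routine, assuming the BW 3.x generated template where the key figure's unit is exposed as the changing parameter UNIT next to RESULT (the field names are assumptions):

* Convert pounds to kilograms; pass everything else through unchanged.
  IF COMM_STRUCTURE-unit_of_wt = 'LB'.
    RESULT = COMM_STRUCTURE-/bic/zweight * '0.453592'.
    UNIT   = 'KG'.
  ELSE.
    RESULT = COMM_STRUCTURE-/bic/zweight.
    UNIT   = COMM_STRUCTURE-unit_of_wt.
  ENDIF.
  RETURNCODE = 0.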
    19. Can an InfoObject be an InfoProvider, how and why?
    • Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER as an InfoProvider and report on it.
    20. What is Open Hub Service?
• The Open Hub Service enables us to distribute data from an SAP BW system into external data marts, analytical applications, and other applications, with controlled distribution across several systems. The central object for exporting data is the InfoSpoke, in which we define the source and the target object for the data. BW thus becomes the hub of an enterprise data warehouse, and central monitoring of the distribution status in the BW system keeps the distribution transparent.
    21. How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination requirement.
    22. What is ODS?
• ODS stands for Operational Data Store; it is used for detailed storage of data. We can overwrite data in the ODS, and the data is stored in transparent tables.
    23. What are BW Statistics and what is its use?
    • They are group of Business Content InfoCubes which are used to measure performance for Query and Load Monitoring. It also shows the usage of aggregates, OLAP and Warehouse management.
    24. What are the steps to extract data from R/3?
    • Replicate DataSources
    • Assign InfoSources
    • Maintain Communication Structure and Transfer rules
• Create an InfoPackage
    • Load Data
    25. What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)
    SAP BW Interview Questions 2
1) What is a process chain? How many types are there, and how many do we use in real-time scenarios? Can we define interdependent processes, with tasks like data loading, cube compression, index maintenance, and master data & ODS activation, with the best possible performance and data integrity?
2) What is data integrity, and how can we achieve it?
3) What is index maintenance, and what is its purpose in real time?
4) When and why do we use InfoCube compression in real time?
5) What is meant by data modelling, and what does the consultant do in data modelling?
6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate Business Content)?
7) What is fine-tuning, how many types are there, and for what purpose do we tune in real time? Can tuning only be done via InfoCube partitions and aggregates, or by other means as well?
8) What is meant by MultiProvider, and for what purpose do we use MultiProviders?
9) What are scheduled and monitored data loads, and what are they for?
Ans # 1:
Process chains exist in the Administrator Workbench. Using them we can automate the extraction, transfer, transformation and loading (ETTL) processes; they allow BW administrators to schedule all activities and monitor them (transaction RSPC).
PROCESS CHAIN - Before defining a process chain, let us define a process: it is a procedure either within SAP or external to it, with a defined start and end, that runs in the background.
A PROCESS CHAIN is a set of such processes linked together in a chain: each process depends on its predecessor, and the dependencies are clearly defined in the chain.
This is normally done to automate a job or task that has to execute more than one process in order to complete.
To locate and cancel the extraction job behind a process chain in the source system:
1. Check the source system for that particular process chain.
2. Note the request ID of the PC (on the Header tab).
3. Go to SM37 in the source system.
4. Double-click the job.
5. You navigate to the job overview screen.
6. There, click the "Job Details" button.
7. A small pop-up window appears.
8. In the pop-up, take note of:
a) the executing server
b) the WP number/PID
9. Open a new SM37 session (/OSM37).
10. In it, click the "Application Servers" button.
11. You can see the different application servers; go to the executing server from step 8a and double-click it.
12. Go to the PID from step 8b.
13. On the far left you can see a checkbox.
14. Select the checkbox.
15. On the menu bar you can see "Process".
16. Under "Process" you have the option "Cancel with Core".
17. Click that option.
    Ans # 2:
Data integrity is about eliminating duplicate entries in the database and achieving normalization.
Ans # 4:
InfoCube compression collapses the requests in the F fact table into the E fact table, aggregating duplicate records. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is that once you compress, you can no longer delete data by request, so you are safe only as long as there is no error in your modeling or loads.
Compression can be done through a process chain and also manually.
    Ans#3
Indexing is a process where data is stored with an index for quick retrieval. E.g. a phone book: Prasad's number is filed under "P" and Rajesh's under "R". The phone book's organisation is an index; similarly, storing data by creating indexes on it is called indexing.
    Ans#5
Data modelling is the process where you collect the facts, the attributes associated with the facts, navigational attributes, etc., and then decide which ones you will use. This collection is done by interviewing the end users, the power users, the stakeholders, etc. It is generally done by the team lead, the project manager, or sometimes a senior consultant (4-5 years of experience), so if you are new you do not have to do it yourself. But do remember that it is an important aspect of any data warehousing solution, so make sure you have read about data modelling before attending any interview or starting to work.
    Ans#6
We can enhance Business Content by adding fields to it. Since BC is delivered by SAP, it may not contain all the InfoObjects, InfoCubes, etc. that your company's data model requires. E.g. BC delivers a customer InfoCube, but your company uses an additional attribute for, say, apartment number; instead of constructing a whole new InfoCube, you can add that field to the existing BC InfoCube and get going.
    Ans#7
Tuning is one of the most important processes in BW. Tuning is done to increase efficiency: lowering the time for loading data into a cube, for accessing a query, for doing a drill-down, etc. Fine-tuning = lowering time for everything possible. Tuning is not limited to partitions and aggregates; there are various other measures, e.g. compression, indexes, etc.
    Ans#8
A MultiProvider can combine various InfoProviders for reporting purposes: you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or InfoCubes, ODS objects and master data, etc. You can refer to help.sap.com for more information.
    Ans#9
A scheduled data load means you have scheduled the loading of data for a particular date and time (on the scheduler tab of the InfoPackage); monitored means you are monitoring that particular data load, or other loads, using transaction RSMO.
1. Procedure for repeat delta?
Set the request status to red in the monitor screen and then delete the request from the ODS/cube. When you open the InfoPackage again, the system will prompt you for a repeat delta.
Also: go to RSA7 -> F2 -> Update Mode -> Delta Repetition.
How the delta is repeated depends on the type of upload you are carrying out.
1. If you are loading master data, most of the time you change the QM status to red and then repeat the delta; the repeat is only allowed once you have made that change. Sometimes you have to investigate further if the repeat delta is still not allowed after the QM status has been set to red.
If this is not the case, the source system (and therefore also the extractor) has not yet received any information regarding the last delta, and you must set the request to GREEN in the monitor using a QM action.
    The system then requests a delta again since the last delta request has not yet occurred for the extractor.
    Afterwards, you must reset the old request that you previously set to GREEN to RED since it was incorrect and it would otherwise be requested as a data target by an ODS.
Caution: If the terminated request was itself a REPEAT request, always set it to RED so that the system tries to carry out the repeat again.
    To determine whether a delta or a repeat are to be requested, the system ONLY uses the status of the monitor.
    It is irrelevant whether the request is updated in a data target somewhere.
    When activating requests in an ODS, the system checks delta repeat requests for completeness and the correct sequence.
    Each green delta/repeat request in the monitor that came from the same DataSource/source system combination must be updated in the ODS before activation, which means that in this case, you must set them back to RED in the monitor using a QM action when using the solution described above.
If the source of the data is a DataMart, it is not just the DELTARNR field that is relevant (in the ROOSPRMSC table of the system in which the source DataMart resides, which is usually your BW system since this is a Myself extraction); the status of the request tab strip control is relevant as well.
    Therefore, after the last delta request has terminated, go to the administration of your data source and check whether the DataMart indicator is set for the request that you wanted to update last.
    If this is NOT the case, you must NOT request a repeat since the system would also retransfer the data of the last delta but one.
    This means, you must NOT start a delta InfoPackage which then would request a repeat because the monitor is still RED. For information about how to correct this problem, refer to the following section.
    For more information about this, see also Note 873401.
    Proceed as follows:
    Delete the rest of this request from ALL updated data targets, set the terminated request to GREEN IN THE MONITOR and request a new DELTA.
    Only if the DataMart indicator is set does the system carry out a repeat correctly and transfers only this data again.
This means that only in this case can you leave the monitor status as it is and restart the delta InfoPackage; this then creates a repeat request.
    In addition, you can generally also reset the DATAMART indicator and then work using a delta request after you have set the incorrect request to GREEN in the monitor.
    Simply start the delta InfoPackage after you have reset the DATAMART indicator AND after you have set the last request that was terminated to GREEN in the monitor.
    After the delta request has been carried out successfully, remember to reset the old incorrect request to RED since otherwise the problems mentioned above will occur when you activate the data in a target ODS.
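To inspect the DataMart delta pointer mentioned above, a quick check can be run in the system holding the source DataMart. A sketch with assumed selection values; DELTARNR is named in the text above, while the other ROOSPRMSC field names used here are assumptions to verify in SE11:

DATA ls_prmsc TYPE roosprmsc.
* Display the init and last-delta request numbers for one
* DataSource / receiving-system pair.
SELECT SINGLE * FROM roosprmsc INTO ls_prmsc
  WHERE oltpsource = 'ZMY_DATASOURCE'   "your DataSource (assumed)
    AND rlogsys    = 'BWPCLNT100'.      "receiving BW system (assumed)
IF sy-subrc = 0.
  WRITE: / 'Init request:', ls_prmsc-initrnr,
         / 'Last delta:  ', ls_prmsc-deltarnr.
ENDIF.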
What is a process chain and how have you used it?
A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between the processes.
B) In one of our scenarios we wanted to upload the wholesale-price InfoObject, which holds the wholesale price for all materials, and then load transaction data. While loading the transaction data, the update rules populated the wholesale price via a lookup on this InfoObject's master data table. The dependency of first uploading the master data and then the transaction data was handled through the process chain.
What is a process chain and how have you used it?
A) We have used process chains to automate the delta loading process. Once you are finished with your design and testing, you can automate the processes listed in RSPC. I have a real-time example in the attachment.
1. What is a process chain and how have you used it?
Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between the processes.
2. What is the transaction for creating process chains?
RSPC.
3. Explain collector processes.
Collector processes are used to manage multiple predecessor processes that feed into the same subsequent process. The collector processes available for BW are:
AND:
All of the direct predecessor processes must raise an event in order for the subsequent process to be executed.
OR:
At least one predecessor process must send an event; the first predecessor process that sends an event triggers the subsequent process. Any additional predecessor process that sends an event triggers the subsequent process again (only if the chain is scheduled as "periodic").
EXOR (exclusive OR):
Similar to the regular OR, but the successor process is executed only ONCE, even if several predecessor processes raise an event.
4. What are application processes?
Application processes represent BW activities that are typically performed as part of BW operations.
    Examples include:
    Data load
    Attribute/Hierarchy Change run
    Aggregate rollup
    Reporting Agent Settings
5. Tell some facts about process chains.
Process chains are transportable: there is a button for writing to a change request when maintaining a process chain in RSPC, and process chains are available in the transport connection wizard (Administrator Workbench).
If a process "dumps", it is treated in the same manner as a failed process.
The graphical display of process chain maintenance requires the 620 SAP GUI and the SAP BW 3.0B frontend GUI.
A special control background job runs to facilitate the execution of the other batch jobs of the process chain. Note your BTC process distribution, and make sure that an extra BTC process is available so the supporting control job can run immediately.
6. What happens when a chain is activated?
When a chain is activated, it is copied into the active version. The processes are scheduled in batch as program RSPROCESS, with type and variant passed as parameters, under the job name BI_PROCESS_<TYPE>, each waiting for its event, except for the trigger. The trigger is scheduled as specified in its variant; if set to "start via meta-chain", it is not scheduled in batch.
    7. Steps in process chains ?
    Go to transaction code-> RSPC
Follow the basic flow of a process chain:
    1. Start chain
    2. Delete BasicCube indexes
    3. Load data from the source system into the PSA
    4. Load data from the PSA into the ODS object
    5. Activate data in the ODS object
    6. Load data from the ODS object in the BasicCube
    7. Create indexes after loading for the BasicCube
    Regards,
    Hari

  • Updated Information About TimeCapsule Errors!

I updated the firmware on my Time Capsule and now I no longer see the arrow icon spinning! Also, when I go into Time Machine, after several seconds it closes and goes back to the desktop. The WiFi seems to be working OK, and when I check for the latest backup, the times and dates seem to be current. However, when I try to do a manual backup, Time Machine appears, then after a while disappears, and then reappears and disappears again and again!
Update One: I just noticed that when I go into my backup drive on the Time Capsule, the icon on the desktop keeps disappearing and reappearing every few seconds!
Update Two: Another odd thing happens. When the Time Capsule is turned on and the WiFi connection is active, my MacBook Pro Retina desktop refreshes every minute (I timed it)! When I unplugged the Time Capsule, the desktop behaved normally.
What the **** is going on? By the way, I did upgrade to Mavericks, and I am using a MacBook Pro Retina.

• Difference between 8i and 9i

Is there any difference between Oracle 8i and 9i?
We recently upgraded from 8i to 9i, and since then the applications run very fast.
Does anybody know the reason?
Thanks, Raghu.K

Just a glance at the new features in 9i:
    What’s new in Oracle9i?
         Automatic undo management
         Automatic Segment Space Management
         Rename Column
         Rename Constraints
         Data Compression
         Index Key Compression
         Flashback Query
         Object Privileges by DBA
         Default Temporary Tablespace
         Resumable Space Allocation
         List Partitioning
         Range-list Partitioning
         External Tables
         Tuning Advisories
         Dynamic Sampling of Optimizer Statistics
         Dynamic Database Parameters
         Dynamic Memory Management / Automatic Memory Tuning
         Server Parameter File
         Oracle Managed Files
         Using multiple block sizes
         Multi-table INSERT
         Index Skip Scan
         DBMS_METADATA
         DBMS_XPLAN
         ANSI Style Outer Joins
         Random Sampling of Data
         New Data Types
         “Upsert” or the MERGE statement
    regards

  • What is the Statistical Analysis

    Hi,
What is Statistical Analysis (which transaction code, and what should I check in it) and Index Analysis (which transaction code, and what should I check in it)? And what about RSRV?
    Please do the needful for me.
    Thanks

    Hi Gali,
Basically, statistical analysis is done for system design purposes.
You can use transaction DB02, where you can check how much space your object needs to store all its data, based on the storage type it uses.
Suppose your scheduling option is "PSA and then data target"; from these statistics you can determine how much space your data will take in BW.
You can go to the "Detailed analysis" option in this transaction and enter your object name; there you have various options to get information on things like compression and index type.
Basically, it gives you an idea of all the database aspects of your system and your object.
RSRV is basically for checking the data consistency of your objects (cubes, ODS objects, master data), and it can repair the SIDs of your objects. So after your developments and data loads, you check the data consistency through RSRV.
Hope this helps.
    If helpful please assign points
