Avoid redundancy in dropdownlistbox choice

Hi Experts,
I have a table and a dropdownListBox.
What is the easiest way to prevent duplicate entries from being shown in the dropdownListBox?
Thank you very much in advance!

Hi,
For a DDLB (DropdownListBox) we have to pass an internal table to display the values.
So the best way is to use SELECT DISTINCT while populating the internal table.
What we normally do is use a table of type TIHTTPNVP.
eg.
eg.
<%
  DATA: gt_tab TYPE tihttpnvp,
        gs_tab TYPE ihttpnvp,
        itab   TYPE STANDARD TABLE OF ztable,
        wa     TYPE ztable.

  SELECT DISTINCT field1 FROM ztable
    INTO CORRESPONDING FIELDS OF TABLE itab.

  LOOP AT itab INTO wa.
    gs_tab-name  = wa-field1.
    gs_tab-value = wa-field1.
    APPEND gs_tab TO gt_tab.
  ENDLOOP.
%>
Then pass this gt_tab to your DDLB, e.g.:
<htmlb:dropdownListBox id                = "DDLB"
                       table             = "<%= gt_tab %>"
                       nameOfKeyColumn   = "NAME"
                       nameOfValueColumn = "VALUE"
                       width             = "100%"
                       onSelect          = "onInputProcessing" />
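
If gt_tab is filled from an existing buffer rather than by a fresh SELECT, you can also deduplicate it in ABAP. A small sketch (SORT first, since DELETE ADJACENT DUPLICATES only removes neighbouring rows):
<%
  " Deduplicate an already-filled gt_tab instead of using SELECT DISTINCT.
  " Sorting first is required: DELETE ADJACENT DUPLICATES only removes
  " neighbouring rows.
  SORT gt_tab BY name value.
  DELETE ADJACENT DUPLICATES FROM gt_tab COMPARING name value.
%>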
Hope this helps.
<b><i>Do reward each useful answer..!</i></b>
Thanks,
Tatvagna.

Similar Messages

  • [svn:fx-trunk] 12883: Remove the skin classes from the halo theme project to avoid redundancy with the airframework /framework swcs.

    Revision: 12883
    Author:   [email protected]
    Date:     2009-12-12 15:53:50 -0800 (Sat, 12 Dec 2009)
    Log Message:
    Remove the skin classes from the halo theme project to avoid redundancy with the airframework/framework swcs.
    QE notes: No
    Doc notes: No
    Bugs: SDK-24293
    Reviewer: Glenn
    Tests run: Checkintests, smattering of Halo and AIR mustella tests
    Is noteworthy for integration: Yes
    Ticket Links:
        http://bugs.adobe.com/jira/browse/SDK-24293
    Modified Paths:
        flex/sdk/trunk/frameworks/projects/framework/src/FrameworkClasses.as
        flex/sdk/trunk/frameworks/projects/halo/build.xml
    Added Paths:
        flex/sdk/trunk/frameworks/projects/framework/src/mx/skins/halo/WindowBackground.as
    Removed Paths:
        flex/sdk/trunk/frameworks/projects/halo/assets/
        flex/sdk/trunk/frameworks/projects/halo/src/HaloClasses.as
        flex/sdk/trunk/frameworks/projects/halo/src/mx/skins/

  • How to Avoid Redundant Indexes on Indexed Views

    Let's say I have an indexed view - so far it only has a clustered index. I want to avoid redundant indexes. So let's say the view joins Customers and Orders. If I've already put a nonclustered index on every column in both base tables, it's a pretty sure
    bet that adding nonclustered indexes to the view will be redundant, right? Or not?  I'm not understanding this.

    The indexes on the view are not redundant even if on the same columns as the base tables.  The indexes on view columns materialize the result after joins and aggregations in the view query.
    Columns should generally be indexed only if they are used in join or where clauses.  An index on every column may be overkill. 
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Help me to avoid redundancy in the following code

    Hi,
    For selecting the required fields from tables VBAP, VBAK and MAKT, I use the following inner join,
    but it returns many duplicate records. Please correct the following code:
    SELECT VBAK~KUNNR
           VBAK~VBELN
           VBAK~AUDAT
           VBAK~VBTYP
           VBAP~MATNR
           VBAP~ZMENG
           VBAP~NETPR
           VBAP~NETWR
           MAKT~MAKTX
      INTO TABLE IT_OUTPUT
      FROM VBAK
      INNER JOIN VBAP ON VBAK~VBELN EQ VBAP~VBELN
      INNER JOIN MAKT ON VBAP~MATNR EQ MAKT~MATNR
      WHERE VBAK~VBELN IN SALESDOC AND VBAK~AUDAT IN DOCDATE
        AND VBAP~MATNR IN MATNR AND VBAP~ZMENG IN TRGQTY.
    thanks n regards,
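
    A likely cause of the duplicates is the join to MAKT, which stores one text line per language. A corrected SELECT might look like this (reusing the names from the post above; the SPRAS restriction and DISTINCT are additions):
    SELECT DISTINCT vbak~kunnr vbak~vbeln vbak~audat vbak~vbtyp
           vbap~matnr vbap~zmeng vbap~netpr vbap~netwr makt~maktx
      INTO TABLE it_output
      FROM vbak
      INNER JOIN vbap ON vbak~vbeln = vbap~vbeln
      INNER JOIN makt ON vbap~matnr = makt~matnr
      WHERE vbak~vbeln IN salesdoc
        AND vbak~audat IN docdate
        AND vbap~matnr IN matnr
        AND vbap~zmeng IN trgqty
        AND makt~spras = sy-langu.   " MAKT holds one text per language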

    Hi experts, I am having the same problem with data redundancy when I use the following code:
    PARAMETERS: P_IORDER LIKE EKKN-AUFNR.
    SELECT-OPTIONS S_JORDER FOR EKKN-EBELN.
    TYPES: BEGIN OF t_output,
             EBELN  LIKE EKKN-EBELN,
             SAKTO  LIKE EKKN-SAKTO,
             EBELP  LIKE EKPO-EBELP,
             TXZ01  LIKE EKPO-TXZ01,
             PACKNO LIKE EKPO-PACKNO,
             SUB_PACKNO LIKE ESLL-SUB_PACKNO,
           END OF t_output,
           BEGIN OF t_output2,
             PACKNO LIKE ESLL-PACKNO,
             SRVPOS LIKE ESLL-SRVPOS,
             KTEXT1 LIKE ESLL-KTEXT1,
             MENGE LIKE ESLL-MENGE,
             MEINS LIKE ESLL-MEINS,
             TBTWR LIKE ESLL-TBTWR,
           END OF t_output2.
    DATA: i_output  TYPE STANDARD TABLE OF t_output  WITH HEADER LINE,
          i_output2 TYPE STANDARD TABLE OF t_output2 WITH NON-UNIQUE KEY packno
                    WITH HEADER LINE.
    IF s_jorder[] IS INITIAL.
      SELECT a~EBELN a~SAKTO b~EBELP b~TXZ01 c~PACKNO c~SUB_PACKNO
        INTO CORRESPONDING FIELDS OF TABLE i_output
        FROM EKKN AS a
        INNER JOIN EKPO AS b ON a~EBELN = b~EBELN
        LEFT OUTER JOIN ESLL AS c ON b~PACKNO = c~PACKNO
        WHERE a~AUFNR = P_IORDER AND b~EBELN = '4500006740'.
      SELECT PACKNO
             SRVPOS
             KTEXT1
             MENGE
             MEINS
             TBTWR
        FROM ESLL
        INTO TABLE i_output2
        FOR ALL ENTRIES IN i_output
        WHERE PACKNO = i_output-sub_packno.
    ENDIF.
    LOOP AT i_output.
      READ TABLE i_output2 WITH TABLE KEY PACKNO = i_output-SUB_PACKNO.
    * Declare PACKNO as your table key
      IF SY-SUBRC EQ 0.
        WRITE: /    i_output-EBELN,
               15   i_output-EBELP,
               25   i_output-TXZ01,
               55   i_output2-SRVPOS,
               65   i_output2-KTEXT1,
               82   i_output2-MENGE,
               105  i_output2-MEINS,
               112  i_output2-TBTWR,
               130  i_output-SAKTO.
      ENDIF.
    ENDLOOP.
    Today is just my third week as an ABAP developer; everything is still new to me and I am not that familiar with it yet. That's why I need your help. Thanks!
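
    If duplicates remain after the joins, one option is to deduplicate the internal tables after selection. A small sketch (the sort/compare keys are assumptions; use whatever combination identifies a unique row for you):
    * Remove duplicate rows after selection; adjust the keys to your data.
    SORT i_output BY ebeln ebelp sub_packno.
    DELETE ADJACENT DUPLICATES FROM i_output
           COMPARING ebeln ebelp sub_packno.

    SORT i_output2 BY packno srvpos.
    DELETE ADJACENT DUPLICATES FROM i_output2 COMPARING packno srvpos.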

  • Avoiding redundant XML data

    I have a large number of instance documents with a structure much like this:
    <dataset>
         <processInformation>     
              <dataEntryBy person="42"/>     
              <person number="42" name="Joe Lastname" address="Goethestr. 10" telephone="0049 177 555 8787" email="[email protected]"/>     
              <person number="18" name="Sam Other" address="8422 King Rd." telephone="001 713 555 4774" email="[email protected]"/>     
         </processInformation>
         <someothercontent>
         </someothercontent>
    </dataset>
    The person's number attribute is a key used to reference the person elsewhere in the document.
    Now when I specify in my schema that I want the <person> elements to be stored in an extra table using xdb:SQLInline="false", XML DB will generate a person table and put all the <person> elements in there.
    My problem is: how can I avoid getting duplicate entries in that auto-generated person table, and instead have the already existing record referenced when I insert an instance document that contains a person who is already in the person table? Does anyone know a more elegant way than manually separating the <person> elements from the instance document, performing checks, and inserting them into the person table?
    Thanks,
    Oli

    Hi Oliver,
    To avoid duplicates in the nested table you should create a unique index on it.
    But in my opinion it is not possible to reuse the already stored elements.
    Therefore, if you want to avoid duplicates, you should split the information into two separate tables and then define a foreign key.
    Chris

  • Avoid redundancy in PFCG.

    Hi All,
    I am working as an SAP Security Consultant. Can anyone please guide me on how to
    reduce/avoid redundancy in PFCG, so that the same roles are not assigned to users twice?
    Best regards,
    Shashi.

    Hi,
    The best solution to avoid redundancy in PFCG is to make the entry CONDENSE_MENU_PFCG
    in the customizing table SSM_CUST with the value YES or NO.
    In role maintenance (PFCG):
    Choose the Utilities menu -> Settings; a dialog box appears where you have to check
    "Do not insert existing entries".

  • SAP BI INTERVIEW QUESTIONS

    Hi Friends,
    I recently faced an interview.
    Please send answers to these questions:
    How many data fields and key fields can we create in a DSO?
    Can you overwrite key fields or data fields?
    Which update do we use in delta queue extraction (V1, V2 or V3)?
    Which message do we get when a transported request has failed?
    What is the structural difference between an InfoCube and a DSO?
    Data loading takes a huge amount of time when we extract data from the source system to the BI system; how do we solve this? (Before, it took 3-4 hours; now data loading takes 4 days.)

    What is the difference between a Display Attribute and a Navigational Attribute? How do you make an attribute a display attribute or a navigational attribute?
    How do you load flat file data?
    How do you load hierarchy file data?
    What is HACR?
    How do you maintain HACR?
    If there is any issue in HACR, how do you resolve it?
    What is a Baby Cube?
    Why do we create Aggregates?
    What is the use of Aggregates?
    Is there any particular field on which we can create Aggregates, or can we maintain an Aggregate on any field?
    What are the different DSOs available, and what is the difference between them?
    What is a replacement path?
    What are the extractor types?
    • Application Specific
    o BW Content FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-Generated Extractors
    LIS, FI-SL, CO-PA
    • Cross Application (Generic Extractors)
    o DB View, InfoSet, Function Module
    2. What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5: Select the DataSources
    o LBWE: Maintain DataSources and Activate Extract Structures
    o LBWG: Delete Setup Tables
    o OLI*BW: Fill Setup Tables
    o RSA3: Check extraction and the data in Setup Tables
    o LBWQ: Check the extraction queue
    o LBWF: Log for LO Extract Structures
    o RSA7: BW Delta Queue Monitor
    3. How to create a connection with LIS InfoStructures?
    • LBW0 Connecting LIS InfoStructures to BW
    4. What is the difference between ODS and InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrite and data is in transparent
    tables, ideal for drilldown and RRI.
    • CUBE: Follows the star schema, we can only append data, ideal for primary
    reporting.
    • MultiProvider: Does not have physical data. It allows access to data from
    different InfoProviders (Cube, ODS, InfoObject). It is also preferred for
    reporting.
    5. What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data
    has been written to the PSA and before the transfer rules are executed.
    It allows complex computations for a key figure or a characteristic. It has no
    return value. Its purpose is to execute preliminary calculations and to store
    them in global data structures. These structures or tables can be accessed in the
    other routines. The entire DataPackage in the transfer structure format is used
    as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. They are
    like the Start Routine and are independent of the DataSource. We can use them to
    define global data and global checks.
    6. What is the difference between start routine and update routine, when, how
    and why are they called?
    • Start routine can be used to access InfoPackage while update routines are
    used while updating the Data Targets.
    7. What is the table that is used in start routines?
    • Always the table structure will be the structure of an ODS or InfoCube. For
    example if it is an ODS then active table structure will be the table.
    8. Explain how you used Start routines in your project?
    • Start routines are used for mass processing of records. In the start routine, all
    the records of the DataPackage are available for processing, so we can process all
    these records together in the start routine. In one scenario, we wanted to apply
    size % to the forecast data. For example, material M1 is forecast to be, say,
    100 in May. Then, after applying size % (Small 20%, Medium 40%, Large 20%, Extra
    Large 20%), we wanted to have 4 records against the one single record that is
    coming in the InfoPackage. This is achieved in the start routine.
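    A minimal sketch of such a start routine (BW 3.x style, where DATA_PACKAGE is the routine's table parameter; the SIZE_CAT and QUANTITY fields and the hard-coded percentages are assumptions for illustration):
    * Expand each incoming forecast record into four size records.
    TYPES: BEGIN OF ty_size,
             size_cat(2) TYPE c,
             pct         TYPE p DECIMALS 2,
           END OF ty_size.

    DATA: lt_size TYPE STANDARD TABLE OF ty_size,
          ls_size TYPE ty_size,
          lt_out  LIKE data_package OCCURS 0 WITH HEADER LINE,
          lv_base LIKE data_package-quantity.

    * Size distribution: Small 20%, Medium 40%, Large 20%, Extra Large 20%.
    ls_size-size_cat = 'S'.  ls_size-pct = '0.20'. APPEND ls_size TO lt_size.
    ls_size-size_cat = 'M'.  ls_size-pct = '0.40'. APPEND ls_size TO lt_size.
    ls_size-size_cat = 'L'.  ls_size-pct = '0.20'. APPEND ls_size TO lt_size.
    ls_size-size_cat = 'XL'. ls_size-pct = '0.20'. APPEND ls_size TO lt_size.

    LOOP AT data_package.
      lv_base = data_package-quantity.
      LOOP AT lt_size INTO ls_size.
        lt_out = data_package.
        lt_out-size_cat = ls_size-size_cat.
        lt_out-quantity = lv_base * ls_size-pct.
        APPEND lt_out.
      ENDLOOP.
    ENDLOOP.

    * Replace the package contents with the expanded records.
    data_package[] = lt_out[].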
    9. What are Return Tables?
    • When we want to return multiple records, instead of single value, we use the
    return table in the Update Routine. Example: If we have total telephone expense
    for a Cost Center, using a return table we can get
    expense per employee.
    10. How do start routine and return table synchronize with each other?
    • The return table is used to return multiple values following the execution of the
    start routine.
    11. What is the difference between V1, V2 and V3 updates?
    • V1 Update: It is a Synchronous update. Here the Statistics update is carried
    out at the same time as the document update (in the application
    tables).
    • V2 Update: It is an Asynchronous update. Statistics update and the Document
    update take place as different tasks.
    o V1 & V2 don't need scheduling.
    • Serialized V3 Update: The V3 collective update must be scheduled as a job
    (via LBWE). Here, document data is collected in the order it was created and
    transferred into the BW as a batch job. The transfer sequence may not be the
    same as the order in which the data was created in all scenarios. V3 update
    only processes the update data that is successfully processed with the V2
    update.
    12. What is compression?
    • It is a process that collapses the requests of an InfoCube, deleting the request IDs; this saves space.
    13. What is Rollup?
    • This is used to load new DataPackages (requests) into the InfoCube
    aggregates. If we have not performed a rollup then the new InfoCube data will
    not be available while reporting on the aggregate.
    14. What is table partitioning and what are the benefits of partitioning in an
    InfoCube?
    • It is the method of dividing a table which would enable a quick reference.
    SAP uses fact file partitioning to improve performance. We can partition only
    at 0CALMONTH or 0FISCPER. Table partitioning helps to run the report faster as
    data is stored in the relevant partitions. Also table maintenance becomes
    easier. Oracle, Informix, and IBM DB2/390 support table partitioning, while SAP DB,
    Microsoft SQL Server, and IBM DB2/400 do not support table partitioning.
    15. How many extra partitions are created and why?
    • Two extra partitions are created: one for dates before the begin date and one for
    dates after the end date.
    16. What are the options available in transfer rule?
    • InfoObject
    • Constant
    • Routine
    • Formula
    17. How would you optimize the dimensions?
    • We should define as many dimensions as possible, and we have to take care that
    no single dimension exceeds 20% of the fact table size.
    18. What are Conversion Routines for units and currencies in the update rule?
    • Using this option we can write ABAP
    code for units/currencies conversion. If we enable this flag, then the unit of the key
    figure appears in the ABAP code as an additional parameter. For example, we can
    convert units in pounds to kilos.
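    As a toy illustration of the pounds-to-kilos case (plain ABAP; in a real update rule this logic sits inside the generated conversion routine, and the factor is approximate):
    * Toy pounds-to-kilos conversion; the factor is approximate.
    CONSTANTS: c_lb_to_kg TYPE p DECIMALS 5 VALUE '0.45359'.

    DATA: lv_pounds TYPE p DECIMALS 3 VALUE '150.000',
          lv_kilos  TYPE p DECIMALS 3.

    lv_kilos = lv_pounds * c_lb_to_kg.
    WRITE: / lv_pounds, 'LB =', lv_kilos, 'KG'.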
    19. Can an InfoObject be an InfoProvider, how and why?
    • Yes, when we want to report on Characteristics or Master Data. We have to
    right click on the InfoArea and select "Insert characteristic as data
    target". For example, we can make 0CUSTOMER as an InfoProvider and report
    on it.
    20. What is Open Hub Service?
    • The Open Hub Service enables us to distribute data from an SAP BW system into
    external Data Marts, analytical applications, and other applications. We can
    ensure controlled distribution using several systems. The central object for
    exporting data is the InfoSpoke. We can define the source and the target object
    for the data. BW becomes a hub of an enterprise data warehouse.
    The distribution of data becomes clear through central monitoring from the
    distribution status in the BW system.
    21. How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination
    requirement.
    22. What is ODS?
    • ODS (Operational Data Store) is used for detailed storage of data. We can overwrite
    data in the ODS. The data is stored in transparent tables.
    23. What are BW Statistics and what is its use?
    • They are a group of Business Content InfoCubes which are used to measure
    performance for query and load monitoring. They also show the usage of
    aggregates, OLAP and warehouse management.
    http://www.ittestpapers.com/articles/713/3/SAP-BW-Interview-Questions---Part-A/Page3.html
    Communication Structure and Transfer
    rules
    • Create an InfoPackage
    • Load Data
    25. What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)
    Q) Under which menu path is the Test Workbench to be found, including in
    earlier Releases?
    The menu path is: Tools - ABAP Workbench - Test - Test Workbench.
    Q) I want to delete a BEx query that is in Production system through request. Is
    anyone aware about it?
    A) Have you tried the RSZDELETE transaction?
    Q) Errors while monitoring process chains.
    A) During data loading. Apart from them, in process chains you add so many
    process types, for example after loading data into Info Cube, you rollup data
    into aggregates, now this rolling up of data into aggregates is a process type
    which you keep after the process type for loading data into Cube. This rolling
    up into aggregates might fail.
    Another one is after you load data into ODS, you activate ODS data (another
    process type) this might also fail.
    Q) In Monitor -> Details (Header/Status/Details) -> Under Processing (data
    packet): Everything OK -> Context menu of Data Package 1 (1 Records): Everything
    OK -> Simulate update. (Here we can debug update rules or transfer rules.)
    SM50 -> Program/Mode -> Program -> Debugging, and debug this work process.
    Q) PSA Cleansing.
    A) You know how to edit PSA. I don't think you can delete single records. You
    have to delete entire PSA data for a request.
    Q) Can we make a datasource to support delta.
    A) If this is a custom (user-defined) datasource you can make the datasource
    delta enabled. While creating the datasource from RSO2, after entering the datasource
    name and pressing Create, on the next screen there is one button at the top
    which says Generic Delta. If you want more details about this, there is a
    chapter in the Extraction book; you will find it in the last pages.
    Generic delta services: -
    Supports delta extraction for generic extractors according to:
    Time stamp
    Calendar day
    Numeric pointer, such as document number & counter
    Only one of these attributes can be set as a delta attribute.
    Delta extraction is supported for all generic extractors, such as tables/views,
    SAP Query and function modules
    The delta queue (RSA7) allows you to monitor the current status of the delta
    attribute
    Q) Workbooks, as a general rule, should be transported with the
    role.
    Here are a couple of scenarios:
    1. If both the workbook and its role have been previously transported, then the
    role does not need to be part of the transport.
    2. If the role exists in both dev and the target system but the workbook has
    never been transported, then you have a choice of transporting the role
    (recommended) or just the workbook. If only the workbook is transported, then
    an additional step will have to be taken after import: Locate the WorkbookID
    via Table RSRWBINDEXT (in Dev and verify the same exists in the target system)
    and proceed to manually add it to the role in the target system via Transaction
    Code PFCG -- ALWAYS use control c/control v copy/paste for manually adding!
    3. If the role does not exist in the target system you should transport both
    the role and workbook. Keep in mind that a workbook is an object unto itself
    and has no dependencies on other objects. Thus, you do not receive an error
    message from the transport of 'just a workbook' -- even though it may not be
    visible, it will exist (verified via Table RSRWBINDEXT).
    Overall, as a general rule, you should transport roles with workbooks.
    Q) How much time does it take to extract 1 million (10 lakhs) records into
    an infocube?
    A. This depends, if you have complex coding in update rules it will take longer
    time, or else it will take less than 30 minutes.
    Q) What are the five ASAP Methodologies?
    A: Project Preparation, Business Blueprint, Realization, Final Preparation & Go-Live - support.
    1. Project Preparation: In this phase, decision makers define clear project
    objectives and an efficient decision-making process (i.e. discussions with the
    client, like what are his needs and requirements etc.). Project managers
    will be involved in this phase (I guess).
    A Project Charter is issued and an implementation strategy is outlined in this
    phase.
    2. Business Blueprint: It is a detailed documentation of your company's
    requirements. (i.e. what are the objects we need to develop are modified
    depending on the client's requirements).
    3. Realization: In this phase the implementation of the project takes place (development
    of objects etc.), and we are involved in the project from here on.
    4. Final Preparation: Final preparation before going live i.e. testing,
    conducting pre-go-live, end user training etc.
    End user training is given that is in the client site you train them how to
    work with the new environment, as they are new to the technology.
    5. Go-Live & support: The project has gone live and it is into production.
    The Project team will be supporting the end users.
    Q) What is the landscape of R/3 and what is the landscape of BW? Landscape of R/3: not
    sure.
    The landscape of BW: you have the development system, testing system, and production system.
    Development system: All the implementation part is done in this sys. (I.e.,
    Analysis of objects developing, modification etc) and from here the objects are
    transported to the testing system, but before transporting an initial test
    known as Unit testing
    (testing of objects) is done in the development sys.
    Testing/Quality system: quality check is done in this system and integration
    testing is done.
    Production system: All the extraction part takes place in this sys.
    Q) How do you measure the size of infocube?
    A: In no of records.
    Q). Difference between infocube and ODS?
    A: An Infocube is structured as a star schema (extended) where a fact table is
    surrounded by different dimension tables that are linked with DIM IDs. Data-wise,
    you will have aggregated data in the cubes. No overwrite functionality.
    ODS is a flat structure (flat table) with no star schema concept and which will
    have granular data (detailed level). Overwrite functionality.
    Flat file
    datasources do not support 0RECORDMODE in extraction.
    0RECORDMODE values: X = before, ' ' = after, N = new, A = additive, D = delete, R = reverse.
    Q) Difference between display attributes and navigational attributes?
    A: A display attribute is one which is used only for display purposes in the
    report, whereas a navigational attribute is used for drilling down in the
    report. We don't need to maintain a navigational attribute in the cube as a
    characteristic (that is the advantage) to drill down.
    Q. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
    A: But how is it possible? If you load it manually twice, then you can delete
    it by requestID.
    Q. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
    Sure you can. ODS is nothing but a table.
    Q. CAN NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
    A) Yes of course. For example, for loading text and hierarchies we use
    different data sources but the same InfoSource.
    Q. BRIEF THE DATAFLOW IN BW.
    A) Data flows from the transactional system to the analytical system (BW). DataSources
    on the transactional system need to be replicated on the BW side and attached to
    an InfoSource and update rules respectively.
    Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER
    RULES?
    Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
    FULL and DELTA.
    Q) AS WE USE Sbwnn, sbiw1, sbiw2 for delta update in LIS THEN
    WHAT IS THE PROCEDURE IN LO-COCKPIT?
    No LIS in LO cockpit. We will have datasources that can be maintained (append
    fields). Refer to the white paper
    on LO-Cockpit extractions.
    Q) Why we delete the setup tables (LBWG) & fill them (OLI*BW)?
    A) Initially we don't delete the setup tables, but when we change the extract
    structure we go for it. We are changing the extract structure, which means
    there are some newly added fields that were not there before. So to get the
    required data (i.e. only the data which is required is taken, and to avoid
    redundancy) we delete and then fill the setup tables.
    To refresh the statistical data.
    The extraction setup reads the dataset that you want to process (such as
    customer orders, with tables like VBAK and VBAP) and fills the relevant communication
    structure with the data. The data is stored in cluster
    tables from where it is read when the initialization is run. It is important
    that during initialization phase, no one generates or modifies application
    data, at least until the tables can be set up.
    Q) SIGNIFICANCE of ODS?
    It holds granular data (detailed level).
    Q) WHERE THE PSA DATA IS STORED?
    In PSA table.
    Q) WHAT IS DATA SIZE?
    The volume of data one data target holds (in no. of records)
    Q) Different types of INFOCUBES.
    Basic, Virtual (remote, sap remote and multi)
    A Virtual Cube is used, for example, if you consider railway reservations, where all the
    information has to be updated online. For designing a Virtual Cube you have
    to write a function module that links to the table; a Virtual Cube is like
    a structure, and whenever the table is updated the Virtual Cube will fetch the
    data from the table and display the report online. FYI, you will find more
    information at https://www.sdn.sap.com/sdn/index.sdn - search for "Designing Virtual Cube"
    and you will get good material on designing the function module.
    Q) INFOSET QUERY.
    Can be made of ODS's and Characteristic InfoObjects with masterdata.
    Q) IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
    In R/3 or in BW? 2 in R/3 and 2 in BW
    Q) ROUTINES?
    Exist in the InfoObject, transfer routines, update routines and start routine
    Q) BRIEF SOME STRUCTURES USED IN BEX.
    Rows and Columns, you can create structures.
    Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
    Different Variable's are Texts, Formulas, Hierarchies, Hierarchy nodes &
    Characteristic values.
    Variable Types are
    Manual entry /default value
    Replacement path
    SAP exit
    Customer exit
    Authorization
    Q) HOW MANY LEVELS YOU CAN GO IN REPORTING?
    You can drill down to any level by using Navigational attributes and jump
    targets.
    Q) WHAT ARE INDEXES?
    Indexes are database indexes, which help in retrieving data quickly.
    Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
    Help! Refer documentation
    Q) IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
    No.
    Q) WHAT IS THE SIGNIFICANCE OF KPI'S?
    KPI's indicate the performance of a company. These are key figures
    Q) AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?
    After image (correct me if I am wrong)
    Q) REPORTING AND RESTRICTIONS.
    Help! Refer documentation.
    Q) TOOLS USED FOR PERFORMANCE TUNING.
    ST22, number ranges, deleting indexes before load, etc.
    Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA DAILY?
    There should be some tool to run the job daily (SM37 jobs).
    Q) AUTHORIZATIONS.
    Profile generator
    Q) WEB REPORTING.
    What are you expecting??
    Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?
    Of course
    Q) PROCEDURES OF REPORTING ON MULTICUBES
    Refer help. What are you expecting? MultiCube works on Union condition
    Q) EXPLAIN TRANSPORTATION OF OBJECTS?
    Dev -> Q and Dev -> P
    Q) What types of partitioning are there for BW?
    There are two partitioning performance aspects for BW (Cube & PSA):
    A) Query Data Retrieval Performance Improvement:
    Partitioning by (say) date range improves data retrieval by making best use of
    database [data range] execution plans and indexes (of, say, the Oracle database engine).
    B) Transactional Load Partitioning Improvement:
    Partitioning based on expected load volumes and data element sizes improves
    data loading into PSA and Cubes by InfoPackages (e.g. without timeouts).
    Q) How can I compare data in R/3 with data in a BW Cube after the daily delta
    loads? Are there any standard procedures for checking them or matching the
    number of records?
    A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the
    number of records extracted. Then go to BW Monitor to check the number of
    records in the PSA and check to see if it is the same & also in the monitor
    header tab.
    A) RSA3 is a simple extractor checker program that allows you to rule out
    extract problems in R/3. It is simple to use, but only really tells you if the
    extractor works. Since records that get updated into Cubes/ODS structures are
    controlled by Update Rules, you will not be able to determine what is in the
    Cube compared to what is in the R/3 environment. You will need to compare
    records on a 1:1 basis against records in R/3 transactions for the functional
    area in question. I would recommend enlisting the help of the end user community
    to assist since they presumably know the data.
    To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute
    and you will see the record count, you can also go to display that data. You
    are not modifying anything so what you do in RSA3 has no effect on data quality
    afterwards. However, it will not tell you how many records should be expected
    in BW for a given load. You have that information in the monitor RSMO during
    and after data loads. From RSMO for a given load you can determine how many
    records were passed through the transfer rules from R/3, how many targets were
    updated, and how many records passed through the Update Rules. It also gives
    you error messages from the PSA.
    Q) Types of Transfer Rules?
    A) Field to Field mapping, Constant, Variable & routine.
    Q) Types of Update Rules?
    A) (Check box), Return table
    Q) Transfer Routine?
    A) Routines, which we write in, transfer rules.
    Q) Update Routine?
    A) Routines, which we write in Update rules
    Q) What is the difference between writing a routine in transfer rules and
    writing a routine in update rules?
    A) If you are using the same InfoSource to update data in more than one data
    target, it's better to write it in the transfer rules, because you can assign one InfoSource
    to more than one data target, whereas whatever logic you write in the update rules
    is specific to one particular data target.
    Q) Routine with Return Table.
    A) Update rules generally have only one return value. However, you can create a
    routine in the tab strip "key figure calculation" by choosing the checkbox Return
    table. The corresponding key figure routine then no longer has a return value,
    but a return table. You can then generate as many key figure values as you
    like from one data record.
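    A minimal sketch of what the body of such a routine might look like (BW 3.x style, where COMM_STRUCTURE is the incoming record and RESULT_TABLE the generated return table; the employee table G_T_EMPLOYEE/G_S_EMPLOYEE and all field names are assumptions):
    * Split one cost-center telephone expense into one row per employee.
    * G_T_EMPLOYEE / G_S_EMPLOYEE are assumed globals filled in the start routine.
    DATA: lv_count TYPE i.

    LOOP AT g_t_employee INTO g_s_employee
         WHERE costcenter = comm_structure-costcenter.
      lv_count = lv_count + 1.
    ENDLOOP.
    CHECK lv_count > 0.

    LOOP AT g_t_employee INTO g_s_employee
         WHERE costcenter = comm_structure-costcenter.
      result_table-employee = g_s_employee-employee.
      result_table-amount   = comm_structure-amount / lv_count.
      APPEND result_table.
    ENDLOOP.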
    Q) Start routines?
    A) Start routines can be written in both update rules and transfer rules. Suppose
    you want to restrict (delete) some records based on conditions before they get
    loaded into the data targets; then you can specify this in the update rules start
    routine.
    Ex: DELETE DATA_PACKAGE WHERE <condition> - that means it will delete records based on the
    condition.
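    For instance, a one-line filter in the start routine might look like this (the /BIC/ZSTATUS field and the value 'X' are assumptions for illustration):
    * Drop flagged records before they reach the data target.
    * /BIC/ZSTATUS and the value 'X' are assumed names for illustration.
    DELETE data_package WHERE /bic/zstatus = 'X'.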
    Q) X & Y Tables?
    X-table = A table to link material SIDs with SIDs for time-independent
    navigation attributes.
    Y-table = A table to link material SIDs with SIDs for time-dependent navigation
    attributes.
    There are four types of sid tables
    X time independent navigational attributes sid tables
    Y time dependent navigational attributes sid tables
    H hierarchy sid tables
    I hierarchy structure sid tables
    Q) Filters & Restricted Key figures (real time example)
    Restricted KFs you can have for an SD cube: billed quantity, billing value, and number
    of billing documents as RKFs.
    Q) Line-Item Dimension (give me a real-time example)
    Line-Item Dimension: invoice number or document number is a real-time example.
    Q) What does the number in the 'Total' column in Transaction RSA7 mean?
    A) The 'Total' column displays the number of LUWs that were written in the
    delta queue and that have not yet been confirmed. The number includes the LUWs
    of the last delta request (for repetition of a delta request) and the LUWs for
    the next delta request. A LUW only disappears from the RSA7 display when it has
    been transferred to the BW System and a new delta request has been received
    from the BW System.
    Q) How do I know which table (in SAP BW) contains the technical name/description
    and creation date of a particular report (reports created using BEx
    Analyzer)?
    A) There is no such single table in BW. If you want to know such details, press the
    Properties button while opening a particular query and you will see all
    the details that you wanted.
    You will find your information about technical names and description about
    queries in the following tables. Directory of all reports (Table RSRREPDIR) and
    Directory of the reporting component elements (Table RSZELTDIR) for workbooks
    and the connections to queries check Where- used list for reports in workbooks
    (Table RSRWORKBOOK) Titles of Excel Workbooks in InfoCatalog (Table
    RSRWBINDEXT)
    Q) What is a LUW in the delta queue?
    A) A LUW from the point of view of the delta queue can be an individual
    document, a group of documents from a collective run or a whole data packet of
    an application
    extractor.
    Q) Why does the number in the 'Total' column in the overview screen of
    Transaction RSA7 differ from the number of data records that is displayed when
    you call the detail view?
    A) The number on the overview screen corresponds to the total of LUWs (see also
    first question) that were written to the qRFC queue and that have not yet been
    confirmed. The detail screen displays the records contained in the LUWs. Both,
    the records belonging to the previous delta request and the records that do not
    meet the selection conditions of the preceding delta init requests are filtered
    out. Thus, only the records that are ready for the next delta request are
    displayed on the detail screen. In the detail screen of Transaction RSA7, a
    possibly existing customer exit is not taken into account.
    Q) Why does Transaction RSA7 still display LUWs on the overview screen after
    successful delta loading?
    A) Only when a new delta has been requested does the source system learn that
    the previous delta was successfully loaded to the BW System. Then, the LUWs of
    the previous delta may be confirmed (and also deleted). In the meantime, the
    LUWs must be kept for a possible delta request repetition. In particular, the
    number on the overview screen does not change when the first delta was loaded
    to the BW System.
    Q) Why are selections not taken into account when the delta queue is filled?
    A) Filtering according to selections takes place when the system reads from the
    delta queue. This is necessary for reasons of performance.
    Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has
    also been loaded successfully?
    It is most likely that this is a DataSource that does not send delta data to
    the BW System via the delta queue but directly via the extractor (delta for
    master data using ALE change pointers). Such a DataSource should not be
    displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
    Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the
    loading procedure from the delta queue?
    A) The impact is limited. If performance problems are related to the loading
    process from the delta queue, then refer to the application-specific notes (for
    example in the CO-PA area, in the logistics cockpit area and so on).
    Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as
    effective for the delta queue as for a full update. Please note, however, that
    LUWs are not split during data loading for consistency reasons. This means that
    when very large LUWs are written to the DeltaQueue, the actual package size may
    differ considerably from the MAXSIZE and MAXLINES parameters.
    Q) Why does it take so long to display the data in the delta queue (for example
    approximately 2 hours)?
    A) With Plug In 2001.1 the display was changed: the user has the option of
    defining the amount of data to be displayed, to restrict it, to selectively
    choose the number of a data record, to make a distinction between the 'actual'
    delta data and the data intended for repetition and so on.
    Q) What is the purpose of function 'Delete data and meta data in a queue' in
    RSA7? What exactly is deleted?
    A) You should act with extreme caution when you use the deletion function in
    the delta queue. It is comparable to deleting an InitDelta in the BW System and
    should preferably be executed there. You do not only delete all data of this
    DataSource for the affected BW System, but also lose the entire information
    concerning the delta initialization. Then you can only request new deltas after
    another delta initialization.
    When you delete the data, the LUWs kept in the qRFC queue for the corresponding
    target system are confirmed. Physical deletion only takes place in the qRFC
    outbound queue if there are no more references to the LUWs.
    The deletion function is for example intended for a case where the BW System,
    from which the delta initialization was originally executed, no longer exists
    or can no longer be accessed.
    Q) Why does it take so long to delete from the delta queue (for example half a
    day)?
    A) Import PlugIn 2000.2 patch 3. With this patch the performance during
    deletion is considerably improved.
    Q) Why is the delta queue not updated when you start the V3 update in the
    logistics cockpit area?
    A) It is most likely that a delta initialization had not yet run or that the
    delta initialization was not successful. A successful delta initialization (the
    corresponding request must have QM status 'green' in the BW System) is a
    prerequisite for the application data being written in the delta queue.
    Q) What is the relationship between RSA7 and the qRFC monitor (Transaction
    SMQ1)?
    A) The qRFC monitor basically displays the same data as RSA7. The internal
    queue name must be used for selection on the initial screen of the qRFC
    monitor. This is made up of the prefix 'BW', the client and the short name of
    the DataSource. For DataSources whose name are 19 characters long or shorter,
    the short name corresponds to the name of the DataSource. For DataSources whose
    name is longer than 19 characters (for delta-capable DataSources only possible
    as of PlugIn 2001.1) the short name is assigned in table ROOSSHORTN.
    In the qRFC monitor you cannot distinguish between repeatable and new LUWs.
    Moreover, the data of a LUW is displayed in an unstructured manner there.
    Q) Why is there data in the delta queue although the V3 update was not started?
    A) Data was posted in background. Then, the records are updated directly in the
    delta queue (RSA7). This happens in particular during automatic goods receipt
    posting (MRRS). There is no duplicate transfer of records to the BW system. See
    Note 417189.
    Q) Why does button 'Repeatable' on the RSA7 data details screen not only show
    data loaded into BW during the last delta but also data that were newly added,
    i.e. 'pure' delta records?
    A) It was programmed in a way that the request in repeat mode fetches both
    actually repeatable (old) data and new data from the source system.
    Q) I loaded several delta inits with various selections. For which one is the
    delta loaded?
    A) For delta, all selections made via delta inits are summed up. This means, a
    delta for the 'total' of all delta initializations is loaded.
    Q) How many selections for delta inits are possible in the system?
    A) With simple selections (intervals without complicated join conditions or
    single values), you can make up to about 100 delta inits. It should not be
    more.
    With complicated selection conditions, it should be only up to 10-20 delta
    inits.
    Reason: With many selection conditions that are joined in a complicated way,
    too many 'where' lines are generated in the generated ABAP
    source code that may exceed the memory limit.
    Q) I intend to copy the source system, i.e. make a client copy. What will
    happen with my delta? Should I initialize again after that?
    A) Before you copy a source client or source system, make sure that your deltas
    have been fetched from the DeltaQueue into BW and that no delta is pending.
    After the client copy, an inconsistency might occur between BW delta tables and
    the OLTP delta tables as described in Note 405943. After the client copy, Table
    ROOSPRMSC will probably be empty in the OLTP since this table is
    client-independent. After the system copy, the table will contain the entries
    with the old logical system name that are no longer useful for further delta
    loading from the new logical system. The delta must be initialized in any case
    since delta depends on both the BW system and the source system. Even if no
    dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you
    should expect that the delta has to be initialized after the copy.
    Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of
    processes?
    A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW
    queues only after informing the BW Support or only if this is explicitly
    requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.
    Q) Despite the delta request being started after completion of the
    collective run (V3 update), it does not contain all documents. Only another
    delta request loads the missing documents into BW. What is the cause for this
    "splitting"?
    A) The collective run submits the open V2 documents for processing to the task
    handler, which processes them in one or several parallel update processes in an
    asynchronous way. For this reason, plan a sufficiently large "safety time
    window" between the end of the collective run in the source system and the
    start of the delta request in BW. An alternative solution where this problem
    does not occur is described in Note 505700.
    Q) Despite my deleting the delta init, LUWs are still written into the
    DeltaQueue?
    A) In general, delta initializations and deletions of delta inits should always
    be carried out at a time when no posting takes place. Otherwise, buffer
    problems may occur: If a user started the internal mode at a time when the
    delta initialization was still active, he/she posts data into the queue even
    though the initialization had been deleted in the meantime. This is the case in
    your system.
    Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some
    entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What
    do these statuses mean? Which values in the field 'Status' mean what and which
    values are correct and which are alarming? Are the statuses BW-specific or
    generally valid in qRFC?
    A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was read
    once either in a delta request or in a repetition of the delta request.
    However, this does not mean that the record has successfully reached the BW
    yet. The status READY in the TRFCQOUT and RECORDED in the ARFCSSTATE means that
    the record has been written into the DeltaQueue and will be loaded into the BW
    with the next delta request or a repetition of a delta. In any case only the
    statuses READ, READY and RECORDED in both tables are considered to be valid.
    The status EXECUTED in TRFCQOUT can occur temporarily. It is set before
    starting a DeltaExtraction for all records with status READ present at that
    time. The records with status EXECUTED are usually deleted from the queue in
    packages within a delta request directly after setting the status before
    extracting a new delta. If you see such records, it means that either a process
    which is confirming and deleting records which have been loaded into the BW is
    successfully running at the moment, or, if the records remain in the table for
    a longer period of time with status EXECUTED, it is likely that there are
    problems with deleting the records which have already been successfully been
    loaded into the BW. In this state, no more deltas are loaded into the BW. Every
    other status is an indicator for an error or an inconsistency. NOSEND in SMQ1
    means nothing (see note 378903).
    The value 'U' in field 'NOSEND' of table TRFCQOUT is discomforting.
    Q) The extract structure was changed when the DeltaQueue was empty. Afterwards
    new delta records were written to the DeltaQueue. When loading the delta into
    the PSA, it shows that some fields were moved. The same result occurs when the
    contents of the DeltaQueue are listed via the detail display. Why are the data
    displayed differently? What can be done?
    Make sure that the change of the extract structure is also reflected in the
    database and that all servers are synchronized. We recommend to reset the
    buffers using Transaction $SYNC. If the extract structure change is not
    communicated synchronously to the server where delta records are being created,
    the records are written with the old structure until the new structure has been
    generated. This may have disastrous consequences for the delta.
    When the problem occurs, the delta needs to be re-initialized.
    Q) How and where can I control whether a repeat delta is requested?
    A) Via the status of the last delta in the BW Request Monitor. If the request
    is RED, the next load will be of type 'Repeat'. If you need to repeat the last
    load for certain reasons, set the request in the monitor to red manually. For
    the contents of the repeat see Question 14. Delta requests set to red despite
    the data already being updated lead to duplicate records in a subsequent repeat,
    if they have not been deleted from the data targets concerned beforehand.
    Q) As of PI 2003.1, the Logistic Cockpit offers various types of update
    methods. Which update method is recommended in logistics? According to which
    criteria should the decision be made? How can I choose an update method in
    logistics?
    See the recommendation in Note 505700.
    Q) Are there particular recommendations regarding the data volume the
    DeltaQueue may grow to without facing the danger of a read failure due to
    memory problems?
    A) There is no strict limit (except for the restricted number range of the
    24-digit QCOUNT counter in the LUW management table - which is of no practical
    importance, however - or the restrictions regarding the volume and number of
    records in a database table).
    When estimating "smooth" limits, both the number of LUWs is important
    and the average data volume per LUW. As a rule, we recommend to bundle data
    (usually documents) already when writing to the DeltaQueue to keep number of
    LUWs small (partly this can be set in the applications, e.g. in the Logistics
    Cockpit). The data volume of a single LUW should not be considerably larger
    than 10% of the memory available to the work process for data extraction
    (in a 32-bit architecture with a memory volume of about 1GByte per work
    process, 100 Mbytes per LUW should not be exceeded). That limit is of rather
    small practical importance as well since a comparable limit already applies
    when writing to the DeltaQueue. If the limit is observed, correct reading is
    guaranteed in most cases.
    If the number of LUWs cannot be reduced by bundling application transactions,
    you should at least make sure that the data are fetched from all connected BWs
    as quickly as possible. But for other, BW-specific, reasons, the frequency
    should not be higher than one DeltaRequest per hour.
    To avoid memory problems, a program-internal limit ensures that never more than
    1 million LUWs are read and fetched from the database per DeltaRequest. If this
    limit is reached within a request, the DeltaQueue must be emptied by several
    successive DeltaRequests. We recommend, however, to try not to reach that limit
    but trigger the fetching of data from the connected BWs already when the number
    of LUWs reaches a 5-digit value.
    Q) I would like to display the date the data was uploaded on the
    report. Usually, we load the transactional data nightly. Is there any easy way
    to include this information on the report for users? So that they know the
    validity of the report.
    A) If I understand your requirement correctly, you want to display the date on
    which data was loaded into the data target from which the report is being
    executed. If so, configure your workbook to display the text elements in
    the report. This displays the relevance-of-data field, which is the date on which
    the data load took place.
    Q) Can we filter the fields at Transfer Structure?
    Q) Can we load data directly into an InfoObject without extraction? Is it
    possible?
    Yes. We can copy from another InfoObject if it is the same. We load data from the PSA if
    it is already in the PSA.
    Q) HOW MANY DAYS CAN WE KEEP THE DATA IN PSA, IF WE ARE SCHEDULED DAILY, WEEKLY
    AND MONTHLY?
    a) We can set the time.
    Q) HOW CAN YOU GET THE DATA FROM THE CLIENT IF YOU ARE WORKING ON OFFSHORE PROJECTS?
    THROUGH WHICH NETWORK?
    a) VPN - Virtual Private Network. VPN is one sort of network
    where we can connect to the client systems sitting offshore through RAS
    (Remote Access Server).
    Q) HOW DO YOU ANALYZE THE PROJECT AT FIRST?
    Prepare Project Plan and Environment
    Define Project Management Standards and Procedures
    Define Implementation Standards and Procedures
    Testing & Go-live + supporting.
    Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA AT THE SAME TIME TO ALL CUBES, AND ONE
    CUBE GOT A LOCK ERROR. HOW CAN YOU RECTIFY THE ERROR?
    Go to TCode SM66, see which process is locked, select that PID, then go to
    TCode SM12 and unlock it. This happens when lock errors occur while
    scheduling.
    Q) Can anybody tell me how to add a navigational attribute in the BEx report in
    the rows?
    A) Expand the dimension under the left side panel (that is, the InfoCube panel), select the
    navigational attribute, and drag and drop it under the Rows panel.
    Q) IS THERE ANY TRANSACTION CODE LIKE SMPT OR STMT?
    In current systems (BW 3.0B and R/3 4.6B) these Tcodes don't exist!
    Q) WHAT IS TRANSACTIONAL CUBE?
    A) Transactional InfoCubes differ from standard InfoCubes in that the former
    have an improved write access performance level. Standard InfoCubes are
    technically optimized for read-only access and for a comparatively small number
    of simultaneous accesses. Instead, the transactional InfoCube was developed to
    meet the demands of SAP Strategic Enterprise Management (SEM), meaning that,
    data is written to the InfoCube (possibly by several users at the same time)
    and re-read as soon as possible. Standard Basic cubes are not suitable for
    this.
    Q) Is there any way to delete cube contents within update rules from an ODS
    data source? The reason for this would be to delete (or zero out) a cube record
    in an "Open Order" cube if the open order quantity was 0.
    I've tried using the 0recordmode but that doesn't work. Also, would it
    be easier to write a program that would be run after the load and delete
    the records with a zero open qty?
    A) In the start routine of the update rules you can write ABAP code.
    A) Yes, you can do it. Create a start routine in the update rule.
    It is not "deleting cube contents with update rules"; it is only
    possible to avoid that some content is updated into the InfoCube, using the
    start routine. Loop at all the records and delete the records that match the
    condition "the open order quantity was 0". You also have to think about
    before and after images in the case of a delta upload: in that case you might
    delete the change record and keep the old one, leaving the wrong
    information after the change.
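    A sketch of that loop in the start routine (OPEN_QTY is an assumed field name in the data package):
    * Minimal sketch: drop zero-open-quantity records in the start routine.
    * OPEN_QTY is an assumed field name in the DATA_PACKAGE structure.
    LOOP AT data_package.
      IF data_package-open_qty = 0.
        DELETE data_package.   " deletes the current loop row
      ENDIF.
    ENDLOOP.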
    Q) I am not able to access a node in hierarchy directly using variables for
    reports. When I am using Tcode RSZV it is giving a message that it doesn't
    exist in BW 3.0 and it is embedded in BEx. Can any one tell me the other
    options to get the same functionality in BEx?
    A) Tcode RSZV is used in the earlier version of 3.0B only. From 3.0B onwards,
    it's possible in the Query Designer (BEx) itself. Just right-click on the
    InfoObject which you want to use as a variable and proceed further, selecting the
    variable type and proce

  • Useful Interview Questions and Answere

    Hi All
    Some of the Real time question.
    Q) Under which menu path is the Test Workbench to be found, including in earlier Releases?
    The menu path is: Tools - ABAP Workbench - Test - Test Workbench.
    Q) I want to delete a BEx query that is in Production system through request. Is anyone aware about it?
    A) Have you tried the RSZDELETE transaction?
    Q) Errors while monitoring process chains.
    A) During data loading. Apart from them, in process chains you add so many process types, for example after loading data into Info Cube, you rollup data into aggregates, now this rolling up of data into aggregates is a process type which you keep after the process type for loading data into Cube. This rolling up into aggregates might fail.
    Another one is after you load data into ODS, you activate ODS data (another process type) this might also fail.
    Q) In Monitor----- Details (Header/Status/Details) à Under Processing (data packet): Everything OK à Context menu of Data Package 1 (1 Records): Everything OK -
    Simulate update. (Here we can debug update rules or transfer rules.)
    SM50 à Program/Mode à Program à Debugging & debug this work process.
    Q) PSA Cleansing.
    A) You know how to edit PSA. I don't think you can delete single records. You have to delete entire PSA data for a request.
    Q) Can we make a datasource to support delta.
    A) If this is a custom (user-defined) datasource you can make the datasource delta enabled. While creating datasource from RSO2, after entering datasource name and pressing create, in the next screen there is one button at the top, which says generic delta. If you want more details about this there is a chapter in Extraction book, it's in last pages u find out.
    Generic delta services: -
    Supports delta extraction for generic extractors according to:
    Time stamp
    Calendar day
    Numeric pointer, such as document number & counter
    Only one of these attributes can be set as a delta attribute.
    Delta extraction is supported for all generic extractors, such as tables/views, SAP Query and function modules
    The delta queue (RSA7) allows you to monitor the current status of the delta attribute
    Q) Workbooks, as a general rule, should be transported with the role.
    Here are a couple of scenarios:
    1. If both the workbook and its role have been previously transported, then the role does not need to be part of the transport.
2. If the role exists in both dev and the target system but the workbook has never been transported, then you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, then an additional step will have to be taken after import: Locate the WorkbookID via Table RSRWBINDEXT (in Dev and verify the same exists in the target system) and proceed to manually add it to the role in the target system via Transaction Code PFCG -- ALWAYS use control c/control v copy/paste for manually adding!
    3. If the role does not exist in the target system you should transport both the role and workbook. Keep in mind that a workbook is an object unto itself and has no dependencies on other objects. Thus, you do not receive an error message from the transport of 'just a workbook' -- even though it may not be visible, it will exist (verified via Table RSRWBINDEXT).
    Overall, as a general rule, you should transport roles with workbooks.
Q) How much time does it take to extract 1 million (10 lakhs) records into an infocube?
A. It depends: if you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.
    Q) What are the five ASAP Methodologies?
A: Project Preparation, Business Blueprint, Realization, Final Preparation & Go-Live and Support.
    1. Project Preparation: In this phase, decision makers define clear project objectives and an efficient decision making process (i.e. Discussions with the client, like what are his needs and requirements etc.). Project managers will be involved in this phase (I guess).
    A Project Charter is issued and an implementation strategy is outlined in this phase.
    2. Business Blueprint: It is a detailed documentation of your company's requirements. (i.e. what are the objects we need to develop are modified depending on the client's requirements).
    3. Realization: In this only, the implementation of the project takes place (development of objects etc) and we are involved in the project from here only.
    4. Final Preparation: Final preparation before going live i.e. testing, conducting pre-go-live, end user training etc.
    End user training is given that is in the client site you train them how to work with the new environment, as they are new to the technology.
    5. Go-Live & support: The project has gone live and it is into production. The Project team will be supporting the end users.
Q) What is the landscape of R/3 & what is the landscape of BW? (Landscape of R/3: not sure.)
The landscape of BW: you have the development system, testing system and production system.
    Development system: All the implementation part is done in this sys. (I.e., Analysis of objects developing, modification etc) and from here the objects are transported to the testing system, but before transporting an initial test known as Unit testing (testing of objects) is done in the development sys.
    Testing/Quality system: quality check is done in this system and integration testing is done.
    Production system: All the extraction part takes place in this sys.
    Q) How do you measure the size of infocube?
    A: In no of records.
Q). Difference between infocube and ODS?
A: An infocube is structured as a star schema (extended) where a fact table is surrounded by different dimension tables that are linked with DIM ids. Data-wise, you will have aggregated data in the cubes. No overwrite functionality.
An ODS is a flat structure (flat table) with no star schema concept, and it holds granular data (detailed level). Overwrite functionality.
Flat file datasources do not support 0RECORDMODE in extraction.
0RECORDMODE values: 'X' = before image, ' ' (blank) = after image, 'N' = new, 'A' = additive, 'D' = delete, 'R' = reverse.
    Q) Difference between display attributes and navigational attributes?
A: A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) to drill down.
    Q. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
    A: But how is it possible? If you load it manually twice, then you can delete it by requestID.
    Q. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
    Sure you can. ODS is nothing but a table.
    Q. CAN NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
    A) Yes of course. For example, for loading text and hierarchies we use different data sources but the same InfoSource.
    Q. BRIEF THE DATAFLOW IN BW.
A) Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an infosource and update rules respectively.
    Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?
    Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
    FULL and DELTA.
    Q) AS WE USE Sbwnn, sbiw1, sbiw2 for delta update in LIS THEN WHAT IS THE PROCEDURE IN LO-COCKPIT?
    No LIS in LO cockpit. We will have datasources and can be maintained (append fields). Refer white paper on LO-Cockpit extractions.
Q) Why do we delete the setup tables (LBWG) & fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but when we change the extract structure we go for it. We are changing the extract structure, which means there are some newly added fields that were not there before. So to get the required data (i.e., only the data that is required, and to avoid redundancy) we delete and then refill the setup tables.
This refreshes the statistical data. The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK and VBAP) & fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one generates or modifies application data, at least until the tables can be set up.
    Q) SIGNIFICANCE of ODS?
    It holds granular data (detailed level).
    Q) WHERE THE PSA DATA IS STORED?
    In PSA table.
    Q) WHAT IS DATA SIZE?
    The volume of data one data target holds (in no. of records)
    Q) Different types of INFOCUBES.
Basic, Virtual (remote, SAP remote and multi).
A Virtual Cube is used, for example, where all the information has to be updated online, as with railway reservations. For designing a Virtual Cube you have to write a function module that links to the table; the Virtual Cube is like a structure, and whenever the table is updated the Virtual Cube fetches the data from the table and displays the report online. FYI, you can find more at https://www.sdn.sap.com/sdn/index.sdn: search for "Designing Virtual Cube" and you will get good material on designing the function module.
    Q) INFOSET QUERY.
    Can be made of ODS's and Characteristic InfoObjects with masterdata.
    Q) IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
    In R/3 or in BW? 2 in R/3 and 2 in BW
    Q) ROUTINES?
    Exist in the InfoObject, transfer routines, update routines and start routine
    Q) BRIEF SOME STRUCTURES USED IN BEX.
    Rows and Columns, you can create structures.
    Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
    Different Variable's are Texts, Formulas, Hierarchies, Hierarchy nodes & Characteristic values.
    Variable Types are
    Manual entry /default value
    Replacement path
    SAP exit
    Customer exit
    Authorization
    Q) HOW MANY LEVELS YOU CAN GO IN REPORTING?
    You can drill down to any level by using Navigational attributes and jump targets.
    Q) WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.
    Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
    Help! Refer documentation
    Q) IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
    No.
    Q) WHAT IS THE SIGNIFICANCE OF KPI'S?
    KPI's indicate the performance of a company. These are key figures
    Q) AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION.
    After image (correct me if I am wrong)
    Q) REPORTING AND RESTRICTIONS.
    Help! Refer documentation.
    Q) TOOLS USED FOR PERFORMANCE TUNING.
    ST22, Number ranges, delete indexes before load. Etc
Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA DAILY?
    There should be some tool to run the job daily (SM37 jobs)
    Q) AUTHORIZATIONS.
    Profile generator
    Q) WEB REPORTING.
    What are you expecting??
    Q) CAN CHARECTERSTIC INFOOBJECT CAN BE INFOPROVIDER.
    Of course
    Q) PROCEDURES OF REPORTING ON MULTICUBES
    Refer help. What are you expecting? MultiCube works on Union condition
Q) EXPLAIN TRANSPORTATION OF OBJECTS?
Dev → QA and Dev → Prod
    Q) What types of partitioning are there for BW?
    There are two Partitioning Performance aspects for BW (Cube & PSA)
A) Query Data Retrieval Performance Improvement:
    Partitioning by (say) Date Range improves data retrieval by making best use of database execution plans and indexes (of say Oracle database engine).
    B) Transactional Load Partitioning Improvement:
    Partitioning based on expected load volumes and data element sizes. Improves data loading into PSA and Cubes by infopackages (Eg. without timeouts).
    Q) How can I compare data in R/3 with data in a BW Cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records?
    A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the number of records extracted. Then go to BW Monitor to check the number of records in the PSA and check to see if it is the same & also in the monitor header tab.
    A) RSA3 is a simple extractor checker program that allows you to rule out extracts problems in R/3. It is simple to use, but only really tells you if the extractor works. Since records that get updated into Cubes/ODS structures are controlled by Update Rules, you will not be able to determine what is in the Cube compared to what is in the R/3 environment. You will need to compare records on a 1:1 basis against records in R/3 transactions for the functional area in question. I would recommend enlisting the help of the end user community to assist since they presumably know the data.
    To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute and you will see the record count, you can also go to display that data. You are not modifying anything so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you how many records should be expected in BW for a given load. You have that information in the monitor RSMO during and after data loads. From RSMO for a given load you can determine how many records were passed through the transfer rules from R/3, how many targets were updated, and how many records passed through the Update Rules. It also gives you error messages from the PSA.
    Q) Types of Transfer Rules?
    A) Field to Field mapping, Constant, Variable & routine.
    Q) Types of Update Rules?
    A) (Check box), Return table
Q) Transfer Routine?
A) Routines which we write in transfer rules.
Q) Update Routine?
A) Routines which we write in update rules.
    Q) What is the difference between writing a routine in transfer rules and writing a routine in update rules?
A) If you are using the same InfoSource to update data in more than one data target, it is better to write the routine in the transfer rules, because you can assign one InfoSource to more than one data target, whereas whatever logic you write in the update rules is specific to one particular data target.
    Q) Routine with Return Table.
    A) Update rules generally only have one return value. However, you can create a routine in the tab strip key figure calculation, by choosing checkbox Return table. The corresponding key figure routine then no longer has a return value, but a return table. You can then generate as many key figure values, as you like from one data record.
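eg. a sketch of such a routine (with Return table checked, the generated form has a RESULT_TABLE parameter instead of RESULT; COMM_STRUCTURE is the incoming record, and the field name AMOUNT is only a placeholder):
* Split one yearly amount into 12 monthly records:
DO 12 TIMES.
  MOVE-CORRESPONDING COMM_STRUCTURE TO RESULT_TABLE.
  RESULT_TABLE-amount = COMM_STRUCTURE-amount / 12.
* In practice you would also stamp the month characteristic here.
  APPEND RESULT_TABLE.
ENDDO.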
Q) Start routines?
A) Start routines can be written in both update rules and transfer rules. Suppose you want to restrict (delete) some records based on conditions before they get loaded into the data targets; you can specify this in the update rules start routine.
Ex: a DELETE on DATA_PACKAGE, which removes records based on the condition, as sketched below.
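Written out inside the generated start routine form, that is simply (a sketch; /BIC/ZSTATUS is a placeholder field name):
* DATA_PACKAGE holds the whole incoming package:
  DELETE DATA_PACKAGE WHERE /bic/zstatus = 'X'.
* Setting ABORT <> 0 would cancel the update process:
  ABORT = 0.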
    Q) X & Y Tables?
    X-table = A table to link material SIDs with SIDs for time-independent navigation attributes.
    Y-table = A table to link material SIDs with SIDS for time-dependent navigation attributes.
    There are four types of sid tables
    X time independent navigational attributes sid tables
    Y time dependent navigational attributes sid tables
    H hierarchy sid tables
    I hierarchy structure sid tables
    Q) Filters & Restricted Key figures (real time example)
Restricted KFs you can have for an SD cube: billed quantity, billing value and number of billing documents as RKFs.
    Q) Line-Item Dimension (give me an real time example)
    Line-Item Dimension: Invoice no: or Doc no: is a real time example
    Q) What does the number in the 'Total' column in Transaction RSA7 mean?
    A) The 'Total' column displays the number of LUWs that were written in the delta queue and that have not yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it has been transferred to the BW System and a new delta request has been received from the BW System.
Q) How to know which table (SAP BW) contains the technical name / description and creation date of a particular report (reports created using the BEx Analyzer)?
A) There is no single such table in BW; if you want to know such details, press the Properties button while you are opening a particular query and you will see all the details you wanted.
You will find the information about technical names and descriptions of queries in the following tables: Directory of all reports (Table RSRREPDIR) and Directory of the reporting component elements (Table RSZELTDIR). For workbooks and their connections to queries, check the Where-used list for reports in workbooks (Table RSRWORKBOOK) and Titles of Excel Workbooks in InfoCatalog (Table RSRWBINDEXT).
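eg. a quick report sketch for listing active workbook titles from the last table (field names as in RSRWBINDEXT; double-check them in SE11 first):
DATA: BEGIN OF lt_wb OCCURS 0,
        workbookid LIKE rsrwbindext-workbookid,
        title      LIKE rsrwbindext-title,
      END OF lt_wb.
* Active versions, logon language only:
SELECT workbookid title FROM rsrwbindext
  INTO CORRESPONDING FIELDS OF TABLE lt_wb
  WHERE objvers = 'A'
    AND langu   = sy-langu.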
    Q) What is a LUW in the delta queue?
    A) A LUW from the point of view of the delta queue can be an individual document, a group of documents from a collective run or a whole data packet of an application extractor.
    Q) Why does the number in the 'Total' column in the overview screen of Transaction RSA7 differ from the number of data records that is displayed when you call the detail view?
    A) The number on the overview screen corresponds to the total of LUWs (see also first question) that were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the records contained in the LUWs. Both, the records belonging to the previous delta request and the records that do not meet the selection conditions of the preceding delta init requests are filtered out. Thus, only the records that are ready for the next delta request are displayed on the detail screen. In the detail screen of Transaction RSA7, a possibly existing customer exit is not taken into account.
    Q) Why does Transaction RSA7 still display LUWs on the overview screen after successful delta loading?
    A) Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded to the BW System. Then, the LUWs of the previous delta may be confirmed (and also deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the number on the overview screen does not change when the first delta was loaded to the BW System.
    Q) Why are selections not taken into account when the delta queue is filled?
    A) Filtering according to selections takes place when the system reads from the delta queue. This is necessary for reasons of performance.
    Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has also been loaded successfully?
    It is most likely that this is a DataSource that does not send delta data to the BW System via the delta queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
    Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure from the delta queue?
    A) The impact is limited. If performance problems are related to the loading process from the delta queue, then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area and so on).
    Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as effective for the delta queue as for a full update. Please note, however, that LUWs are not split during data loading for consistency reasons. This means that when very large LUWs are written to the DeltaQueue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters.
    Q) Why does it take so long to display the data in the delta queue (for example approximately 2 hours)?
    A) With Plug In 2001.1 the display was changed: the user has the option of defining the amount of data to be displayed, to restrict it, to selectively choose the number of a data record, to make a distinction between the 'actual' delta data and the data intended for repetition and so on.
    Q) What is the purpose of function 'Delete data and meta data in a queue' in RSA7? What exactly is deleted?
    A) You should act with extreme caution when you use the deletion function in the delta queue. It is comparable to deleting an InitDelta in the BW System and should preferably be executed there. You do not only delete all data of this DataSource for the affected BW System, but also lose the entire information concerning the delta initialization. Then you can only request new deltas after another delta initialization.
    When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs.
    The deletion function is for example intended for a case where the BW System, from which the delta initialization was originally executed, no longer exists or can no longer be accessed.
    Q) Why does it take so long to delete from the delta queue (for example half a day)?
    A) Import PlugIn 2000.2 patch 3. With this patch the performance during deletion is considerably improved.
    Q) Why is the delta queue not updated when you start the V3 update in the logistics cockpit area?
    A) It is most likely that a delta initialization had not yet run or that the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW System) is a prerequisite for the application data being written in the delta queue.
    Q) What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. This is made up of the prefix 'BW', the client and the short name of the DataSource. For DataSources whose names are 19 characters long or shorter, the short name corresponds to the name of the DataSource. For DataSources whose names are longer than 19 characters (for delta-capable DataSources only possible as of PlugIn 2001.1) the short name is assigned in table ROOSSHORTN.
    In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover, the data of a LUW is displayed in an unstructured manner there.
Q) Why is there data in the delta queue although the V3 update was not started?
    A) Data was posted in background. Then, the records are updated directly in the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer of records to the BW system. See Note 417189.
    Q) Why does button 'Repeatable' on the RSA7 data details screen not only show data loaded into BW during the last delta but also data that were newly added, i.e. 'pure' delta records?
A) It was programmed so that a request in repeat mode fetches both the actually repeatable (old) data and the new data from the source system.
    Q) I loaded several delta inits with various selections. For which one is the delta loaded?
    A) For delta, all selections made via delta inits are summed up. This means, a delta for the 'total' of all delta initializations is loaded.
    Q) How many selections for delta inits are possible in the system?
    A) With simple selections (intervals without complicated join conditions or single values), you can make up to about 100 delta inits. It should not be more.
    With complicated selection conditions, it should be only up to 10-20 delta inits.
    Reason: With many selection conditions that are joined in a complicated way, too many 'where' lines are generated in the generated ABAP source code that may exceed the memory limit.
Q) I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?
    A) Before you copy a source client or source system, make sure that your deltas have been fetched from the DeltaQueue into BW and that no delta is pending. After the client copy, an inconsistency might occur between BW delta tables and the OLTP delta tables as described in Note 405943. After the client copy, Table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name that are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case since delta depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you should expect that the delta have to be initialized after the copy.
    Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of processes?
    A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing the BW Support or only if this is explicitly requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.
    Q) Despite of the delta request being started after completion of the collective run (V3 update), it does not contain all documents. Only another delta request loads the missing documents into BW. What is the cause for this "splitting"?
    A) The collective run submits the open V2 documents for processing to the task handler, which processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large "safety time window" between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution where this problem does not occur is described in Note 505700.
    Q) Despite my deleting the delta init, LUWs are still written into the DeltaQueue?
    A) In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place. Otherwise, buffer problems may occur: If a user started the internal mode at a time when the delta initialization was still active, he/she posts data into the queue even though the initialization had been deleted in the meantime. This is the case in your system.
    Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the field 'Status' mean what and which values are correct and which are alarming? Are the statuses BW-specific or generally valid in qRFC?
    A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was read once either in a delta request or in a repetition of the delta request. However, this does not mean that the record has successfully reached the BW yet. The status READY in the TRFCQOUT and RECORDED in the ARFCSSTATE means that the record has been written into the DeltaQueue and will be loaded into the BW with the next delta request or a repetition of a delta. In any case only the statuses READ, READY and RECORDED in both tables are considered to be valid. The status EXECUTED in TRFCQOUT can occur temporarily. It is set before starting a DeltaExtraction for all records with status READ present at that time. The records with status EXECUTED are usually deleted from the queue in packages within a delta request directly after setting the status before extracting a new delta. If you see such records, it means that either a process which is confirming and deleting records which have been loaded into the BW is successfully running at the moment, or, if the records remain in the table for a longer period of time with status EXECUTED, it is likely that there are problems with deleting the records which have already been successfully been loaded into the BW. In this state, no more deltas are loaded into the BW. Every other status is an indicator for an error or an inconsistency. NOSEND in SMQ1 means nothing (see note 378903).
    The value 'U' in field 'NOSEND' of table TRFCQOUT is discomforting.
    Q) The extract structure was changed when the DeltaQueue was empty. Afterwards new delta records were written to the DeltaQueue. When loading the delta into the PSA, it shows that some fields were moved. The same result occurs when the contents of the DeltaQueue are listed via the detail display. Why are the data displayed differently? What can be done?
    Make sure that the change of the extract structure is also reflected in the database and that all servers are synchronized. We recommend to reset the buffers using Transaction $SYNC. If the extract structure change is not communicated synchronously to the server where delta records are being created, the records are written with the old structure until the new structure has been generated. This may have disastrous consequences for the delta.
    When the problem occurs, the delta needs to be re-initialized.
    Q) How and where can I control whether a repeat delta is requested?
    A) Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat see Question 14. Delta requests set to red despite of data being already updated lead to duplicate records in a subsequent repeat, if they have not been deleted from the data targets concerned before.
    Q) As of PI 2003.1, the Logistic Cockpit offers various types of update methods. Which update method is recommended in logistics? According to which criteria should the decision be made? How can I choose an update method in logistics?
    See the recommendation in Note 505700.
    Q) Are there particular recommendations regarding the data volume the DeltaQueue may grow to without facing the danger of a read failure due to memory problems?
    A) There is no strict limit (except for the restricted number range of the 24-digit QCOUNT counter in the LUW management table - which is of no practical importance, however - or the restrictions regarding the volume and number of records in a database table).
    When estimating "smooth" limits, both the number of LUWs is important and the average data volume per LUW. As a rule, we recommend to bundle data (usually documents) already when writing to the DeltaQueue to keep number of LUWs small (partly this can be set in the applications, e.g. in the Logistics Cockpit). The data volume of a single LUW should not be considerably larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1GByte per work process, 100 Mbytes per LUW should not be exceeded). That limit is of rather small practical importance as well since a comparable limit already applies when writing to the DeltaQueue. If the limit is observed, correct reading is guaranteed in most cases.
    If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data are fetched from all connected BWs as quickly as possible. But for other, BW-specific, reasons, the frequency should not be higher than one DeltaRequest per hour.
    To avoid memory problems, a program-internal limit ensures that never more than 1 million LUWs are read and fetched from the database per DeltaRequest. If this limit is reached within a request, the DeltaQueue must be emptied by several successive DeltaRequests. We recommend, however, to try not to reach that limit but trigger the fetching of data from the connected BWs already when the number of LUWs reaches a 5-digit value.
    Q) I would like to display the date the data was uploaded on the report. Usually, we load the transactional data nightly. Is there any easy way to include this information on the report for users? So that they know the validity of the report.
    A) If I understand your requirement correctly, you want to display the date on which data was loaded into the data target from which the report is being executed. If it is so, configure your workbook to display the text elements in the report. This displays the relevance of data field, which is the date on which the data load has taken place.
    Q) Can we filter the fields at Transfer Structure?
Q) Can we load data directly into an infoobject without extraction? Is it possible?
Yes. We can copy from another infoobject if it is the same. We load data from the PSA if it is already in the PSA.
Q) HOW MANY DAYS CAN WE KEEP THE DATA IN PSA IF WE ARE SCHEDULED DAILY, WEEKLY AND MONTHLY?
    a) We can set the time.
Q) HOW CAN YOU GET THE DATA FROM THE CLIENT IF YOU ARE WORKING ON OFFSHORE PROJECTS? THROUGH WHICH NETWORK?
a) VPN: Virtual Private Network. VPN is a sort of network through which we can connect to the client systems sitting offshore, via RAS (Remote Access Server).
Q) HOW CAN YOU ANALYZE THE PROJECT AT FIRST?
Prepare Project Plan and Environment
Define Project Management Standards and Procedures
Define Implementation Standards and Procedures
Testing & Go-live + supporting.
Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA AT A TIME TO ALL CUBES; IF ONE CUBE GOT A LOCK ERROR, HOW CAN YOU RECTIFY THE ERROR?
Go to TCode SM66, see which one is locked, select that PID from there, then go to TCode SM12 and unlock it. This happens when lock errors occur during scheduling.
    Q) Can anybody tell me how to add a navigational attribute in the BEx report in the rows?
A) Expand the dimension under the left side panel (that is, the infocube panel), select the navigational attribute, then drag and drop it under the rows panel.
Q) IS THERE ANY TRANSACTION CODE LIKE SMPT OR STMT?
    In current systems (BW 3.0B and R/3 4.6B) these Tcodes don't exist!
    Q) WHAT IS TRANSACTIONAL CUBE?
    A) Transactional InfoCubes differ from standard InfoCubes in that the former have an improved write access performance level. Standard InfoCubes are technically optimized for read-only access and for a comparatively small number of simultaneous accesses. Instead, the transactional InfoCube was developed to meet the demands of SAP Strategic Enterprise Management (SEM), meaning that, data is written to the InfoCube (possibly by several users at the same time) and re-read as soon as possible. Standard Basic cubes are not suitable for this.
    Q) Is there any way to delete cube contents within update rules from an ODS data source? The reason for this would be to delete (or zero out) a cube record in an "Open Order" cube if the open order quantity was 0.
    I've tried using the 0recordmode but that doesn't work. Also, would it
    be easier to write a program that would be run after the load and delete
    the records with a zero open qty?
A) You can write ABAP code in the start routine of the update rules.
A) Yes, you can do it: create a start routine in the update rules.
It is not really "deleting cube contents with update rules". It is only possible to avoid that some content is updated into the InfoCube by using the start routine: loop at all the records and delete each record that meets the condition "the open order quantity was 0". You also have to think about before and after images in the case of a delta upload; otherwise you may delete the change record, keep the old one, and be left with the wrong information after the change.
    Q) I am not able to access a node in hierarchy directly using variables for reports. When I am using Tcode RSZV it is giving a message that it doesn't exist in BW 3.0 and it is embedded in BEx. Can any one tell me the other options to get the same functionality in BEx?
    A) Tcode RSZV is used in the earlier version of 3.0B only. From 3.0B onwards, it's possible in the Query Designer (BEx) itself. Just right click on the InfoObject for which you want to use as variables and precede further selecting variable type and processing types.
    Q) Wondering how can I get the values, for an example, if I run a report for month range 01/2004 - 10/2004 then monthly value is actually divide by the number of months that I selected. Which variable should I use?
    Q) Why is it every time I switch from Info Provider to InfoObject or from one item to another while in modeling I always get this message " Reading Data " or "constructing workbench" in it runs for minutes.... anyway to stop this?
    Q) Can any one give me info on how the BW delta works also would like to know about 'before image and after image' am currently in a BW project and have to write start routines for delta load.
    Q) I am very new to BW. I would like to clarify a doubt regarding Delta extractor. If I am correct, by using delta extractors the data that has already been scheduled will not be uploaded again. Say for a specific scenario, Sales. Now I have uploaded all the sales order created till yesterday into the cube. Now say I make changes to any of the open record, which was already uploaded. Now what happens when I schedule it again? Will the same record be uploaded again with the changes or will the changes get affected to the previous record.
    Q) In BW we need to write abap routines. I wish to know when and what type of abap routines we got to write. Also, are these routines written in update rules? I will be glad, if this is clarified with real-time scenarios and few examples?
A) Over here we write our routines in the start routines of the update rules or in the transfer structure (you can choose between writing them in the start routine or directly behind the different characteristics). In the transfer structure you just click on the yellow triangle behind a characteristic and choose "routine". In the update rules you can choose "start routine" or click on the triangle with the green square behind an individual characteristic. Usually we only use a start routine when it does not concern one single characteristic (for example when you have to read the same table for 4 characteristics). I hope this helps.
    We used ABAP Routines for example:
    To convert to Uppercase (transfer structure)
    To convert Values out of a third party tool with different keys into the same keys as our SAP System uses (transfer structure)
To select only a part of the data from an InfoSource when updating the InfoCube (start routine), etc.
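eg. the uppercase conversion as a transfer routine (a sketch; in the BW 3.x routine, TRAN_STRUCTURE is the incoming transfer structure record and RESULT is the value returned for the InfoObject; MYFIELD is a placeholder field name):
* Return the source field in upper case:
  RESULT = TRAN_STRUCTURE-myfield.
  TRANSLATE RESULT TO UPPER CASE.
* RETURNCODE <> 0 would skip this record:
  RETURNCODE = 0.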
    Q) What is ODS?
    A) An ODS object acts as a storage location for consolidated and cleaned-up transaction data (transaction data or master data, for example) on the document (atomic) level.
    This data can be evaluated using a BEx query.
    Standard ODS Object
    Transactional ODS object:
    The data is immediately available here for reporting. For implementation, compare with the Transactional ODS Object.
    A transactional ODS object differs from a standard ODS object in the way it prepares data. In a standard ODS object, data is stored in different versions ((new) delta, active, (change log) modified), where as a transactional ODS object contains the data in a single version. Therefore, data is stored in precisely the same form in which it was written to the transactional ODS object by the application. In BW, you can use a transaction ODS object as a data target for an analysis process.
    The transactional ODS object is also required by diverse applications, such as SAP Strategic Enterprise Management (SEM) for example, as well as other external applications.
    Transactional ODS objects allow data to be available quickly. The data from this kind of ODS object is accessed transactionally, that is, data is written to the ODS object (possibly by several users at the same time) and reread as soon as possible.
It is not a replacement for the standard ODS object, but an additional function that can be used for special applications.
    The transactional ODS object simply consists of a table for active data. It retrieves its data from external systems via fill- or delete- APIs. The loading process is not supported by the BW system. The advantage to the way it is structured is that data is easy to access. They are made available for reporting immediately after being loaded.
    Q) What does InfoCube contains?
A) Each InfoCube has one fact table & a maximum of 16 dimensions (13 user-defined + 3 system-defined: time, unit & data packet).
    Q) What does FACT Table contain?
    A FactTable consists of KeyFigures.
    Each Fact Table can contain a maximum of 233 key figures.
    Dimension can contain up to 248 freely available characteristics.
    Q) How many dimensions are in a CUBE?
    A) 16 dimensions. (13 user defined & 3 system pre-defined )
    Q) What does SID Table contain?
    SID keys linked with dimension table & master data tables (attributes, texts, hierarchies)
    Q) What does ATTRIBUTE Table contain?
    Master attribute data
    Q) What does TEXT Table contain?
    Master text data, short text, long text, medium text & language key if it is language dependent
Q) What does Hierarchy table contain?
Hierarchy data: the hierarchy nodes and their parent-child relationships.

    Hi DST
    This is a great effort and gesture. thank you on behalf of all the newbies.
    PJ

  • Tethering import (Auto) forces change to DNG. Can't find setting to change

    I am trying to set up auto importing by tethered shooting with a Canon 1Ds (but don't think brand and camera matter.)
I never use DNG, and after a lot of searching I cannot find anywhere in the LR4.3 preferences where I might have specified DNG conversion. In the Auto-Import dialog there is a File Handling tab that offers one choice: how to spell the DNG suffix. The files I want to import will come in as CR2, and testing this with other folder destinations and with Canon's EOS Utility proves that the camera is "sending" CR2 files.
    This is making me crazy, I cannot find a dialog to control this or a Pref that is forcing this behavior.
Thanks, in advance, for help. Training my assistant in this process in a couple of hours, so I need to resolve what's happening soon. The shoot for this is in a couple of days.
    jonathan7007

    No problem jonathan,
    As I said, I do a lot of tethered shooting so this lands squarely in my wheelhouse. Part of my job involves busy days shooting thousands of frames of child models... Many products and variations so I feel your pain on keeping things organized on the fly. Here's a few notes:
    First one to clarify -- when you use auto-import, Lightroom will import files as they are added to the watched folder and move them to your specified destination. They do not remain in the watched folder. After import, the watched folder is empty. It sounds like you think the process results in two copies of the file.
    Next, I'm curious to hear that you think native tethering is more stable than EOS Utility. It may be your system - I use Macs - but I find that EOS Utility works great while native tethering is good but has issues recovering from dropped connections. On the mac I tell the computer to launch EOS Utility whenever the camera is connected and the process is almost seamless.
    Another benefit to auto-import for the type of shooting you'll be doing is that you have a quick fallback if the USB tethering fails (and it likely will at the worst time). Simply pull the USB cable, swap in a blank memory card (to avoid redundant files) and shoot to card. Use a card reader to copy new images to the watched folder and they'll import just like the rest. No need to configure a new import dialogue with file naming, destination, etc. You already did that in the auto import prefs. It may not sound like a big deal, but in practice it's a huge way to avoid friction when things get crazed.
    But don't blindly take my word on all that if it's contrary to your experience -- do whatever seems to be most intuitive and/or reliable with your setup.
    We don't do any keywording where I work, but we do use metadata presets to populate product info and workflow related data on import so I can't speak much to that aspect of your process, but getting whatever info you can into the tools provided by the auto-import prefs is beneficial. Better to have it applied at capture. If you haven't read Peter Krogh's DAM Book, it's a great resource.
    Finally, I recently made some notes on the nuts and bolts of tethering in this post.
    Probably not perfect, but it works for me.
    best of luck,
    Andy

  • FAQS FOR SAP BW

    HI,
Friends, I need some FAQs for the interviews.
    thanks and regards
    shafeeq ahmed

Hi Shafeeq,
    Here are some questions and answers.
1) Name the two tables that provide detail information about a data source.
2) How and when can you control whether a repeat delta is requested?
3) How can you improve the performance of a query?
4) How do you prevent duplicate records at the data target level?
5) What is a virtual cube? What is its significance?
6) What are the different methods of a generic datasource?
7) How do you connect a new data target to an existing data flow?
8) What is a partition?
9) SAP batch process
10) How do you improve InfoCube design performance?
12) Is there any difference between a repair run and a repair request? If yes, please tell me in detail.
13) What is the difference between a process chain and an infopackage group?
What is the difference between a partition and an aggregate?
ANS →
    1)   Hi Santosh
    Please find some of the answers here...
For Q 3) Query performance can be improved by building aggregates that contain all the characteristics & key figures used in the query.
    Q 5) Virtual Cube : InfoProvider with transaction data that is not stored in the object itself, but which is read directly for analysis and reporting purposes. The relevant data can be from the BI system or from other SAP or non-SAP systems.
    VirtualProviders only allow read access to data.
    Q 6) Diff Methods of Generic datasource using Transaction RSO2 :
    a) Extraction from DB Table or View
    b) Extraction from SAP Query
    c) Extraction by Function Module
    2)  Important BW datasource relevant tables
    ROOSOURCE: Table Header for SAP BW OLTP Sources
    RODELTAM: BW Delta Process
    ROOSFIELD: DataSource Fields
    ROOSGEN: Generated Objects for OLTP Source, Last changed date and who etc.
3)     For Q 8) I think you mean table partition.
You use partitioning to improve performance. You can only partition on 0CALMONTH or 0FISCPER.
4)     For Q 1: ROOSOURCE.
For Q 6: Generic extraction using 1. Views, 2. InfoSet Queries, 3. Function Modules.
5)     Hi Santosh,
Please note down the Q&A: the same set of real-time questions and answers is already posted in the "Useful Interview Questions and Answers" thread above.
    Some of the Real time question.
    Q) Under which menu path is the Test Workbench to be found, including in earlier Releases?
    The menu path is: Tools - ABAP Workbench - Test - Test Workbench.
    Q) I want to delete a BEx query that is in Production system through request. Is anyone aware about it?
    A) Have you tried the RSZDELETE transaction?
    Q) Errors while monitoring process chains.
    A) During data loading. Apart from them, in process chains you add so many process types, for example after loading data into Info Cube, you rollup data into aggregates, now this rolling up of data into aggregates is a process type which you keep after the process type for loading data into Cube. This rolling up into aggregates might fail.
    Another one is after you load data into ODS, you activate ODS data (another process type) this might also fail.
    Q) In Monitor----- Details (Header/Status/Details) à Under Processing (data packet): Everything OK à Context menu of Data Package 1 (1 Records): Everything OK -
    Simulate update. (Here we can debug update rules or transfer rules.)
    SM50 à Program/Mode à Program à Debugging & debug this work process.
    Q) PSA Cleansing.
    A) You know how to edit PSA. I don't think you can delete single records. You have to delete entire PSA data for a request.
    Q) Can we make a datasource to support delta.
    A) If this is a custom (user-defined) datasource you can make the datasource delta enabled. While creating datasource from RSO2, after entering datasource name and pressing create, in the next screen there is one button at the top, which says generic delta. If you want more details about this there is a chapter in Extraction book, it's in last pages u find out.
    Generic delta services: -
    Supports delta extraction for generic extractors according to:
    Time stamp
    Calendar day
    Numeric pointer, such as document number & counter
    Only one of these attributes can be set as a delta attribute.
    Delta extraction is supported for all generic extractors, such as tables/views, SAP Query and function modules
    The delta queue (RSA7) allows you to monitor the current status of the delta attribute
    Q) Workbooks, as a general rule, should be transported with the role.
    Here are a couple of scenarios:
    1. If both the workbook and its role have been previously transported, then the role does not need to be part of the transport.
    2. If the role exists in both dev and the target system but the workbook has never been transported, and then you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, then an additional step will have to be taken after import: Locate the WorkbookID via Table RSRWBINDEXT (in Dev and verify the same exists in the target system) and proceed to manually add it to the role in the target system via Transaction Code PFCG -- ALWAYS use control c/control v copy/paste for manually adding!
    3. If the role does not exist in the target system you should transport both the role and workbook. Keep in mind that a workbook is an object unto itself and has no dependencies on other objects. Thus, you do not receive an error message from the transport of 'just a workbook' -- even though it may not be visible, it will exist (verified via Table RSRWBINDEXT).
    Overall, as a general rule, you should transport roles with workbooks.
    Q) How much time does it take to extract 1 million (10 lackhs) of records into an infocube?
    A. This depends, if you have complex coding in update rules it will take longer time, or else it will take less than 30 minutes.
    Q) What are the five ASAP Methodologies?
    A: Project plan, Business Blue print, Realization, Final preparation & Go-Live - support.
    1. Project Preparation: In this phase, decision makers define clear project objectives and an efficient decision making process (i.e. Discussions with the client, like what are his needs and requirements etc.). Project managers will be involved in this phase (I guess).
    A Project Charter is issued and an implementation strategy is outlined in this phase.
    2. Business Blueprint: It is a detailed documentation of your company's requirements. (i.e. what are the objects we need to develop are modified depending on the client's requirements).
    3. Realization: In this only, the implementation of the project takes place (development of objects etc) and we are involved in the project from here only.
    4. Final Preparation: Final preparation before going live i.e. testing, conducting pre-go-live, end user training etc.
    End user training is given that is in the client site you train them how to work with the new environment, as they are new to the technology.
    5. Go-Live & support: The project has gone live and it is into production. The Project team will be supporting the end users.
    Q) What is landscape of R/3 & what is landscape of BW. Landscape of R/3 not sure.
    Then Landscape of b/w: u have the development system, testing system, production system
    Development system: All the implementation part is done in this sys. (I.e., Analysis of objects developing, modification etc) and from here the objects are transported to the testing system, but before transporting an initial test known as Unit testing (testing of objects) is done in the development sys.
    Testing/Quality system: quality check is done in this system and integration testing is done.
    Production system: All the extraction part takes place in this sys.
    Q) How do you measure the size of infocube?
    A: In no of records.
    Q). Difference between infocube and ODS?
    A: Infocube is structured as star schema (extended) where a fact table is surrounded by different dim table that are linked with DIM'ids. And the data wise, you will have aggregated data in the cubes. No overwrite functionality
    ODS is a flat structure (flat table) with no star schema concept and which will have granular data (detailed level). Overwrite functionality.
    Flat file datasources does not support 0recordmode in extraction.
    x before, -after, n new, a add, d delete, r reverse
    Q) Difference between display attributes and navigational attributes?
    A: Display attribute is one, which is used only for display purpose in the report. Where as navigational attribute is used for drilling down in the report. We don't need to maintain Navigational attribute in the cube as a characteristic (that is the advantage) to drill down.
    Q. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
    A: But how is it possible? If you load it manually twice, then you can delete it by requestID.
    Q. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
    Sure you can. ODS is nothing but a table.
    Q. CAN NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
    A) Yes of course. For example, for loading text and hierarchies we use different data sources but the same InfoSource.
    Q. BRIEF THE DATAFLOW IN BW.
    A) Data flows from transactional system to analytical system (BW). DataSources on the transactional system needs to be replicated on BW side and attached to infosource and update rules respectively.
    Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?
    Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
    FULL and DELTA.
    Q) AS WE USE Sbwnn, sbiw1, sbiw2 for delta update in LIS THEN WHAT IS THE PROCEDURE IN LO-COCKPIT?
    No LIS in LO cockpit. We will have datasources and can be maintained (append fields). Refer white paper on LO-Cockpit extractions.
    Q) Why we delete the setup tables (LBWG) & fill them (OLI*BW)?
    A) Initially we don't delete the setup tables but when we do change in extract structure we go for it. We r changing the extract structure right, that means there are some newly added fields in that which r not before. So to get the required data (i.e.; the data which is required is taken and to avoid redundancy) we delete n then fill the setup tables.
    To refresh the statistical data. The extraction set up reads the dataset that you want to process such as, customers orders with the tables like VBAK, VBAP) & fills the relevant communication structure with the data. The data is stored in cluster tables from where it is read when the initialization is run. It is important that during initialization phase, no one generates or modifies application data, at least until the tables can be set up.
    Q) SIGNIFICANCE of ODS?
    It holds granular data (detailed level).
    Q) WHERE THE PSA DATA IS STORED?
    In PSA table.
    Q) WHAT IS DATA SIZE?
    The volume of data one data target holds (in no. of records)
    Q) Different types of INFOCUBES.
    Basic, Virtual (remote, sap remote and multi)
    Virtual Cube is used for example, if you consider railways reservation all the information has to be updated online. For designing the Virtual cube you have to write the function module that is linking to table, Virtual cube it is like a the structure, when ever the table is updated the virtual cube will fetch the data from table and display report Online... FYI.. you will get the information : https://www.sdn.sap.com/sdn/index.sdn and search for Designing Virtual Cube and you will get a good material designing the Function Module
    Q) INFOSET QUERY.
    Can be made of ODS's and Characteristic InfoObjects with masterdata.
    Q) IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
    In R/3 or in BW? 2 in R/3 and 2 in BW
    Q) ROUTINES?
    Exist in the InfoObject, transfer routines, update routines and start routine
    Q) BRIEF SOME STRUCTURES USED IN BEX.
    Rows and Columns, you can create structures.
    Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
    Different Variable's are Texts, Formulas, Hierarchies, Hierarchy nodes & Characteristic values.
    Variable Types are
    Manual entry /default value
    Replacement path
    SAP exit
    Customer exit
    Authorization
    Q) HOW MANY LEVELS YOU CAN GO IN REPORTING?
    You can drill down to any level by using Navigational attributes and jump targets.
    Q) WHAT ARE INDEXES?
    Indexes are data base indexes, which help in retrieving data fastly.
    Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
    Help! Refer documentation
    Q) IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
    No.
    Q) WHAT IS THE SIGNIFICANCE OF KPI'S?
    KPI's indicate the performance of a company. These are key figures
    Q) AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION.
    After image (correct me if I am wrong)
    Q) REPORTING AND RESTRICTIONS.
    Help! Refer documentation.
    Q) TOOLS USED FOR PERFORMANCE TUNING.
    ST22, Number ranges, delete indexes before load. Etc
    Q) PROCESS CHAINS: IF U has USED IT THEN HOW WILL U SCHEDULING DATA DAILY.
    There should be some tool to run the job daily (SM37 jobs)
    Q) AUTHORIZATIONS.
    Profile generator
    Q) WEB REPORTING.
    What are you expecting??
    Q) CAN CHARECTERSTIC INFOOBJECT CAN BE INFOPROVIDER.
    Of course
    Q) PROCEDURES OF REPORTING ON MULTICUBES
    Refer help. What are you expecting? MultiCube works on Union condition
    Q) EXPLAIN TRANPSORTATION OF OBJECTS?
    Dev-àQ and Dev-----àP
    Q) What types of partitioning are there for BW?
    There are two Partitioning Performance aspects for BW (Cube & PSA)
    Query Data Retrieval Performance Improvement:
    Partitioning by (say) Date Range improves data retrieval by making best use of database execution plans and indexes (of say Oracle database engine).
    B) Transactional Load Partitioning Improvement:
    Partitioning based on expected load volumes and data element sizes. Improves data loading into PSA and Cubes by infopackages (Eg. without timeouts).
    Q) How can I compare data in R/3 with data in a BW Cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records?
    A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the number of records extracted. Then go to BW Monitor to check the number of records in the PSA and check to see if it is the same & also in the monitor header tab.
    A) RSA3 is a simple extractor checker program that allows you to rule out extracts problems in R/3. It is simple to use, but only really tells you if the extractor works. Since records that get updated into Cubes/ODS structures are controlled by Update Rules, you will not be able to determine what is in the Cube compared to what is in the R/3 environment. You will need to compare records on a 1:1 basis against records in R/3 transactions for the functional area in question. I would recommend enlisting the help of the end user community to assist since they presumably know the data.
    To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute and you will see the record count, you can also go to display that data. You are not modifying anything so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you how many records should be expected in BW for a given load. You have that information in the monitor RSMO during and after data loads. From RSMO for a given load you can determine how many records were passed through the transfer rules from R/3, how many targets were updated, and how many records passed through the Update Rules. It also gives you error messages from the PSA.
    Q) Types of Transfer Rules?
    A) Field to Field mapping, Constant, Variable & routine.
    Q) Types of Update Rules?
A) In the key figure calculation you can use the standard update (checkbox) or a routine, optionally with a return table.
    Q) Transfer Routine?
A) Routines that we write in transfer rules.
    Q) Update Routine?
A) Routines that we write in update rules.
    Q) What is the difference between writing a routine in transfer rules and writing a routine in update rules?
A) If you are using the same InfoSource to update data in more than one data target, it is better to write the routine in the transfer rules, because one InfoSource can be assigned to more than one data target, whereas whatever logic you write in the update rules is specific to one particular data target.
    Q) Routine with Return Table.
A) Update rules generally only have one return value. However, you can create a routine in the tab strip key figure calculation by choosing the checkbox Return table. The corresponding key figure routine then no longer has a return value, but a return table. You can then generate as many key figure values as you like from one data record.
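To make this concrete, here is a minimal sketch of what the body of such a routine could look like in a BW 3.x update rule with the Return table checkbox set. The field names (CALMONTH, AMOUNT) and the monthly-split logic are illustrative assumptions; the exact generated FORM interface (RESULT_TABLE, COMM_STRUCTURE, RETURNCODE) depends on your InfoCube and release.
* Minimal sketch: split one incoming record into 12 monthly records.
* RESULT_TABLE and COMM_STRUCTURE come from the generated FORM interface.
DATA: ls_result LIKE LINE OF result_table.
DO 12 TIMES.
  CLEAR ls_result.
  ls_result-calmonth = sy-index.                   " hypothetical field
  ls_result-amount   = comm_structure-amount / 12. " hypothetical field
  APPEND ls_result TO result_table.
ENDDO.
returncode = 0.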
    Q) Start routines?
A) Start routines can be written in both update rules and transfer rules. Suppose you want to restrict (delete) some records based on conditions before they get loaded into the data targets; you can specify this in the update rules start routine.
Ex: DELETE DATA_PACKAGE, that is, it will delete records based on the condition.
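As a minimal sketch, assuming a status field /BIC/ZSTATUS in the data package (both the field name and the value 'X' are illustrative), the whole start routine body can be a single statement:
* Drop flagged records before they reach the data target.
DELETE DATA_PACKAGE WHERE /bic/zstatus = 'X'.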
    Q) X & Y Tables?
    X-table = A table to link material SIDs with SIDs for time-independent navigation attributes.
    Y-table = A table to link material SIDs with SIDS for time-dependent navigation attributes.
    There are four types of sid tables
    X time independent navigational attributes sid tables
    Y time dependent navigational attributes sid tables
    H hierarchy sid tables
    I hierarchy structure sid tables
    Q) Filters & Restricted Key figures (real time example)
Restricted KFs you can have for an SD cube: billed quantity, billing value and number of billing documents as RKFs.
    Q) Line-Item Dimension (give me an real time example)
    Line-Item Dimension: Invoice no: or Doc no: is a real time example
    Q) What does the number in the 'Total' column in Transaction RSA7 mean?
    A) The 'Total' column displays the number of LUWs that were written in the delta queue and that have not yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it has been transferred to the BW System and a new delta request has been received from the BW System.
Q) How do I know which table (in SAP BW) contains the technical name, description and creation date of a particular report (reports created using the BEx Analyzer)?
A) You do not need such a table if you only want these details: while opening a particular query, press the Properties button and you will see all the details you wanted.
You will find the information about technical names and descriptions of queries in the following tables: Directory of all reports (table RSRREPDIR) and Directory of the reporting component elements (table RSZELTDIR); for workbooks and their connections to queries, check the Where-used list for reports in workbooks (table RSRWORKBOOK) and Titles of Excel Workbooks in InfoCatalog (table RSRWBINDEXT).
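For illustration, a small ABAP sketch that reads the report directory; the InfoProvider name ZSALES is a hypothetical placeholder, and the field names (INFOCUBE, OBJVERS, COMPID) should be verified in SE11 on your release:
* List the technical names of the active queries on one InfoProvider.
DATA: lt_rep TYPE STANDARD TABLE OF rsrrepdir,
      ls_rep TYPE rsrrepdir.
SELECT * FROM rsrrepdir INTO TABLE lt_rep
  WHERE infocube = 'ZSALES'          " hypothetical InfoProvider
    AND objvers  = 'A'.              " active version only
LOOP AT lt_rep INTO ls_rep.
  WRITE: / ls_rep-compid.            " technical name of the query
ENDLOOP.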
    Q) What is a LUW in the delta queue?
    A) A LUW from the point of view of the delta queue can be an individual document, a group of documents from a collective run or a whole data packet of an application extractor.
    Q) Why does the number in the 'Total' column in the overview screen of Transaction RSA7 differ from the number of data records that is displayed when you call the detail view?
    A) The number on the overview screen corresponds to the total of LUWs (see also first question) that were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the records contained in the LUWs. Both, the records belonging to the previous delta request and the records that do not meet the selection conditions of the preceding delta init requests are filtered out. Thus, only the records that are ready for the next delta request are displayed on the detail screen. In the detail screen of Transaction RSA7, a possibly existing customer exit is not taken into account.
    Q) Why does Transaction RSA7 still display LUWs on the overview screen after successful delta loading?
    A) Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded to the BW System. Then, the LUWs of the previous delta may be confirmed (and also deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the number on the overview screen does not change when the first delta was loaded to the BW System.
    Q) Why are selections not taken into account when the delta queue is filled?
    A) Filtering according to selections takes place when the system reads from the delta queue. This is necessary for reasons of performance.
    Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has also been loaded successfully?
    It is most likely that this is a DataSource that does not send delta data to the BW System via the delta queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
    Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure from the delta queue?
    A) The impact is limited. If performance problems are related to the loading process from the delta queue, then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area and so on).
    Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as effective for the delta queue as for a full update. Please note, however, that LUWs are not split during data loading for consistency reasons. This means that when very large LUWs are written to the DeltaQueue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters.
    Q) Why does it take so long to display the data in the delta queue (for example approximately 2 hours)?
    A) With Plug In 2001.1 the display was changed: the user has the option of defining the amount of data to be displayed, to restrict it, to selectively choose the number of a data record, to make a distinction between the 'actual' delta data and the data intended for repetition and so on.
    Q) What is the purpose of function 'Delete data and meta data in a queue' in RSA7? What exactly is deleted?
    A) You should act with extreme caution when you use the deletion function in the delta queue. It is comparable to deleting an InitDelta in the BW System and should preferably be executed there. You do not only delete all data of this DataSource for the affected BW System, but also lose the entire information concerning the delta initialization. Then you can only request new deltas after another delta initialization.
    When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs.
    The deletion function is for example intended for a case where the BW System, from which the delta initialization was originally executed, no longer exists or can no longer be accessed.
    Q) Why does it take so long to delete from the delta queue (for example half a day)?
    A) Import PlugIn 2000.2 patch 3. With this patch the performance during deletion is considerably improved.
    Q) Why is the delta queue not updated when you start the V3 update in the logistics cockpit area?
    A) It is most likely that a delta initialization had not yet run or that the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW System) is a prerequisite for the application data being written in the delta queue.
    Q) What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. This is made up of the prefix 'BW', the client and the short name of the DataSource. For DataSources whose names are 19 characters long or shorter, the short name corresponds to the name of the DataSource. For DataSources whose names are longer than 19 characters (for delta-capable DataSources this is only possible as of PlugIn 2001.1) the short name is assigned in table ROOSSHORTN.
    In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover, the data of a LUW is displayed in an unstructured manner there.
    Q) Why are the data in the delta queue although the V3 update was not started?
    A) Data was posted in background. Then, the records are updated directly in the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer of records to the BW system. See Note 417189.
    Q) Why does button 'Repeatable' on the RSA7 data details screen not only show data loaded into BW during the last delta but also data that were newly added, i.e. 'pure' delta records?
A) It was programmed in a way that the request in repeat mode fetches both the actually repeatable (old) data and the new data from the source system.
    Q) I loaded several delta inits with various selections. For which one is the delta loaded?
    A) For delta, all selections made via delta inits are summed up. This means, a delta for the 'total' of all delta initializations is loaded.
    Q) How many selections for delta inits are possible in the system?
    A) With simple selections (intervals without complicated join conditions or single values), you can make up to about 100 delta inits. It should not be more.
    With complicated selection conditions, it should be only up to 10-20 delta inits.
    Reason: With many selection conditions that are joined in a complicated way, too many 'where' lines are generated in the generated ABAP source code that may exceed the memory limit.
Q) I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas have been fetched from the DeltaQueue into BW and that no delta is pending. After the client copy, an inconsistency might occur between BW delta tables and the OLTP delta tables as described in Note 405943. After the client copy, table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name that are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case since the delta depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you should expect that the delta has to be initialized after the copy.
    Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of processes?
    A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing the BW Support or only if this is explicitly requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.
    Q) Despite of the delta request being started after completion of the collective run (V3 update), it does not contain all documents. Only another delta request loads the missing documents into BW. What is the cause for this "splitting"?
    A) The collective run submits the open V2 documents for processing to the task handler, which processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large "safety time window" between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution where this problem does not occur is described in Note 505700.
    Q) Despite my deleting the delta init, LUWs are still written into the DeltaQueue?
    A) In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place. Otherwise, buffer problems may occur: If a user started the internal mode at a time when the delta initialization was still active, he/she posts data into the queue even though the initialization had been deleted in the meantime. This is the case in your system.
    Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the field 'Status' mean what and which values are correct and which are alarming? Are the statuses BW-specific or generally valid in qRFC?
    A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was read once either in a delta request or in a repetition of the delta request. However, this does not mean that the record has successfully reached the BW yet. The status READY in the TRFCQOUT and RECORDED in the ARFCSSTATE means that the record has been written into the DeltaQueue and will be loaded into the BW with the next delta request or a repetition of a delta. In any case only the statuses READ, READY and RECORDED in both tables are considered to be valid. The status EXECUTED in TRFCQOUT can occur temporarily. It is set before starting a DeltaExtraction for all records with status READ present at that time. The records with status EXECUTED are usually deleted from the queue in packages within a delta request directly after setting the status before extracting a new delta. If you see such records, it means that either a process which is confirming and deleting records which have been loaded into the BW is successfully running at the moment, or, if the records remain in the table for a longer period of time with status EXECUTED, it is likely that there are problems with deleting the records which have already been successfully been loaded into the BW. In this state, no more deltas are loaded into the BW. Every other status is an indicator for an error or an inconsistency. NOSEND in SMQ1 means nothing (see note 378903).
Only the value 'U' in field 'NOSEND' of table TRFCQOUT is a cause for concern.
    Q) The extract structure was changed when the DeltaQueue was empty. Afterwards new delta records were written to the DeltaQueue. When loading the delta into the PSA, it shows that some fields were moved. The same result occurs when the contents of the DeltaQueue are listed via the detail display. Why are the data displayed differently? What can be done?
Make sure that the change of the extract structure is also reflected in the database and that all servers are synchronized. We recommend resetting the buffers using Transaction $SYNC. If the extract structure change is not communicated synchronously to the server where delta records are being created, the records are written with the old structure until the new structure has been generated. This may have disastrous consequences for the delta.
    When the problem occurs, the delta needs to be re-initialized.
    Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat see Question 14. Delta requests set to red despite the data already being updated lead to duplicate records in a subsequent repeat if they have not been deleted from the data targets concerned beforehand.
    Q) As of PI 2003.1, the Logistic Cockpit offers various types of update methods. Which update method is recommended in logistics? According to which criteria should the decision be made? How can I choose an update method in logistics?
    See the recommendation in Note 505700.
    Q) Are there particular recommendations regarding the data volume the DeltaQueue may grow to without facing the danger of a read failure due to memory problems?
    A) There is no strict limit (except for the restricted number range of the 24-digit QCOUNT counter in the LUW management table - which is of no practical importance, however - or the restrictions regarding the volume and number of records in a database table).
When estimating "smooth" limits, both the number of LUWs and the average data volume per LUW are important. As a rule, we recommend bundling data (usually documents) already when writing to the DeltaQueue to keep the number of LUWs small (this can partly be set in the applications, e.g. in the Logistics Cockpit). The data volume of a single LUW should not be considerably larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1 GByte per work process, 100 MBytes per LUW should not be exceeded). That limit is of rather small practical importance as well, since a comparable limit already applies when writing to the DeltaQueue. If the limit is observed, correct reading is guaranteed in most cases.
    If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data are fetched from all connected BWs as quickly as possible. But for other, BW-specific, reasons, the frequency should not be higher than one DeltaRequest per hour.
    To avoid memory problems, a program-internal limit ensures that never more than 1 million LUWs are read and fetched from the database per DeltaRequest. If this limit is reached within a request, the DeltaQueue must be emptied by several successive DeltaRequests. We recommend, however, to try not to reach that limit but trigger the fetching of data from the connected BWs already when the number of LUWs reaches a 5-digit value.
    Q) I would like to display the date the data was uploaded on the report. Usually, we load the transactional data nightly. Is there any easy way to include this information on the report for users? So that they know the validity of the report.
A) If I understand your requirement correctly, you want to display the date on which data was loaded into the data target from which the report is being executed. If so, configure your workbook to display the text elements in the report. This displays the 'relevance of data' field, which is the date on which the data load took place.
    Q) Can we filter the fields at Transfer Structure?
Q) Can we load data directly into an InfoObject without extraction? Is it possible?
Yes. We can copy from another InfoObject if it is the same. We load the data from the PSA if it is already in the PSA.
Q) HOW MANY DAYS CAN WE KEEP THE DATA IN THE PSA IF WE ARE SCHEDULED DAILY, WEEKLY AND MONTHLY?
    a) We can set the time.
Q) HOW CAN YOU GET THE DATA FROM THE CLIENT IF YOU ARE WORKING ON OFFSHORE PROJECTS? THROUGH WHICH NETWORK?
a) VPN (Virtual Private Network). VPN is a network through which we can connect to the client systems sitting offshore through RAS (Remote Access Server).
Q) HOW DO YOU ANALYZE THE PROJECT AT FIRST?
    Prepare Project Plan and Environment
Define Project Management Standards and Procedures
    Define Implementation Standards and Procedures
    Testing & Go-live + supporting.
Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA AT A TIME TO ALL CUBES; IF ONE CUBE GETS A LOCK ERROR, HOW CAN YOU RECTIFY IT?
Go to TCode SM66, see which process is locked, select that PID, then go to TCode SM12 and unlock it. This happens when lock errors occur during scheduled loads.
    Q) Can anybody tell me how to add a navigational attribute in the BEx report in the rows?
A) Expand the dimension in the left side panel (that is, the InfoCube panel), select the navigational attribute, and drag and drop it into the rows panel.
Q) IS THERE ANY TRANSACTION CODE LIKE SMPT OR STMT?
    In current systems (BW 3.0B and R/3 4.6B) these Tcodes don't exist!
    Q) WHAT IS TRANSACTIONAL CUBE?
    A) Transactional InfoCubes differ from standard InfoCubes in that the former have an improved write access performance level. Standard InfoCubes are technically optimized for read-only access and for a comparatively small number of simultaneous accesses. Instead, the transactional InfoCube was developed to meet the demands of SAP Strategic Enterprise Management (SEM), meaning that, data is written to the InfoCube (possibly by several users at the same time) and re-read as soon as possible. Standard Basic cubes are not suitable for this.
    Q) Is there any way to delete cube contents within update rules from an ODS data source? The reason for this would be to delete (or zero out) a cube record in an "Open Order" cube if the open order quantity was 0.
I've tried using 0RECORDMODE but that doesn't work. Also, would it be easier to write a program that would be run after the load to delete the records with a zero open quantity?
A) In the start routine of the update rules you can write ABAP code.
A) Yes, you can do it. Create a start routine in the update rule.
Strictly speaking it is not "deleting cube contents with update rules"; it is only possible to avoid that some content is updated into the InfoCube, using the start routine. Loop at all the records and delete the ones that match the condition "the open order quantity was 0". You also have to think about before and after images in the case of a delta upload: in that case you may delete the change record, keep the old one, and after the change be left with wrong information.
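A minimal sketch of that start routine, assuming the open order quantity field in the data package is called /BIC/ZOPENQTY (an illustrative name):
* Start routine body: drop records whose open order quantity is zero.
LOOP AT DATA_PACKAGE.
  IF DATA_PACKAGE-/bic/zopenqty = 0.
    DELETE DATA_PACKAGE.
  ENDIF.
ENDLOOP.
* Equivalent one-liner: DELETE DATA_PACKAGE WHERE /bic/zopenqty = 0.
Keep the caveat above about before and after images in mind when the source delivers deltas.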
    Q) I am not able to access a node in hierarchy directly using variables for reports. When I am using Tcode RSZV it is giving a message that it doesn't exist in BW 3.0 and it is embedded in BEx. Can any one tell me the other options to get the same functionality in BEx?
A) Tcode RSZV is used in earlier versions up to 3.0B only. From 3.0B onwards it is possible in the Query Designer (BEx) itself. Just right-click the InfoObject that you want to use as a variable and proceed further, selecting the variable type and processing type.
Q) I am wondering how I can get the values; for example, if I run a report for the month range 01/2004 - 10/2004, the monthly value should actually be the total divided by the number of months that I selected. Which variable should I use?
Q) Why is it that every time I switch from one InfoProvider to an InfoObject, or from one item to another while modeling, I always get the message "Reading Data" or "constructing workbench" and it runs for minutes... any way to stop this?
Q) Can anyone give me info on how the BW delta works? I would also like to know about 'before image and after image'. I am currently in a BW project and have to write start routines for delta loads.
Q) I am very new to BW. I would like to clarify a doubt regarding the delta extractor. If I am correct, by using delta extractors the data that has already been scheduled will not be uploaded again. Take a specific scenario, Sales: I have uploaded all the sales orders created till yesterday into the cube. Now say I make changes to one of the open records that was already uploaded. What happens when I schedule it again? Will the same record be uploaded again with the changes, or will the changes be applied to the previous record?
Q) In BW we need to write ABAP routines. I wish to know when, and what types of, ABAP routines we have to write. Also, are these routines written in update rules? I would be glad if this could be clarified with real-time scenarios and a few examples.
A) We write our routines in the start routine of the update rules or in the transfer structure (you can choose between writing them in the start routine or directly behind the individual characteristics; in the transfer structure you just click the yellow triangle behind a characteristic and choose "routine", and in the update rules you choose "start routine" or click the triangle with the green square behind an individual characteristic). Usually we only use a start routine when it does not concern one single characteristic (for example when you have to read the same table for 4 characteristics). I hope this helps.
    We used ABAP Routines for example:
To convert to uppercase (transfer structure; see the sketch after this list)
    To convert Values out of a third party tool with different keys into the same keys as our SAP System uses (transfer structure)
To select only a part of the data from an InfoSource when updating the InfoCube (start routine), etc.
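For illustration, the uppercase conversion mentioned in the list above fits in a two-line transfer routine body; TRAN_STRUCTURE-SORTL is a hypothetical source field, and RESULT/RETURNCODE belong to the generated routine interface:
* Transfer routine behind a single characteristic: force uppercase.
result = tran_structure-sortl.       " hypothetical source field
TRANSLATE result TO UPPER CASE.
returncode = 0.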
    Q) What is ODS?
    A) An ODS object acts as a storage location for consolidated and cleaned-up transaction data (transaction data or master data, for example) on the document (atomic) level.
    This data can be evaluated using a BEx query.
    Standard ODS Object
    Transactional ODS object:
    The data is immediately available here for reporting. For implementation, compare with the Transactional ODS Object.
    A transactional ODS object differs from a standard ODS object in the way it prepares data. In a standard ODS object, data is stored in different versions ((new) delta, active, (change log) modified), where as a transactional ODS object contains the data in a single version. Therefore, data is stored in precisely the same form in which it was written to the transactional ODS object by the application. In BW, you can use a transaction ODS object as a data target for an analysis process.
    The transactional ODS object is also required by diverse applications, such as SAP Strategic Enterprise Management (SEM) for example, as well as other external applications.
    Transactional ODS objects allow data to be available quickly. The data from this kind of ODS object is accessed transactionally, that is, data is written to the ODS object (possibly by several users at the same time) and reread as soon as possible.
It offers no replacement for the standard ODS object; rather, it is provided as an additional object type alongside the standard one.

  • Consolidation in multiple currencies

    Hi there,
I would like to translate consolidated results into two different currencies; let's say, for example, USD and EUR.
Do I need to create two alternative entity hierarchies whose parent entities' default currencies are USD and EUR? Is there any other possible solution?
Any customized solution would do the job, even if you only highlight the main technical choices, avoiding disclosure of the details of your implementations.
It's very useful to us because we would like to maintain the opportunity to post Contribution Adjs.
    PLEASE HELP!!!

    I will better explain my request:
    PREMISE:
    No Org by Period app; YTD consolidation
The translation sub manages the conversion, applying the YTD average rate, opening rate or closing rate. Some accounts are translated at the historical rate. Translation also manages the calculation of the currency translation reserve in equity.
    Consolidation manages the split of equity reserves between minority and group shares. This split involves also the currency translation reserve, by debiting the "Currency Translation Reserve" account and crediting the "Currency Translation Reserve - Minority Share" account.
Users need to analyze the contribution of each single legal entity (including the split effect) both in USD and EUR. In order to properly translate and consolidate figures, the quickest and most reasonable solution would be to create two alternative hierarchies (one in USD and one in EUR). However this turns into headaches when you have to post Contribution Adjs, as these must be posted twice, once in EUR and once in USD.
    My question is:
is there a customized approach that allows me to avoid redundancy in posting Contribution Adjs and also to reduce the number of alternative hierarchies? Have you ever faced similar issues? How did you solve them?
    Many thanks again

  • Nokia Photos & Lifeblog

    PC Platform : Windows XP pro with SP3
    Nokia Phone N76
    Nokia PC Suite Ver 7.0.9.2
    Nokia Photos Ver 1.5.242
    Nokia Lifeblog Ver 2.5.224
    Nokia Ovi Ver 1.1
Hmmmmmm where to start...... Let's start with Ovi Suite. I downloaded this new software on the basis that it was the recommended product, in preference to Nokia PC Suite which I had been using prior to this download on the 4th Nov 2008.
1. Ovi Suite is NOT a replacement for PC Suite. I have an email from Nokia support telling me so, and that it's OK to run BOTH sets of software. BUT the POINT of MY email to them was that I DO NOT WANT to clutter my PC up with 100s of 'ragtail' Nokia software products - I want all the functionality encapsulated in one product, which is what both these Nokia products promise by NAMING their products 'Nokia xxxx SUITE' (the key word being SUITE).
2. I used Nokia LIFEBLOG under Nokia PC Suite extensively. All my SMS text messages are saved and stored because I use this DATA for business communications. An SMS is a record just like an email; it is invaluable to me to have an accurate record of ALL my SMS text messages. I was experiencing problems with Lifeblog NOT transferring ALL my SMS messages from the phone to Nokia PC Suite: new messages not previously transferred to my PC were not picked up by the Lifeblog software as needing to be transferred to my computer. I still don't know why it doesn't recognise these messages. Is there some sort of 'flag' put on the message in the phone data? Why can't I CHOOSE which INDIVIDUAL messages I want transferred? AND why can't I set a control to transfer a message from the phone to the PC MORE than ONCE? (For instance I may inadvertently have deleted a record of an SMS message in Lifeblog, but the message is still in the phone memory and I want to transfer that message again - there is NO way that I can see to do this.)
3. Lifeblog as a separate application (add-on) to PC Suite was great in its concept, BUT under Ovi Suite it NO longer shows as an application in its OWN right; MYSTERIOUSLY it is part of Nokia Photos!!! WHY WHY WHY - what the heck have SMS text messages got to do with Photos???? I do not know how many BILLIONS of SMS text messages are sent around the world each day using NOKIA phones, but I would guarantee that the number of stored SMS text messages outnumbers the number of photos stored on an individual phone by at least 100,000 to 1 (i.e. 100,000 SMS texts stored per 1 photo). SO COME ON Nokia, put some priority BACK into SMS text message data storage, retrieval and data management.
4. Because Ovi Suite is new software (and by the way it CONVERTS Lifeblog SMS text messages before storing them under 'Nokia Photos Timeline'), the file formats for Nokia Lifeblog and Ovi Photos Timeline must be DIFFERENT and therefore INCOMPATIBLE with each other. There is NO way that I KNOW OF to copy the SMS text messages to Nokia Photos Timeline AND then copy the same messages to Nokia PC Suite Lifeblog. Why would I want to do that? Well, quite simply: if I decide that Ovi Suite is a BAD product then I will want to continue to use Nokia PC Suite Lifeblog WITHOUT losing ANY stored data.
5. My overriding concern with BOTH products is that I have an ACCURATE and DEFINITIVE record of my SMS text messages. And if you think this is stupid or odd, then just think what you would do if you had NO RECORD of IMPORTANT emails. You would be pulling your hair out, especially as the MOBILE phone is as important to businesses as the computer - SO NOKIA please please get your act together. It seems to me that 2 different sets of Nokia software programmers are developing 2 DIFFERENT products that are supposed to do and achieve the SAME thing for the end USER. WHAT A COMPLETE WASTE OF TALENT AND RESOURCE AND MONEY. Give the user choice and management capabilities over all THEIR data, and allow the user to store, back up and KEEP an accurate record of THEIR PERSONAL information, BOTH on the PC and on the Nokia phone. IF it is NOT your mission statement to do this, then WHY oh WHY do you produce the Nokia PC Suite and Nokia Ovi Suite programs? PC-to-phone synchronisation should be effective, accurate and MANAGEABLE for ALL STORED DATA, whether on the phone OR on the PC - COME ON, make your product work for your clients.
6. Perhaps one of the most infuriating aspects of bespoke software development is the way software programmers from ALL providers insist upon having background system tasks loaded into PC memory, utilising VITAL PC resources even if that software program is not being used by the PC user at that precise moment in time. Now that I have loaded Ovi Suite, if I look at my background system processes on my PC I find that I have multiple Nokia software programs hogging PC resources and MEMORY even when I am NOT using them - COME ON YOU LAZY programmers, there must be a better way to develop your software. After all it's OUR MONEY you're playing with, forcing us to buy MORE memory/hard disk storage space when in actual fact WE DON'T NEED IT - AT LEAST give the USER the choice to have these background programs loaded and running or NOT to have them running.
This post is all negative, which is a REAL REAL pity, but it states the FACTS, and NOKIA need to do something about their product implementation - RIGHT NOW.

    Hi,
I don't have answers for all your queries, but still I want to share a little.
    1) OVI Suite is not a replacement for PC suite.
Yes, at least for the near future, until Nokia comes up with all applications bundled under one umbrella (having all the applications of Nokia PC Suite + Nokia Ovi Suite). As of now not all devices are supported by Nokia Ovi Suite, except N-Series and some other phones. It has to think of users who are using a Mac too.
2) If storing messages safely on your PC is the main concern, then you can install Nokia Communication Centre, which is part of Nokia PC Suite 7.0.x.x and above and will store your messages on the PC. You can also use Content Copier to back up the messages and install the Noki software to view what exactly got stored in the backup file (.nbu). It is an unwritten law (correct me if I am wrong) that data (image/video/text message/multimedia message/sound files) which has been synced with Nokia Lifeblog will not sync again with Nokia Photos, and vice versa. The reason behind this could be to avoid redundancy (chances are the files would be duplicated, which would take more memory space on your PC); this is my assumption. To conclude: data synced once with Nokia Lifeblog cannot be (re)synced again with Nokia Photos and vice versa, nor with the same software you have already synced with. One way of doing this for images and videos could be to sync the images with Lifeblog, then mark them as favourites and sync again with Nokia Photos; then they may sync twice.
3) Lifeblog is not shown in Nokia Ovi Suite. Ovi Suite does have an option to import the data from Lifeblog; this may be because Nokia Lifeblog is going to be replaced by Nokia Photos. If you observe, earlier N-Series phones had Lifeblog on the phone (N73, N76 etc.), but the Nokia N78 has Nokia Photos built in along with the device instead of Lifeblog. Yes, I do agree about the SMS part; as I mentioned earlier, if storing the messages safely on your PC is your main concern, you can use Nokia Communication Centre.
These are my comments as a user of Nokia PC software. Surely your comments will be constructive feedback for those who are developing Nokia software.

  • How to use one Assign action to create multiple context variables

    Hello, everyone.
    I read some tips from Oracle documentation that said:
    Avoid creating many OSB context variables that are used just once within another XQuery
    Context variables created using an Assign action are converted to XmlBeans and then reverted to the native XQuery format for the next XQuery. Multiple "Assign" actions can be collapsed into a single Assign action using a FLWOR expression. Intermediate values can be created using "let" statements. Avoiding redundant context variable creation eliminates overheads associated with internal data format conversions. This benefit has to be balanced against visibility of the code and reuse of the variables.
    Oracle® Fusion Middleware Performance and Tuning Guide
11g Release 1 (11.1.1)
    Part Number E10108-03
But I don't know how to do that. Can you show me?
Thanks in advance
    Edited by: Doubt_Man on Aug 17, 2011 3:30 PM

If you return sequences, you can declare the return type of your XQuery as xs:double*
(notice the asterisk at the end),
but I have the impression that in the Assign action only the first element will be assigned to the context variable
(correct me if I am wrong).
So in fact you might indeed have to transform the sequence into an element()*, or concatenate it into a CSV string, for instance using string-join:
http://www.xqueryfunctions.com/xq/fn_string-join.html

  • Best practice for if/else when one outcome results in exit [Bash]

    I have a bash script with a lot of if/else constructs in the form of
if <condition>
then
    <do stuff>
else
    <do other stuff>
    exit
fi
    This could also be structured as
if ! <condition>
then
    <do other stuff>
    exit
fi
<do stuff>
    The first one seems more structured, because it explicitly associates <do stuff> with the condition.  But the second one seems more logical because it avoids explicitly making a choice (then/else) that doesn't really need to be made.
    Is one of the two more in line with "best practice" from pure bash or general programming perspectives?

    I'm not sure if there are 'formal' best practices, but I tend to use the latter form when (and only when) it is some sort of error checking.
    Essentially, this would be when <do stuff> was more of the main purpose of the script, or at least that neighborhood of the script, while <do other stuff> was mostly cleaning up before exiting.
    I suppose more generally, it could relate to the size of the code blocks.  You wouldn't want a long involved <do stuff> section after which a reader would see an "else" and think 'WTF, else what?'.  So, perhaps if there is a substantial disparity in the lengths of the two conditional blocks, put the short one first.
    But I'm just making this all up from my own preferences and intuition.
    When nested this becomes more obvious, and/or a bigger issue.  Consider two scripts:
if [[ test1 ]]
then
    if [[ test2 ]]
    then
        echo "All tests passed, continuing..."
    else
        echo "failed test 2"
        exit
    fi
else
    echo "failed test 1"
fi
if [[ ! test1 ]]
then
    echo "failed test 1"
    exit
fi
if [[ ! test2 ]]
then
    echo "failed test 2"
    exit
fi
echo "passed all tests, continuing..."
    This just gets far worse with deeper levels of nesting.  The second seems much cleaner.  In reality though I'd go even further to
    [[ ! test1 ]] && echo "failed test 1" && exit
    [[ ! test2 ]] && echo "failed test 2" && exit
    echo "passed all tests, continuing..."
    edit: added test1/test2 examples.
    Last edited by Trilby (2012-06-19 02:27:48)

  • Deleted Item not actually getting deleted or **delay** deleted thus getting refetched (EWS Managed API)

I am polling Exchange Server to fetch all the mails left in the Inbox, then fetching them, processing them and finally moving them to the Deleted Items folder. My procedure takes the following form of EWS Managed API calls:
    1 itemResults = service.FindItems(WellKnownFolderName.Inbox, itemView);
    2 //some code...
    3 service.LoadPropertiesForItems(itemResults.Items, itItemPropSet);
    4 //some code...
    5 foreach(Item item in itemResults)
    6 {
    7 //process item
    8 item.Delete(DeleteMode.MoveToDeletedItems);
    9 }
To test this code I flooded the inbox with 5000 mails: 5 threads sending 1000 mails each at an interval of 4 ms.
After the test run, I realized some mails are getting processed multiple times.
When I read the SOAP traces logged using TraceListener, I realized that when there are more than 100 mails in the inbox, the SOAP packets are dispatched in bursts of 100.
    Logically below set of SOAP Packets (let us call it "Set 1") should occur only once:
    <m:FindItem>...</m:FindItem> //generated by line 1
    <m:FindItemResponse>...</m:FindItemResponse> //generated by line 1
    <m:GetItem>...</m:GetItem> //generated by line 3
    <m:GetItemResponse>...</m:GetItemResponse> //generated by line 3
followed by x Delete SOAP packets for the x mails in the Inbox:
    <m:DeleteItem>...</m:DeleteItem> //generated by line 8
<m:DeleteItemResponse>...</m:DeleteItemResponse> //generated by line 8
However, if there are say 325 mails in the inbox, there are first 4 occurrences of Set 1, followed by the Delete SOAP packets, as follows:
    //1st occurence
    <m:FindItem>100 items</m:FindItem>
    <m:FindItemResponse>100 items</m:FindItemResponse>
    <m:GetItem>100 items</m:GetItem>
    <m:GetItemResponse>35 items</m:GetItemResponse>
    //2nd occurence
    <m:FindItem>100 items</m:FindItem>
    <m:FindItemResponse>100 items</m:FindItemResponse>
    <m:GetItem>100 items</m:GetItem>
    <m:GetItemResponse>35 items</m:GetItemResponse>
    //3rd occurence
    <m:FindItem>100 items</m:FindItem>
    <m:FindItemResponse>100 items</m:FindItemResponse>
    <m:GetItem>100 items</m:GetItem>
    <m:GetItemResponse>100 items</m:GetItemResponse>
    //4th occurence
    <m:FindItem>35 items</m:FindItem>
    <m:FindItemResponse>35 items</m:FindItemResponse>
    <m:GetItem>35 items</m:GetItem>
<m:GetItemResponse>35 items</m:GetItemResponse>
//****followed by 335 occurrences of the Delete SOAP packets****
<m:DeleteItem>...</m:DeleteItem>
<m:DeleteItemResponse>...</m:DeleteItemResponse>
//334 more of above
So there are 100 + 100 + 100 + 35 = 335 items actually getting handled, which is 10 more than are actually present in the inbox.
Also notice that the Delete SOAP packets are queued until the end; they are not dispatched in bursts.
First, I realize that dispatching SOAP packets in bursts of 100 is EWS Managed API behavior. I guess there must be a traffic reason behind it, and there must be some configuration setting to tweak it.
Q1. How can we change this count of 100?
I inferred that there are some overlaps in the fetching of mails.
So I copy-pasted the SOAP traces for the first two occurrences of
<m:GetItem>100 items</m:GetItem>
into two different files and compared them in a text comparer. I realised that some mails at the end of the first 100 were reappearing at the beginning of the second 100:
    This is the pattern in many test runs.
Due to these overlapping mails, some mails are getting processed twice. Also, the attempt to delete them a second time was throwing the exception: The specified object was not found in the store.
So I guess there is a time delay between the delete-mail API call and the mails actually getting deleted, during which they are getting refetched.
Q2. How can I handle this to avoid redundant processing of mails and unnecessary exceptions?

