PROJECT MANAGER INTERVIEW QUESTIONS

Hi ABAPers,
Hope you are all doing well.
I recently cleared the technical interview and am now waiting for the project manager interview.
Can anybody please help me with how to handle the project manager round?
What interview questions are typically asked by a project manager?
Please post some Q&As related to this round as soon as possible.
Regards,
SP.

Moderator message - Welcome to SCN
But please read [The Rules of Engagement|https://wiki.sdn.sap.com/wiki/display/HOME/RulesofEngagement] before posting again. Interview questions are not allowed.
post locked
Rob

Similar Messages

  • Organization Management Interview Questions and Answers - Extremely Urgent

    Hi,
    Please let me know Organization Management interview questions and answers. This is most urgent.
    Please do not post a link or website name; a detailed response will be highly appreciated.
    Very Respectfully,
    Sameer.
    SAP HR.

    Hi there,
    Please find herewith the answers to the questions posted on the forum.
    1. What are plan versions used for?
    Ans : Plan versions are scenarios in which you can create organizational plans.
    •     In the plan version which you have flagged as the active plan version, you create your current valid organizational plan. This is also the integration plan version which will be used if integration with Personnel Administration is active.
    •     You use additional plan versions to create additional organizational plans as planning scenarios.
    As a rule, a plan version contains one organizational structure, that is, one root organizational unit. It is, however, possible to create more than one root organizational unit, that is more than one organizational structure in a plan version.
    For more information on creating plan versions, see the Implementation Guide (IMG), under Personnel Management -> Global Settings in Personnel Management -> Plan Version Maintenance.
    2. What are the basic object types?
    Ans. An organization object type has an attribute that refers to an object of the organization management (position, job, user, and so on). The organization object type is linked to a business object type.
    Example
    The business object type BUS1001 (material) has the organization object type T024L (laboratory) as an attribute, which in turn has an object of Organizational Management as an attribute. Thus, a specific material is linked with particular employees via an assigned laboratory.
    3. What is the difference between a job and a position?
    Ans. A job is not concrete; it is a general classification of the tasks to be performed and is generic (e.g. Manager, General Manager, Executive).
    A position is concrete and specific, is related to a person, and is occupied by a person (e.g. Manager - HR, GM - HR, Executive - HR).
    4. What is the difference between an organizational unit and a work centre?
    Ans. Work Centre : A work center is an organizational unit that represents a suitably-equipped zone where assigned operations can be performed. A zone is a physical location in a site dedicated to a specific function. 
    Organization Unit : Organizational object (object key O) used to form the basis of an organizational plan. Organizational units are functional units in an enterprise. According to how tasks are divided up within an enterprise, these can be departments, groups or project teams, for example.
    Organizational units differ from other units in an enterprise such as personnel areas, company codes, business areas etc. These are used to depict structures (administration or accounting) in the corresponding components.
    5. Where can you maintain relationships between objects?
    Ans. Relationships between different objects are maintained in the Relationships infotype (1001).
    There are many types of possible relationships between different objects. Each individual relationship is actually a subtype or category of the Relationships infotype.
    Certain relationships can only be assigned to certain objects. That means that when you create relationship infotype records, you must select a relationship that is suitable for the two objects involved. For example, a relationship between two organizational units might not make any sense for a work center and a job.
    6. What are the main areas of the Organization and Staffing user interfaces?
    Ans. You use the user interface in the Organization and Staffing or Organization and Staffing (Workflow) view to create, display and edit organizational plans.
    The user interface is divided into various areas, each of which fulfills specific functions.
    Search Area
    Selection Area
    Overview Area
    Details Area
    Together, the search area and the selection area make up the Object Manager.
    7. What is Expert Mode used for?
    Ans. Expert Mode is an infotype-based interface used to create the organizational structure. In Expert Mode we create objects using infotypes, and we have to use different transactions to create the various types of objects. If the company needs to create a huge structure, we use Simple Maintenance instead, because it is more user friendly: it is easy to create a structure, and the system creates the relationships between the objects automatically.
    8. Can you create cost centers in Expert Mode?
    Ans. Probably not. You create cost center assignments to assign a cost center to an organizational unit, or position.
    When you create a cost center assignment, the system creates a relationship record between the organizational unit or position and the cost center. (This is relationship A/B 011.) No assignment percentage record can be entered.
    9. Can you assign people to jobs in Expert Mode?
    10. Can you use the organizational structure to create a matrix organization?
    Ans. By depicting your organizational units and the hierarchical or matrix relationships between them, you model the organizational structure of your enterprise.
    This organizational structure is the basis for the creation of an organizational plan, as every position in your enterprise is assigned to an organizational unit. This defines the reporting structure.
    11. In general structure maintenance, is it possible to represent the legal entity of organizational units?
    12. What is the Object Infotype (1000) used for?
    Ans. Infotype that determines the existence of an organizational object.
    As soon as you have created an object using this infotype, you can determine additional object characteristics and relationships to other objects using other infotypes.
    To create new objects you must:
    •     Define a validity period for the object
    •     Provide an abbreviation to represent the object
    •     Provide a brief description of the object
    The validity period you apply to the object automatically limits the validity of any infotype records you append to the object. The validity periods for appended infotype records cannot exceed that of the Object infotype.
    The abbreviation assigned to an object in the system renders it easily identifiable. It is helpful to use easily recognizable abbreviations.
    You can change abbreviations and descriptions at a later time by editing object infotype records. However, you cannot change an object’s validity period in this manner. This must be done using the Delimit function.
    You can also delete the objects you create. However, if you delete an object, the system erases all record of the object from the database. You should only delete objects if they are not valid at all (for example, if you created an object accidentally).
    13. What is the Relationships Infotype (1001) used for?
    Ans. Infotype that defines the Relationships between different objects.
    You indicate that an employee or user holds a position by creating a relationship infotype record between the position and the employee or user. Relationships between various organizational units form the organizational structure in your enterprise. You identify the tasks that the holder of a position must perform by creating relationship infotype records between individual tasks and a position.
    Creating and editing relationship infotype records is an essential part of setting up information in the Organizational Management component. Without relationships, all you have are isolated pieces of information.
    You must decide the types of relationship record you require for your organizational structure.
    If you work in Infotype Maintenance, you must create relationship records manually. However, if you work in Simple Maintenance and Structural Graphics, the system creates certain relationships automatically.
    14. Which status can Infotypes in the Organizational Management component have?
    Ans. Infotype records in Organizational Management can have the planning statuses Active, Planned, Submitted, Approved and Rejected. Once you have created the basic framework of your organizational plan in Simple Maintenance, you can create and maintain all infotypes allowed for the individual objects in your organizational plan. These can be the basic object types of Organizational Management (organizational unit, position, work center and task); you can also maintain object types which do not belong to Organizational Management.
    15. What is an evaluation path?
    Ans. An evaluation path describes a chain of relationships that exists between individual organizational objects in the organizational plan.
    Evaluation paths are used in connection with the definition of roles and views.
    The evaluation path O-S-P describes the relationship chain Organizational unit > Position > Employee.
    Evaluation paths are used to select other objects from one particular organizational object. The system evaluates the organizational plan along the evaluation path.
    Starting from an organizational unit, evaluation path O-S-P is used to establish all persons who belong to this organizational unit or subordinate organizational units via their positions.
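    For illustration, a minimal ABAP sketch of reading such an evaluation path, assuming the standard function module RH_STRUC_GET; the plan version '01' and the organizational unit number 50000123 are example values only.

    DATA: lt_result TYPE STANDARD TABLE OF swhactor,
          ls_result TYPE swhactor.

    CALL FUNCTION 'RH_STRUC_GET'
      EXPORTING
        act_otype      = 'O'          " start object type: organizational unit
        act_objid      = '50000123'   " example organizational unit number
        act_wegid      = 'O-S-P'      " evaluation path Org. unit -> Position -> Person
        act_plvar      = '01'         " plan version (example: active plan version)
      TABLES
        result_tab     = lt_result
      EXCEPTIONS
        no_plvar_found = 1
        no_entry_found = 2
        OTHERS         = 3.

    IF sy-subrc = 0.
      " List all persons (object type P) found along the evaluation path.
      LOOP AT lt_result INTO ls_result WHERE otype = 'P'.
        WRITE: / 'Person:', ls_result-objid.
      ENDLOOP.
    ENDIF.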
    16. What is Managers Desktop used for?
    Ans. Manager's Desktop assists in the performance of administrative and organizational management tasks. In addition to functions in Personnel Management, Manager's Desktop also covers other application components like Controlling, where it supports manual planning or the information system for cost centers.
    17. Is it possible to set up new evaluation paths in Customizing?
    Ans. You can use the evaluation paths available or define your own. Before creating new evaluation paths, check the evaluation paths available as standard.
    18. Which situations require new evaluation paths?
    Ans. When using an evaluation path in a view, you should consider the following:
    Define the evaluation path in such a manner that the relationship chain always starts from a user (object type US in Organizational Management) and ends at an organizational unit, a position or a user.
    When defining the evaluation path, use the Skip indicator in order not to overload the result of the evaluation.
    19. How do you set up integration between Personnel Administration and Organizational Management?
    Ans. Integration between the Organizational Management and Personnel Administration components enables you to:
    Use data from one component in the other
    Keep data in the two components consistent
    Basically, it is the relationship between person and position.
    Objects in the integration plan version in the Organizational Management component must also be contained in the following Personnel Administration tables:
    Tables               Objects
    T528B and T528T      Positions
    T513S and T513       Jobs
    T527X                Organizational units
    If integration is active and you create or delete these objects in Organizational Management transactions, the system also creates or deletes the corresponding entries automatically in the tables mentioned above. Entries that were created automatically are indicated by a "P". You cannot change or delete them manually. Entries you create manually cannot have the "P" indicator (the entry cannot be maintained manually).
    You can transfer either the long or the short texts of Organizational Management objects to the Personnel Administration tables. You do this in the Implementation Guide under Organizational Management -> Integration -> Integration with Personnel Administration -> Set Up Integration with Personnel Administration. If you change these control entries at a later date, you must also change the relevant table texts. To do that you use the report RHINTE10 (Prepare Integration (OM with PA)).
    When you activate integration for the first time, you must ensure that the Personnel Administration and the Organizational Management databases are consistent. To do this, you use the reports:
    •        RHINTE00 (Adopt organizational assignment  (PA to PD))
    •        RHINTE10 (Prepare Integration (PD to PA))
    •        RHINTE20 (Check Program Integration PA - PD)
    •        RHINTE30 (Create Batch Input Folder for Infotype 0001)
    The following table entries are also required:
    •        PLOGI PRELI in Customizing for Organizational Management (under Set Up Integration with Personnel Administration). This entry defines the standard position number.
    •        INTE in table T77FC
    •        INTE_PS, INTE_OSP, INTEBACK, INTECHEK and INTEGRAT in Customizing under Global Settings -> Maintain Evaluation Paths.
    These table entries are included in the SAP standard system. You must not change them.
    Since integration enables you to create relationships between persons and positions (A/B 008), you may be required to include appropriate entries to control the validation of these relationships. You make the necessary settings for this check in Customizing under Global Settings -> Maintain Relationships.
    Sincerely,
    Devang Nandha
    "Together, Transform Business Process by leveraging Information Technology to Grow and Excel in Business".

  • Interview questions on SAP MM Inventory Management?

    Hi all!
    I am preparing for an interview, so can anybody please give me the interview questions for SAP MM inventory management?
    Thanks in advance!

    Hi,
    Follow the links:
    http://www.sap-img.com/materials/sap-material-management-interview-questions.htm
    http://www.sap-basis-abap.com/sapmm.htm
    http://www.simplysap.com/forums/threadflat.htm?tid=32
    http://209.85.28.27/ccforums/PrintPost.aspx?PostID=44
    Regards,
    Biju K

  • Looking for WM interview questions

    Hi all,
    Could anyone please help me find WM interview questions?
    Thank you,
    bye....

    http://www.sap-img.com/materials/interview-questions-on-sap-mm.htm
    http://www.sap-img.com/materials/sap-material-management-interview-questions.htm
    http://www.sap-img.com/materials/sap-mm-support-problem-solution.htm
    Regards
    Kalyan.

  • Looking for interview questions

    hi all,
    I am looking for interview questions; can anyone please help me find them?
    thanks ...

    http://www.sap-img.com/materials/interview-questions-on-sap-mm.htm
    http://www.sap-img.com/materials/sap-material-management-interview-questions.htm
    http://www.sap-img.com/materials/sap-mm-support-problem-solution.htm
    Regards
    Kalyan.

  • Interview Questions related to Warehouse management

    Hi all
    Can you please help me with interview questions related to Warehouse Management?
    Thanks and Regds
    Daniel

    Have you searched in the very first thread,
    [Warehouse Management?|New to Materials Management / Warehouse Management?]

  • Project Management Question - timelines for Production system deployment

    Hi
    I need an expert advice on setting up timelines for establishing an Oracle Apps Production environment. We are implementing Oracle HR & Payroll at an organization.
    Our system architecture is a 2-node application tier and 2-node DB tier (RAC). RAC is configured by another party and we only have to prepare the application tier (2 nodes)
    We would be installing Oracle Apps R11i (11.5.10) with jserv load balancing on the 2-nodes
    Can anyone suggest how much time this should take to install/configure & patching of the application tier (on 2 nodes)?
    Regards
    Saira

    Duplicate post (check your other thread):
    Project Management Question - timelines for Production system deployment
    Re: Project Management Question - timelines for Production system deployment

  • Travel Management functions interview question and answers

    Hi Guys,
    I want to know Travel Management functions interview questions and answers; it is most urgent for me.
    Your detail response will be highly appreciated.
    Thanks – Sam.

    Check this site
    http://help.sap.com/saphelp_470/helpdata/en/73/6bf037f1d6b302e10000009b38f889/frameset.htm
    Regards,
    Ruben

  • Question about elearning - OKP SAP Commercial Project Management

    Hi Gurus,
    I have a question about an e-learning course which SAP offers.
    Course Name: OCPM10
    OCPM10 - OKP SAP Commercial Project Management 1.0 | SAP Training and Certification Shop
    It's an e-learning course of 20 hours; below are my questions:
    Since it is an e-learning course, does it have to be completed within a specified time (within 2-3 days), or do we have access to it throughout the year with a course duration of 20 hours?
    What kind of documentation is provided to me as part of this course?

    1. C_TPLM22_05 - SAP Certified Solution Consultant PLM - Project Management with SAP ERP 2005
    It includes the SAP PLM 235 course material along with PLM 200, 210, 220 and 230.
    2. C_TPLM22_60 - SAP Certified Application Associate - Project Management with SAP ERP 6.0
    It does not include the PLM 235 course.
    This is the basic difference between them. In the certification there are 6-7 areas, and in each of them you have to score more than 70%; there are also a few multiple-answer questions. Finally, an overall score is calculated, and it should be higher than 70%. So I would ideally weight all the topics equally.
    With Regards
    Nitin P.

  • Project manager question

    I'm doing multicam work, with lots of footage.
    PPro seems to be bogging down my system. Therefore, I've decided to export my multicam sequences into separate projects.
    My question is: if I grab the main project and the multicam pre-comp, does Premiere intelligently re-link all existing footage and keep the project bin intact, or does Premiere re-conform files and duplicate all the footage?
    Also, while we're on the subject, the option to check "Rename Media Files to Match Clip names" has me confused.
    Where would you use this?

    Also, while we're on the subject, the option to check "Rename Media Files to Match Clip names" has me confused.
    Where would you use this?
    When your original footage is called e.g. clip1, clip2, clip3 etc. and you renamed the same clips in the project window to e.g. flowers, trees, plains.
    If you check that option, the original footage is renamed to flowers, trees etc.

  • SAP BI INTERVIEW QUESTIONS

    Hi Friends,
    I recently faced an interview.
    Could you please send answers to these questions?
    How many data fields and key fields can we create in a DSO?
    Can you overwrite key fields or data fields?
    Which update mode do we use in delta queue extraction (V1, V2 or V3)?
    Which message do we get when a transported request fails?
    What is the structural difference between an InfoCube and a DSO?
    Data loading takes a huge amount of time when we extract data from the source system to the BI system; how do we solve this? (Earlier it took 3-4 hours; now data loading takes 4 days.)

    What is the difference between a display attribute and a navigational attribute? How do you make a display attribute and a navigational attribute?
    How to load flat file data?
    How to load hierarchy file data?
    What is HACR?
    How to maintain HACR?
    If there is any issue with HACR, how do we resolve it?
    What is a Baby Cube?
    Why do we create aggregates?
    What is the use of aggregates?
    Is there any particular field on which we can create aggregates, or can we maintain an aggregate on any field?
    What are the different DSOs available, and what is the difference between those DSOs?
    What is replacement path?
    What are the extractor types?
    • Application Specific
    o BW Content FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-Generated Extractors: LIS, FI-SL, CO-PA
    • Cross Application (Generic Extractors)
    o DB View, InfoSet, Function Module
    2. What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5 Select the DataSources
    o LBWE Maintain DataSources and Activate Extract Structures
    o LBWG Delete Setup Tables
    o OLI*BW Fill setup tables
    o RSA3 Check extraction and the data in Setup tables
    o LBWQ Check the extraction queue
    o LBWF Log for LO Extract Structures
    o RSA7 BW Delta Queue Monitor
    3. How to create a connection with LIS InfoStructures?
    • LBW0 Connecting LIS InfoStructures to BW
    4. What is the difference between ODS and InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrite and data is in transparent
    tables, ideal for drilldown and RRI.
    • CUBE: Follows the star schema, we can only append data, ideal for primary
    reporting.
    • MultiProvider: Does not have physical data. It allows to access data from
    different InfoProviders (Cube, ODS, InfoObject). It is also preferred for
    reporting.
    5. What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data
    has been written to the PSA and before the transfer rules have been executed.
    It allows complex computations for a key figure or a characteristic. It has no
    return value. Its purpose is to execute preliminary calculations and to store
    them in global DataStructures. This structure or table can be accessed in the
    other routines. The entire DataPackage in the transfer structure format is used
    as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. It is
    like the Start Routine. It is independent of the DataSource. We can use this to
    define Global Data and Global Checks.
    6. What is the difference between start routine and update routine, when, how
    and why are they called?
    • Start routine can be used to access InfoPackage while update routines are
    used while updating the Data Targets.
    7. What is the table that is used in start routines?
    • The table structure will always be the structure of an ODS or InfoCube. For
    example, if it is an ODS, then the active table structure will be the table.
    8. Explain how you used Start routines in your project?
    • Start routines are used for mass processing of records. In the start routine, all
    the records of the DataPackage are available for processing, so we can process all
    these records together. In one scenario, we wanted to apply size percentages to the
    forecast data. For example, if material M1 is forecast to be 100 in May, then after
    applying the size percentages (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we
    wanted to have 4 records against the one single record coming in the InfoPackage.
    This is achieved in the start routine, as sketched below.
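    A minimal ABAP sketch of such a start routine (BW 3.x style): the DATA_PACKAGE fields SIZE and FORECAST_QTY and the hard-coded percentages are hypothetical, for illustration only.

    DATA: lt_split LIKE DATA_PACKAGE OCCURS 0 WITH HEADER LINE.

    LOOP AT DATA_PACKAGE.
      " Create one record per size, splitting the forecast quantity.
      lt_split = DATA_PACKAGE.
      lt_split-size         = 'S'.
      lt_split-forecast_qty = DATA_PACKAGE-forecast_qty * 20 / 100.
      APPEND lt_split.
      lt_split-size         = 'M'.
      lt_split-forecast_qty = DATA_PACKAGE-forecast_qty * 40 / 100.
      APPEND lt_split.
      lt_split-size         = 'L'.
      lt_split-forecast_qty = DATA_PACKAGE-forecast_qty * 20 / 100.
      APPEND lt_split.
      lt_split-size         = 'XL'.
      lt_split-forecast_qty = DATA_PACKAGE-forecast_qty * 20 / 100.
      APPEND lt_split.
    ENDLOOP.

    " Replace the incoming records with the size-split records.
    DATA_PACKAGE[] = lt_split[].
    ABORT = 0.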
    9. What are Return Tables?
    • When we want to return multiple records, instead of single value, we use the
    return table in the Update Routine. Example: If we have total telephone expense
    for a Cost Center, using a return table we can get
    expense per employee.
    10. How do start routine and return table synchronize with each other?
    • Return table is used to return the Value following the execution of start
    routine
    11. What is the difference between V1, V2 and V3 updates?
    • V1 Update: It is a Synchronous update. Here the Statistics update is carried
    out at the same time as the document update (in the application
    tables).
    • V2 Update: It is an Asynchronous update. Statistics update and the Document
    update take place as different tasks.
    o V1 & V2 don't need scheduling.
    • Serialized V3 Update: The V3 collective update must be scheduled as a job
    (via LBWE). Here, document data is collected in the order it was created and
    transferred into the BW as a batch job. The transfer sequence may not be the
    same as the order in which the data was created in all scenarios. V3 update
    only processes the update data that is successfully processed with the V2
    update.
    12. What is compression?
    • It is a process that removes the request IDs (collapsing the requests into the compressed E fact table of the InfoCube), and this saves space.
    13. What is Rollup?
    • This is used to load new DataPackages (requests) into the InfoCube
    aggregates. If we have not performed a rollup then the new InfoCube data will
    not be available while reporting on the aggregate.
    14. What is table partitioning and what are the benefits of partitioning in an
    InfoCube?
    • It is the method of dividing a table in a way that enables quick access.
    SAP uses fact table partitioning to improve performance. We can partition only
    on 0CALMONTH or 0FISCPER. Table partitioning helps to run reports faster as
    data is stored in the relevant partitions. Also, table maintenance becomes
    easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB,
    Microsoft SQL Server and IBM DB2/400 do not.
    15. How many extra partitions are created and why?
    • Two extra partitions are created: one for dates before the begin date and one
    for dates after the end date.
    16. What are the options available in transfer rule?
    • InfoObject
    • Constant
    • Routine
    • Formula
    17. How would you optimize the dimensions?
    • We should define as many dimensions as possible and we have to take care that
    no single dimension crosses more than 20% of the fact table size.
    18. What are Conversion Routines for units and currencies in the update rule?
    • Using this option we can write ABAP
    code for Units / Currencies conversion. If we enable this flag then unit of Key
    Figure appears in the ABAP code as an additional parameter. For example, we can
    convert units in Pounds to Kilos.
    19. Can an InfoObject be an InfoProvider, how and why?
    • Yes, when we want to report on Characteristics or Master Data. We have to
    right click on the InfoArea and select "Insert characteristic as data
    target". For example, we can make 0CUSTOMER as an InfoProvider and report
    on it.
    20. What is Open Hub Service?
    • The Open Hub Service enables us to distribute data from an SAP BW system into
    external Data Marts, analytical applications, and other applications. We can
    ensure controlled distribution using several systems. The central object for
    exporting data is the InfoSpoke. We can define the source and the target object
    for the data. BW becomes a hub of an enterprise data warehouse.
    The distribution of data becomes clear through central monitoring from the
    distribution status in the BW system.
    21. How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination
    requirement.
    22. What is ODS?
    • An Operational Data Store is used for detailed storage of data. We can overwrite
    data in the ODS. The data is stored in transparent tables.
    23. What are BW Statistics and what is its use?
    • They are a group of Business Content InfoCubes which are used to measure
    performance for query and load monitoring. They also show the usage of
    aggregates, OLAP and warehouse management.
    http://www.ittestpapers.com/articles/713/3/SAP-BW-Interview-Questions---Part-A/Page3.html
    Communication Structure and Transfer rules
    • Create an InfoPackage
    • Load Data
    25. What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)
    Q) Under which menu path is the Test Workbench to be found, including in
    earlier Releases?
    The menu path is: Tools - ABAP Workbench - Test - Test Workbench.
    Q) I want to delete a BEx query that is in the Production system through a request.
    Is anyone aware of how to do this?
    A) Have you tried the RSZDELETE transaction?
    Q) Errors while monitoring process chains.
    A) Errors can occur during data loading. Apart from that, in process chains you add many
    process types; for example, after loading data into an InfoCube you roll up data
    into aggregates, and this rollup of data into aggregates is a process type
    which you keep after the process type for loading data into the cube. This rollup
    into aggregates might fail.
    Another one: after you load data into an ODS, you activate the ODS data (another
    process type); this might also fail.
    Q) In Monitor -> Details (Header/Status/Details) -> Under Processing (data
    packet): Everything OK -> Context menu of Data Package 1 (1 Records): Everything
    OK -> Simulate update. (Here we can debug update rules or transfer rules.)
    SM50 -> Program/Mode -> Program -> Debugging & debug this work process.
    Q) PSA Cleansing.
    A) You know how to edit PSA. I don't think you can delete single records. You
    have to delete entire PSA data for a request.
    Q) Can we make a DataSource support delta?
    A) If this is a custom (user-defined) DataSource, you can make the DataSource
    delta-enabled. While creating the DataSource from RSO2, after entering the DataSource
    name and pressing Create, on the next screen there is a button at the top
    which says Generic Delta. If you want more details about this, there is a
    chapter on it towards the last pages of the extraction book.
    Generic delta services: -
    Supports delta extraction for generic extractors according to:
    Time stamp
    Calendar day
    Numeric pointer, such as document number & counter
    Only one of these attributes can be set as a delta attribute.
    Delta extraction is supported for all generic extractors, such as tables/views,
    SAP Query and function modules
    The delta queue (RSA7) allows you to monitor the current status of the delta
    attribute
    Q) Workbooks, as a general rule, should be transported with the
    role.
    Here are a couple of scenarios:
    1. If both the workbook and its role have been previously transported, then the
    role does not need to be part of the transport.
    2. If the role exists in both dev and the target system but the workbook has
    never been transported, and then you have a choice of transporting the role
    (recommended) or just the workbook. If only the workbook is transported, then
    an additional step will have to be taken after import: Locate the WorkbookID
    via Table RSRWBINDEXT (in Dev and verify the same exists in the target system)
    and proceed to manually add it to the role in the target system via Transaction
    Code PFCG -- ALWAYS use control c/control v copy/paste for manually adding!
    3. If the role does not exist in the target system you should transport both
    the role and workbook. Keep in mind that a workbook is an object unto itself
    and has no dependencies on other objects. Thus, you do not receive an error
    message from the transport of 'just a workbook' -- even though it may not be
    visible, it will exist (verified via Table RSRWBINDEXT).
    Overall, as a general rule, you should transport roles with workbooks.
    Q) How much time does it take to extract 1 million (10 lakh) records into
    an InfoCube?
    A. This depends, if you have complex coding in update rules it will take longer
    time, or else it will take less than 30 minutes.
    Q) What are the five ASAP Methodologies?
    A: Project Preparation, Business Blueprint, Realization, Final Preparation, and Go-Live & Support.
    1. Project Preparation: In this phase, decision makers define clear project
    objectives and an efficient decision making process ( i.e. Discussions with the
    client, like what are his needs and requirements etc.). Project managers
    will be involved in this phase (I guess).
    A Project Charter is issued and an implementation strategy is outlined in this
    phase.
    2. Business Blueprint: It is a detailed documentation of your company's
    requirements (i.e. which objects we need to develop or modify, depending
    on the client's requirements).
    3. Realization: In this phase the implementation of the project takes place (development
    of objects etc.), and we are involved in the project from here onwards.
    4. Final Preparation: Final preparation before going live i.e. testing,
    conducting pre-go-live, end user training etc.
    End user training is given that is in the client site you train them how to
    work with the new environment, as they are new to the technology.
    5. Go-Live & support: The project has gone live and it is into production.
    The Project team will be supporting the end users.
    Q) What is the landscape of R/3 and what is the landscape of BW? (I am not sure
    about the landscape of R/3.)
    The landscape of BW: you have the development system, testing system and production system.
    Development system: All the implementation work is done in this system (i.e.
    analysis, development of objects, modification etc.), and from here the objects are
    transported to the testing system; but before transporting, an initial test
    known as unit testing (testing of objects) is done in the development system.
    Testing/Quality system: quality check is done in this system and integration
    testing is done.
    Production system: All the extraction part takes place in this sys.
    Q) How do you measure the size of infocube?
    A: In the number of records.
    Q). Difference between infocube and ODS?
    A: An InfoCube is structured as a star schema (extended) where a fact table is
    surrounded by different dimension tables that are linked with DIM IDs. Data-wise,
    you will have aggregated data in the cubes, with no overwrite functionality.
    An ODS is a flat structure (flat table) with no star schema concept, which will
    have granular data (detailed level) and overwrite functionality.
    Flat file DataSources do not support 0RECORDMODE in extraction
    (X = before image, ' ' = after image, N = new, A = additive, D = delete, R = reverse).
    Q) Difference between display attributes and navigational attributes?
    A: Display attribute is one, which is used only for display purpose in the
    report. Where as navigational attribute is used for drilling down in the
    report. We don't need to maintain Navigational attribute in the cube as a
    characteristic (that is the advantage) to drill down.
    Q. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
    A: But how is it possible? If you load it manually twice, then you can delete
    it by requestID.
    Q. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
    Sure you can. ODS is nothing but a table.
    Q. CAN NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
    A) Yes of course. For example, for loading text and hierarchies we use
    different data sources but the same InfoSource.
    Q. BRIEF THE DATAFLOW IN BW.
    A) Data flows from the transactional system to the analytical system (BW). DataSources
    on the transactional system need to be replicated on the BW side and attached to
    an InfoSource and update rules respectively.
    Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER
    RULES?
    Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
    FULL and DELTA.
    Q) AS WE USE Sbwnn, sbiw1, sbiw2 FOR DELTA UPDATE IN LIS, THEN
    WHAT IS THE PROCEDURE IN THE LO-COCKPIT?
    There is no LIS in the LO Cockpit. We will have DataSources, and they can be maintained
    (append fields). Refer to the white paper
    on LO Cockpit extraction.
    Q) Why we delete the setup tables (LBWG) & fill them (OLI*BW)?
    A) Initially we don't delete the setup tables, but when we make a change to the extract
    structure we go for it. When we change the extract structure, that means
    there are some newly added fields that were not there before. So, to get the
    required data (i.e. only the data which is required, and to avoid
    redundancy), we delete and then refill the setup tables.
    This also refreshes the statistical data.
    The extraction setup reads the dataset that you want to process (such as
    customer orders, with tables like VBAK and VBAP) and fills the relevant communication
    structure with the data. The data is stored in cluster
    tables, from where it is read when the initialization is run. It is important
    that during the initialization phase no one generates or modifies application
    data, at least until the setup tables have been filled.
    Q) SIGNIFICANCE of ODS?
    It holds granular data (detailed level).
    Q) WHERE THE PSA DATA IS STORED?
    In PSA table.
    Q) WHAT IS DATA SIZE?
    The volume of data one data target holds (in no. of records)
    Q) Different types of INFOCUBES.
    Basic, Virtual (remote, sap remote and multi)
    A virtual cube is used, for example, when all the information has to be updated
    online, as in a railway reservation system. For designing a virtual cube you have
    to write a function module that links it to the underlying table; the virtual cube is like
    a structure, and whenever the table is updated the virtual cube fetches the
    data from the table and displays the report online. FYI: you will find more information
    at https://www.sdn.sap.com/sdn/index.sdn by searching for "Designing Virtual Cube", which gives
    good material on designing the function module.
    Q) INFOSET QUERY.
    Can be made of ODS's and Characteristic InfoObjects with masterdata.
    Q) IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
    In R/3 or in BW? 2 in R/3 and 2 in BW
    Q) ROUTINES?
    They exist in the InfoObject (InfoObject routines), transfer routines, update routines and the start routine.
    Q) BRIEF SOME STRUCTURES USED IN BEX.
    In the rows and columns, you can create structures.
    Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
    Different Variable's are Texts, Formulas, Hierarchies, Hierarchy nodes &
    Characteristic values.
    Variable Types are
    Manual entry /default value
    Replacement path
    SAP exit
    Customer exit
    Authorization
    Q) HOW MANY LEVELS YOU CAN GO IN REPORTING?
    You can drill down to any level by using Navigational attributes and jump
    targets.
    Q) WHAT ARE INDEXES?
    Indexes are database indexes, which help in retrieving data faster.
    Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
    Help! Refer documentation
    Q) IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
    No.
    Q) WHAT IS THE SIGNIFICANCE OF KPI'S?
    KPI's indicate the performance of a company. These are key figures
    Q) AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?
    After image (correct me if I am wrong).
    Q) REPORTING AND RESTRICTIONS.
    Help! Refer documentation.
    Q) TOOLS USED FOR PERFORMANCE TUNING.
    ST22, Number ranges, delete indexes before load. Etc
    Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?
    There should be some tool to run the job daily (SM37 jobs).
    Q) AUTHORIZATIONS.
    Profile generator
    Q) WEB REPORTING.
    What are you expecting??
    Q) CAN CHARECTERSTIC INFOOBJECT CAN BE INFOPROVIDER.
    Of course
    Q) PROCEDURES OF REPORTING ON MULTICUBES
    Refer help. What are you expecting? MultiCube works on Union condition
    Q) EXPLAIN TRANSPORTATION OF OBJECTS?
    Dev -> Quality and Dev -> Production.
    Q) What types of partitioning are there for BW?
    There are two partitioning performance aspects for BW (Cube & PSA):
    A) Query data retrieval performance improvement:
    Partitioning by (say) date range improves data retrieval by making best use of
    database [data range] execution plans and indexes (of, say, the Oracle database engine).
    B) Transactional load partitioning improvement:
    Partitioning based on expected load volumes and data element sizes improves
    data loading into the PSA and cubes by InfoPackages (e.g. without timeouts).
    Q) How can I compare data in R/3 with data in a BW Cube after the daily delta
    loads? Are there any standard procedures for checking them or matching the
    number of records?
    A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the
    number of records extracted. Then go to BW Monitor to check the number of
    records in the PSA and check to see if it is the same & also in the monitor
    header tab.
    A) RSA3 is a simple extractor checker program that allows you to rule out
    extraction problems in R/3. It is simple to use, but only really tells you if the
    extractor works. Since records that get updated into Cubes/ODS structures are
    controlled by Update Rules, you will not be able to determine what is in the
    Cube compared to what is in the R/3 environment. You will need to compare
    records on a 1:1 basis against records in R/3 transactions for the functional
    area in question. I would recommend enlisting the help of the end user community
    to assist since they presumably know the data.
    To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute
    and you will see the record count, you can also go to display that data. You
    are not modifying anything so what you do in RSA3 has no effect on data quality
    afterwards. However, it will not tell you how many records should be expected
    in BW for a given load. You have that information in the monitor RSMO during
    and after data loads. From RSMO for a given load you can determine how many
    records were passed through the transfer rules from R/3, how many targets were
    updated, and how many records passed through the Update Rules. It also gives
    you error messages from the PSA.
    Q) Types of Transfer Rules?
    A) Field to Field mapping, Constant, Variable & routine.
    Q) Types of Update Rules?
    A) (Check box), Return table
    Q) Transfer Routine?
    A) Routines, which we write in, transfer rules.
    Q) Update Routine?
    A) Routines, which we write in Update rules
    Q) What is the difference between writing a routine in transfer rules and
    writing a routine in update rules?
    A) If you are using the same InfoSource to update data in more than one data
    target, it is better to write it in the transfer rules, because you can assign one InfoSource
    to more than one data target, and whatever logic you write in the update rules
    is specific to that one particular data target.
    Q) Routine with Return Table.
    A) Update rules generally only have one return value. However, you can create a
    routine in the tab strip key figure calculation, by choosing checkbox Return
    table. The corresponding key figure routine then no longer has a return value,
    but a return table. You can then generate as many key figure values, as you
    like from one data record.
    Q) Start routines?
    A) Start routines can be written in both update rules and transfer rules. Suppose
    you want to restrict (delete) some records based on conditions before they get
    loaded into the data targets; then you can specify this in the update rules start
    routine.
    Example: DELETE DATA_PACKAGE, which means it will delete records based on the
    condition, as in the sketch below.
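    A minimal ABAP sketch of such a filter in a BW 3.x start routine; the field /BIC/ZSTATUS and the value 'X' are hypothetical examples only.

    " Drop records that should not reach the data target.
    DELETE DATA_PACKAGE WHERE /bic/zstatus = 'X'.

    " Returning 0 lets processing of the remaining records continue.
    ABORT = 0.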
    Q) X & Y Tables?
    X-table = A table to link material SIDs with SIDs for time-independent
    navigation attributes.
    Y-table = A table to link material SIDs with SIDS for time-dependent navigation
    attributes.
    There are four types of sid tables
    X time independent navigational attributes sid tables
    Y time dependent navigational attributes sid tables
    H hierarchy sid tables
    I hierarchy structure sid tables
    Q) Filters & Restricted Key figures (real time example)
    Restricted KF's u can have for an SD cube: billed quantity, billing value, no:
    of billing documents as RKF's.
    Q) Line-Item Dimension (give me an real time example)
    Line-Item Dimension: Invoice no: or Doc no: is a real time example
    Q) What does the number in the 'Total' column in Transaction RSA7 mean?
    A) The 'Total' column displays the number of LUWs that were written in the
    delta queue and that have not yet been confirmed. The number includes the LUWs
    of the last delta request (for repetition of a delta request) and the LUWs for
    the next delta request. A LUW only disappears from the RSA7 display when it has
    been transferred to the BW System and a new delta request has been received
    from the BW System.
    Q) How do I know which table (in SAP BW) contains the technical name, description
    and creation date of a particular report (reports that are created using the BEx
    Analyzer)?
    A) There is no single such table in BW; if you want to know such details while you are
    opening a particular query, press the Properties button and you will see all
    the details that you wanted.
    You will find information about technical names and descriptions of
    queries in the following tables: the directory of all reports (table RSRREPDIR) and
    the directory of the reporting component elements (table RSZELTDIR). For workbooks
    and the connections to queries, check the where-used list for reports in workbooks
    (table RSRWORKBOOK) and the titles of Excel workbooks in the InfoCatalog (table
    RSRWBINDEXT).
    Q) What is a LUW in the delta queue?
    A) A LUW from the point of view of the delta queue can be an individual
    document, a group of documents from a collective run or a whole data packet of
    an application
    extractor.
    Q) Why does the number in the 'Total' column in the overview screen of
    Transaction RSA7 differ from the number of data records that is displayed when
    you call the detail view?
    A) The number on the overview screen corresponds to the total of LUWs (see also
    first question) that were written to the qRFC queue and that have not yet been
    confirmed. The detail screen displays the records contained in the LUWs. Both,
    the records belonging to the previous delta request and the records that do not
    meet the selection conditions of the preceding delta init requests are filtered
    out. Thus, only the records that are ready for the next delta request are
    displayed on the detail screen. In the detail screen of Transaction RSA7, a
    possibly existing customer exit is not taken into account.
    Q) Why does Transaction RSA7 still display LUWs on the overview screen after
    successful delta loading?
    A) Only when a new delta has been requested does the source system learn that
    the previous delta was successfully loaded to the BW System. Then, the LUWs of
    the previous delta may be confirmed (and also deleted). In the meantime, the
    LUWs must be kept for a possible delta request repetition. In particular, the
    number on the overview screen does not change when the first delta was loaded
    to the BW System.
    Q) Why are selections not taken into account when the delta queue is filled?
    A) Filtering according to selections takes place when the system reads from the
    delta queue. This is necessary for reasons of performance.
    Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has
    also been loaded successfully?
    It is most likely that this is a DataSource that does not send delta data to
    the BW System via the delta queue but directly via the extractor (delta for
    master data using ALE change pointers). Such a DataSource should not be
    displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
    Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the
    loading procedure from the delta queue?
    A) The impact is limited. If performance problems are related to the loading
    process from the delta queue, then refer to the application-specific notes (for
    example in the CO-PA area, in the logistics cockpit area and so on).
    Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as
    effective for the delta queue as for a full update. Please note, however, that
    LUWs are not split during data loading for consistency reasons. This means that
    when very large LUWs are written to the DeltaQueue, the actual package size may
    differ considerably from the MAXSIZE and MAXLINES parameters.
    Q) Why does it take so long to display the data in the delta queue (for example
    approximately 2 hours)?
    A) With Plug In 2001.1 the display was changed: the user has the option of
    defining the amount of data to be displayed, to restrict it, to selectively
    choose the number of a data record, to make a distinction between the 'actual'
    delta data and the data intended for repetition and so on.
    Q) What is the purpose of function 'Delete data and meta data in a queue' in
    RSA7? What exactly is deleted?
    A) You should act with extreme caution when you use the deletion function in
    the delta queue. It is comparable to deleting an InitDelta in the BW System and
    should preferably be executed there. You do not only delete all data of this
    DataSource for the affected BW System, but also lose the entire information
    concerning the delta initialization. Then you can only request new deltas after
    another delta initialization.
    When you delete the data, the LUWs kept in the qRFC queue for the corresponding
    target system are confirmed. Physical deletion only takes place in the qRFC
    outbound queue if there are no more references to the LUWs.
    The deletion function is for example intended for a case where the BW System,
    from which the delta initialization was originally executed, no longer exists
    or can no longer be accessed.
    Q) Why does it take so long to delete from the delta queue (for example half a
    day)?
    A) Import PlugIn 2000.2 patch 3. With this patch the performance during
    deletion is considerably improved.
    Q) Why is the delta queue not updated when you start the V3 update in the
    logistics cockpit area?
    A) It is most likely that a delta initialization had not yet run or that the
    delta initialization was not successful. A successful delta initialization (the
    corresponding request must have QM status 'green' in the BW System) is a
    prerequisite for the application data being written in the delta queue.
    Q) What is the relationship between RSA7 and the qRFC monitor (Transaction
    SMQ1)?
    A) The qRFC monitor basically displays the same data as RSA7. The internal
    queue name must be used for selection on the initial screen of the qRFC
    monitor. This is made up of the prefix 'BW', the client and the short name of
    the DataSource. For DataSources whose name are 19 characters long or shorter,
    the short name corresponds to the name of the DataSource. For DataSources whose
    name is longer than 19 characters (for delta-capable DataSources only possible
    as of PlugIn 2001.1) the short name is assigned in table ROOSSHORTN.
    In the qRFC monitor you cannot distinguish between repeatable and new LUWs.
    Moreover, the data of a LUW is displayed in an unstructured manner there.
    Q) Why is there data in the delta queue although the V3 update was not started?
    A) Data was posted in background. Then, the records are updated directly in the
    delta queue (RSA7). This happens in particular during automatic goods receipt
    posting (MRRS). There is no duplicate transfer of records to the BW system. See
    Note 417189.
    Q) Why does button 'Repeatable' on the RSA7 data details screen not only show
    data loaded into BW during the last delta but also data that were newly added,
    i.e. 'pure' delta records?
    A) It was programmed in such a way that the request in repeat mode fetches both
    actually repeatable (old) data and new data from the source system.
    Q) I loaded several delta inits with various selections. For which one is the
    delta loaded?
    A) For delta, all selections made via delta inits are summed up. This means, a
    delta for the 'total' of all delta initializations is loaded.
    Q) How many selections for delta inits are possible in the system?
    A) With simple selections (intervals without complicated join conditions or
    single values), you can make up to about 100 delta inits. It should not be
    more.
    With complicated selection conditions, it should be only up to 10-20 delta
    inits.
    Reason: With many selection conditions that are joined in a complicated way,
    too many 'where' lines are generated in the generated ABAP
    source code that may exceed the memory limit.
    Q) I intend to copy the source system, i.e. make a client copy. What will
    happen with my delta? Should I initialize again after that?
    A) Before you copy a source client or source system, make sure that your deltas
    have been fetched from the DeltaQueue into BW and that no delta is pending.
    After the client copy, an inconsistency might occur between BW delta tables and
    the OLTP delta tables as described in Note 405943. After the client copy, Table
    ROOSPRMSC will probably be empty in the OLTP since this table is
    client-independent. After the system copy, the table will contain the entries
    with the old logical system name that are no longer useful for further delta
    loading from the new logical system. The delta must be initialized in any case
    since delta depends on both the BW system and the source system. Even if no
    dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you
    should expect that the delta has to be initialized after the copy.
    Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of
    processes?
    A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW
    queues only after informing the BW Support or only if this is explicitly
    requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.
    Q) Despite of the delta request being started after completion of the
    collective run (V3 update), it does not contain all documents. Only another
    delta request loads the missing documents into BW. What is the cause for this
    "splitting"?
    A) The collective run submits the open V2 documents for processing to the task
    handler, which processes them in one or several parallel update processes in an
    asynchronous way. For this reason, plan a sufficiently large "safety time
    window" between the end of the collective run in the source system and the
    start of the delta request in BW. An alternative solution where this problem
    does not occur is described in Note 505700.
    Q) Despite my deleting the delta init, LUWs are still written into the
    DeltaQueue?
    A) In general, delta initializations and deletions of delta inits should always
    be carried out at a time when no posting takes place. Otherwise, buffer
    problems may occur: If a user started the internal mode at a time when the
    delta initialization was still active, he/she posts data into the queue even
    though the initialization had been deleted in the meantime. This is the case in
    your system.
    Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some
    entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What
    do these statuses mean? Which values in the field 'Status' mean what and which
    values are correct and which are alarming? Are the statuses BW-specific or
    generally valid in qRFC?
A) Tables TRFCQOUT and ARFCSSTATE: status READ means that the record was read
once, either in a delta request or in a repetition of the delta request.
However, this does not yet mean that the record has successfully reached BW.
Status READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has
been written into the DeltaQueue and will be loaded into BW with the next delta
request or a repetition of a delta. In any case, only the statuses READ, READY
and RECORDED in both tables are considered valid.
The status EXECUTED in TRFCQOUT can occur temporarily. It is set, before a
delta extraction starts, for all records with status READ present at that
time. Records with status EXECUTED are usually deleted from the queue in
packages within a delta request, directly after the status is set and before a
new delta is extracted. If you see such records, it means that either a process
that confirms and deletes records already loaded into BW is successfully
running at the moment, or, if the records remain in the table with status
EXECUTED for a longer period of time, it is likely that there are problems
deleting records that have already been loaded successfully into BW. In this
state, no more deltas are loaded into BW. Every other status indicates an error
or an inconsistency. NOSEND in SMQ1 means nothing (see Note 378903).
The value 'U' in the field NOSEND of table TRFCQOUT, however, is a cause for
concern.
    Q) The extract structure was changed when the DeltaQueue was empty. Afterwards
    new delta records were written to the DeltaQueue. When loading the delta into
    the PSA, it shows that some fields were moved. The same result occurs when the
    contents of the DeltaQueue are listed via the detail display. Why are the data
    displayed differently? What can be done?
A) Make sure that the change to the extract structure is also reflected in the
database and that all servers are synchronized. We recommend resetting the
buffers using transaction $SYNC. If the extract structure change is not
communicated synchronously to the server where the delta records are being
created, the records are written with the old structure until the new structure
has been generated there. This can have disastrous consequences for the delta.
When the problem occurs, the delta has to be re-initialized.
    Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW Request Monitor. If the request
is RED, the next load will be of type 'Repeat'. If you need to repeat the last
load for certain reasons, set the request in the monitor to red manually. For
the contents of the repeat, see Question 14. If a delta request is set to red
even though its data has already been updated, a subsequent repeat will create
duplicate records unless the data has first been deleted from the data targets
concerned.
    Q) As of PI 2003.1, the Logistic Cockpit offers various types of update
    methods. Which update method is recommended in logistics? According to which
    criteria should the decision be made? How can I choose an update method in
    logistics?
A) See the recommendation in Note 505700.
    Q) Are there particular recommendations regarding the data volume the
    DeltaQueue may grow to without facing the danger of a read failure due to
    memory problems?
    A) There is no strict limit (except for the restricted number range of the
    24-digit QCOUNT counter in the LUW management table - which is of no practical
    importance, however - or the restrictions regarding the volume and number of
    records in a database table).
When estimating "smooth" limits, both the number of LUWs and the average data
volume per LUW are important. As a rule, we recommend bundling the data
(usually documents) when writing to the DeltaQueue in order to keep the number
of LUWs small (this can partly be configured in the applications, e.g. in the
Logistics Cockpit). The data volume of a single LUW should not be considerably
larger than 10% of the memory available to the work process for data extraction
(in a 32-bit architecture with a memory volume of about 1 GByte per work
process, 100 MBytes per LUW should not be exceeded). This limit is of rather
small practical importance, since a comparable limit already applies when
writing to the DeltaQueue. If the limit is observed, correct reading is
guaranteed in most cases.
If the number of LUWs cannot be reduced by bundling application transactions,
you should at least make sure that the data are fetched from all connected BW
systems as quickly as possible. For other, BW-specific reasons, however, the
frequency should not be higher than one delta request per hour.
To avoid memory problems, a program-internal limit ensures that no more than
1 million LUWs are read and fetched from the database per delta request. If this
limit is reached within a request, the DeltaQueue must be emptied by several
successive delta requests. We recommend, however, not letting the queue grow to
that limit, but triggering the fetch of data by the connected BW systems once
the number of LUWs reaches a five-digit value.
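As a purely illustrative, read-only sketch (the report name is made up, and the
queue-name column QNAME of TRFCQOUT is assumed here), the following ABAP counts
the LUWs per outbound queue so you can see when a queue approaches the
five-digit range mentioned above. Queue contents themselves should only ever be
handled via SMQ1/RSA7 as described earlier.

REPORT z_delta_queue_luw_count.
* Read-only diagnostic: count the LUW entries per outbound qRFC queue.
* Per the recommendation above, the connected BW systems should already
* be fetching data once a queue reaches a five-digit LUW count.
DATA: lv_qname TYPE trfcqout-qname,
      lv_luws  TYPE i.

SELECT qname COUNT( * )
       FROM trfcqout
       INTO (lv_qname, lv_luws)
       GROUP BY qname.
  WRITE: / lv_qname, lv_luws.
ENDSELECT.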
    Q) I would like to display the date the data was uploaded on the
    report. Usually, we load the transactional data nightly. Is there any easy way
    to include this information on the report for users? So that they know the
    validity of the report.
A) If I understand your requirement correctly, you want to display the date on
which data was loaded into the data target from which the report is being
executed. If so, configure your workbook to display the text elements of the
query. These include the "relevance of data" field, which is the date on which
the data load took place.
    Q) Can we filter the fields at Transfer Structure?
Q) Can we load data directly into an InfoObject without extraction? Is it
possible?
A) Yes. We can copy from another InfoObject if the structure is the same, or
load the data from the PSA if it is already there.
Q) How many days can we keep the data in the PSA if loads are scheduled daily,
weekly and monthly?
a) There is no fixed limit; we can set the retention time as required.
Q) How can you get the data from the client if you are working on an offshore
project, and through which network?
a) Through a VPN (Virtual Private Network). A VPN is a type of network over
which we can connect to the client's systems from offshore through a RAS
(remote access server).
Q) How do you analyse the project at first?
a) Prepare the project plan and environment; define project management
standards and procedures; define implementation standards and procedures;
then testing, go-live and support.
Q) There is one ODS and 4 InfoCubes. We send data to all the cubes at the same
time, and one cube got a lock error. How can you rectify the error?
a) Go to transaction SM66, see which process is locked, note its PID, then go
to SM12 and release the lock. Such lock errors occur when the loads you have
scheduled run in parallel against the same target.
    Q) Can anybody tell me how to add a navigational attribute in the BEx report in
    the rows?
A) In the Query Designer, expand the dimension in the left-hand panel (the
InfoCube panel), select the navigational attribute, and drag and drop it into
the Rows panel.
Q) Are there any transaction codes like SMPT or STMT?
A) In current systems (BW 3.0B and R/3 4.6B) these transaction codes do not exist.
    Q) WHAT IS TRANSACTIONAL CUBE?
    A) Transactional InfoCubes differ from standard InfoCubes in that the former
    have an improved write access performance level. Standard InfoCubes are
    technically optimized for read-only access and for a comparatively small number
of simultaneous accesses. The transactional InfoCube, by contrast, was developed
to meet the demands of SAP Strategic Enterprise Management (SEM): data is
written to the InfoCube (possibly by several users at the same time) and
re-read as soon as possible. Standard BasicCubes are not suitable for this.
    Q) Is there any way to delete cube contents within update rules from an ODS
    data source? The reason for this would be to delete (or zero out) a cube record
    in an "Open Order" cube if the open order quantity was 0.
    I've tried using the 0recordmode but that doesn't work. Also, would it
    be easier to write a program that would be run after the load and delete
    the records with a zero open qty?
A) In the start routine of the update rules you can write ABAP code for this.
A) Yes, you can do it: create a start routine in the update rules.
Strictly speaking this is not "deleting cube contents with update rules"; the
start routine can only prevent certain records from being updated into the
InfoCube. Loop over the data package and delete every record that meets the
condition "the open order quantity is 0". You also have to think about before-
and after-images in the case of a delta upload: if you delete only the change
record there, the old value remains and, after the change, the cube holds the
wrong information.
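To make this concrete, here is a minimal sketch of what the body of such a
start routine might look like in BW 3.x update rules. The field name OPEN_QTY
is a hypothetical placeholder; replace it with the field of your communication
structure that actually carries the open order quantity.

* Body of the update-rules start routine (BW 3.x style, sketch only).
* OPEN_QTY is a placeholder field name for the open order quantity.
  DELETE DATA_PACKAGE WHERE open_qty = 0.
* Caution, as noted above: in a delta upload both before- and
* after-images pass through this routine, so dropping only the
* zero-quantity after-image can leave the old value standing in the cube.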
    Q) I am not able to access a node in hierarchy directly using variables for
    reports. When I am using Tcode RSZV it is giving a message that it doesn't
    exist in BW 3.0 and it is embedded in BEx. Can any one tell me the other
    options to get the same functionality in BEx?
A) Transaction RSZV is used only in releases earlier than 3.0B. From 3.0B
onwards, this is possible in the Query Designer (BEx) itself: just right-click
the InfoObject you want to use as a variable and proceed by selecting the
variable type and processing type.

  • Interview  questions need solution

    Dear all
I have a few interview questions.
1. Is it possible to do a credit check at the base value (say the PR00 value)? Generally the credit check is done at net value, open items or total value, but is it possible at base value?
2. What is the transaction code for the list of customers?
3. Can we delete a customer master record (CMR)? If yes, how; if not, why not?
4. What is the system landscape of your latest client?
5. How can shipment calculation be done, and what are the necessary configuration settings to carry out the shipping calculation?
6. Is Solution Manager implemented in your company? How and why would an SD functional consultant use it?
Your immediate response is highly appreciated.
    thanks
    sateesh

    Hi
Here are more details on your questions.
1. Is it possible to do a credit check at the base value (say the PR00 value)? Generally the credit check is done at net value, open items or total value, but is it possible at base value?
Ans: In your pricing procedure there is a column "Subtotal"; option A means "Carry over price to KOMP-CMPRE (credit price)". Whichever condition type you maintain this option for, the system carries out the credit check against the value of that condition type.
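As an illustration only (the step number here is made up), the relevant row of the pricing procedure in transaction V/08 would look roughly like this:

  Step 10   CTyp PR00   Price   ...   Subtotal A

With Subtotal 'A' on the PR00 line, the PR00 value is copied into KOMP-CMPRE (the credit price), and the automatic credit check then uses this base value instead of the net value when calculating the credit exposure.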
2. Tcode for the list of customers
Ans. The standard SAP transaction code is VCUST; at table level, go to SE11 and use tables KNA1 (general data) and KNVV (sales area data).
3. Can we delete a CMR? If yes, how; if not, why not?
Ans. Generally, master data created in SAP cannot be deleted. You can only block the customer master, either overall or at a specific sales area level.
4. What is the system landscape of your latest client?
Ans. This depends entirely on the customer. Some customers keep a landscape like DEVELOPMENT - QUALITY - PRODUCTION, whereas others keep DEVELOPMENT - QUALITY - PRODUCTION - SANDBOX (a replica of the production server for R&D or for the support team's research purposes).
5. How can shipment calculation be done, and what are the necessary configuration settings to carry out the shipping calculation?
Ans. The question is not clear as to whether you mean shipment cost calculation or transportation time calculation. For shipment cost, you make the settings in SPRO -> Logistics Execution -> Transportation -> Shipment Cost (this is where you maintain the shipment cost configuration).
6. Is Solution Manager implemented in your company? How and why would an SD functional consultant use it?
Ans. Solution Manager is a very important implementation tool. It is a kind of document repository in which you maintain the information for the various implementation steps. It is one place where you keep all project information, such as project-related issues with their processing status and other documentation.
    Regards
    Shambhu Sarkar

  • Could anyone please tell me how to explain an Informatica project in an interview?

Hi Narayan,
Based on the points you provided below about project explanation, I have added a generic description of my project. My explanation is not complete, though; please guide me in the right direction and add some points from your side that will help me in interviews.
> Project domain (optional: the architecture or design format your client follows)
Our project is in the aviation domain. Its objective is to fetch data from heterogeneous source systems and load it into the DWH according to the business scenarios/requirements.
> Your responsibilities in detail
My responsibility is to develop mappings according to the business requirements. We receive an LLD in which the mapping flow is given along with the transformation details, so we develop the mappings based on the LLD.
> What kind of data is coming, and a brief on the different sources
Our source systems are mostly flat files and relational tables (Oracle), and the targets are relational tables (Oracle).
> Bottlenecks you encountered in your project, with solutions
What should I explain here? Please provide an example.
> Some new experience (technical)
Please add two or three points here.

What are your daily routines?
Checking whether any important mails have come in; discussing with the team lead if there is any work to do; sending the team lead a status mail by end of day; attending the weekly status meeting.
How many mappings have you created altogether in your projects?
So far I have done 3 projects: the first had 48 workflows, the second 52 workflows, and the third 156 workflows so far.
In which account does your project fall?
Account? I believe manufacturing and advanced services.
What is your reporting hierarchy?
Me --> team lead --> project manager --> programming manager; also me --> AR --> HR.
How many complex mappings have you created? Can you describe a situation for which you developed such a complex mapping?
I think 8. One was a character-by-character comparison (not possible in Informatica alone), for which I wrote about 950 lines of SQL in a single piece of code; one was a dynamic hierarchy distribution; one handled multi-byte characters (Japanese, Chinese, etc.); and so on.
What is your involvement in performance tuning in your project?
Sometimes; in any case a performance team takes care of my code.
What is the schema of your project, and why did you opt for that particular schema?
some_Prd --> that is the source for us.
What are your roles in this project?
As a developer: designing the workflows, unit testing, and so on.
Can you give one situation where something you adopted improved performance dramatically?
Yes; in my first project I faced a performance issue after go-live.
Were you involved in more than two projects simultaneously?
Of course; I was involved in 3 projects at a time.
Do you have any experience in production support?
No.
What kinds of testing have you done on your project (unit, integration, system, or UAT)? And what enhancements were done after testing?
Unit testing and sometimes integration testing. UAT is done by the business, not the BI team.
How many dimension tables are there in your project, and how are they linked to the fact table?
In the current project, 18 dimensions and 2 fact tables; the relationship is always through the dimension keys.
How do we do the fact load?
The fact table is loaded after the dimension loads are complete.
How did you implement CDC in your project?
Change data capture is always based on the source modification date.
What does your mapping from file to load look like?
Source --> ODS --> flat file (.dat) with an "e cap" delimiter.
What does your mapping from load to stage look like?
It depends on the project: source --> ODS --> stage.
What does your mapping from stage to ODS look like?
Stage to ODS? Never.
What is the size of your data warehouse?
10 TB.
What are your daily and weekly feed sizes?
Feed size?
Which approach (top-down or bottom-up) was used in building your project?
Bottom-up; do you mean dimension to fact, or fact to dimension?
How do you access your sources (are they flat files or relational)?
Relational, and sometimes flat files as well.
Have you developed any stored procedures or triggers in this project? How did you use them, and in which situations?
No.
Did your project go live? What issues did you face while moving it from the test environment to the production environment?
Yes, it did, and I faced some issues.
What is the biggest challenge you encountered in this project?
Dynamic hierarchy data distribution, moving the files from the Unix box to the Informatica directory through a shell script, and having to clean the data in the flat file itself.
What scheduling tool did you use in this project, and how did you schedule jobs with it?
Dollar Universe, using the $U session task.
--- Naresh Neelam. If you have any questions, please send me a mail; I don't have access outside the network at my work.

  • Oracle Financials Functional Situational Interview Questions

    Hello Oracle Fin Brothers,
    For Oracle Financial Functional Consultant
I am new to Oracle Financials; I have just completed my training and I have my first interview on Thursday. The recruiter tells me the first step will be a situational interview, and I really don't understand what this means. Can someone tell me exactly what a situational interview for an Oracle Financials functional consultant is like? What are the possible questions and answers? Does someone have a list of situational interview questions and answers to help me? I would really appreciate it.
The second step will be a skills interview; what is the difference from the situational interview? What are the possible questions and answers? Can someone provide a list of possible questions and answers for an Oracle functional consultant?
    Hope to get some helpful post
    Thanks
    Ferrari

    Hi,
I can tell you about my experience in different processes.
I usually have a first interview with the recruiter/agency. This first interview is quite general: the recruiter reviews your CV and job history, asks about your interest in or motivation for the process, whether you are willing to travel or relocate, your salary expectations, and your knowledge of the different modules. This is the easier interview, and if everything is OK the recruiter will tell you about the next steps, probably about the project, the employer, etc.
The second interview is normally with the consultancy company, with the manager and an experienced consultant who usually asks technical questions. They ask, for example, how to set up a dual accounting process with SLA in GL, which reports you would use to reconcile AP/AR and GL, how many segments you need to set up in GL for a specific business situation, how to load assets in FA, and what the accounting steps are from buying goods until you account for the AP invoices.
In some cases you also have a final interview with the customer, just to get final approval, but the really important one is the second interview.
    Good Luck ;-)
    xhuertax

  • Real time interview questions in XI

    Hi guys,
Can anyone send me real-time interview questions in XI at my mail id [email protected]? Points will be awarded.
    Thanks in advance

Hi, check some of the FAQs below. Some of them are not answered.
1. Which of the following are components of XI?
MDM, Adapter Framework, RWB, SLD, IS

    2. What is a XI Pipeline?
    3. Source element occurs once whereas the target element it is mapped to is produced 3 times when the mapping is executed. Why does this happen?
    4. A context object is used in place of what?
5. What is a UDF? What are the mandatory functions that you use in a Java-based UDF?
    a. Init() , Execute(), Destroy(), Run(), SetParameter()
    6. ABAP mapping is implemented using what?
7. When you don't find the ABAP mapping option in the IR, what do you do?
8. Can the different mapping types (Java, MM, XSLT, ABAP) be called in any order in an interface mapping? True or false?
9. Is 'true' case-sensitive in a Boolean function, and can 1 be interpreted as Boolean TRUE?
10. What is a context change?
    11. What are the protocols that the Mail adapter supports
    12. Why is a SAP BC used?
    13. WSDL representation of a Message Interface is used to generate what kind of proxies?
    a. ABAP Proxies
    b. Java Proxies
    c. Neither ABAP nor Java
    d. Both ABAP and Java proxies
    14. Would you configure a Sender IDOC communication channel?
    15. You are required to upload additional libraries for the JMS adapter. How would you do it?
    16. QoS that a Sender JDBC communication channel supports
    17. What are the transport protocols a JMS adapter supports?
    18. Would you configure the Integration Server as a Logical system in a scenario where IDOCs are being sent from a SAP R/3 system to XI?
    19. Why do we specify the Logical System name in the SLD?
    20. The pre-requisites for sending IDOCs to an XI system
    a. Connection parameters must be maintained in SWCV
    b. User must have administration rights in XI
    c. The IDOC metadata must be imported into IR
    21. You need to post a transaction using a RFC. How would you accomplish this?
    a. Use a async BAPI call with implicit Commit?
    b. Use a async BAPI call with explicit Commit?
    22. what is PCK? What is the necessity for a PCK?
23. In a company, the central Adapter Engine is installed close to the business partner's site. Why do you think this is done?
    24. The flow of a message entering the Adapter Engine from Integration Server is--
    a. It is queued, processed using module processors and then posted to the backend application
    b. It is processed using module processors, queued and then posted to the backend application
    25. Is the persistence layer used by the Adapter Engine and the Integration Engine (Integration Server) same?
    26. Is the Message ID specified in the Integration Engine same as the Message ID used during the Message transformation in the Adapter engine?
    27. Would you configure a Sender HTTP adapter?
    28. QoS in case of a RFC Receiver adapter
    29. Sync-Async bridge is used for?
    30. A Business Process is
    a. Executable cross component
    b. Can send and receive messages
    31. What is the purpose of a deadline branch
    32. What is SXI_CACHE used for?
    33. Container elements can be typed to what ?
    34. Why is a Wait step used?
    35. A block can have which of the following?
    a. Multiple Exception branches
    b. Multiple Condition branches
36. For which step types can you use a correlation?
    37. Which of the following is true?
    a. Blocks can be Nested
    b. Blocks can be Overlapped
    38. You need to collect and club messages in a container element coming from different steps. How would you do this?
    39. In case of a Block, which of the following is true?
    Elements of a super container are visible in sub-containers
    Elements of a subordinate container are not visible in all blocks
    Elements defined in the process container are visible in all blocks
40. What are correlation and local correlation?
    41. Where can you use N: M transformation?
    42. Alert framework uses/leverages CCMS?
    43. If you want to cancel a process and set its status to ‘Logically Deleted’ when a Deadline is reached, do you need to use a Control Step having its Action as ‘CancelProcess’ or is it automatically done?
    44. What are the ways an Exception can be triggered?
    45. What would be the best architecture after implementing SAP XI? Implementing EDI adapter(s).
    46. How to run the Adapter engine as a service?
    47. How SAP Netweaver supports a holistic approach to BPM (Business Process Management)?
    48. What is the role of SAP XI?
    49. How can we differentiate SAP XI from Business Connector (BC)?
    50. How to send mail from SAP XI?
    51. What are the migration steps from XI 2.0 to XI 3.0?
    52. XI will support synchronous communication and asynchronous communication?
    A. Yes
    B. No
    53. Integration server contains the following components?
    A. Additional integration services
    B. Integration Engine
    C. Business Process Engine
    D. Integration Repository
    54. Integration Repository provides the following components?
    A. Business processes
    B. Mapping Objects
    C. Components at design time
    D. Imported objects
    55. What is the usage of Web Application server in XI?
    56. What is the use of RFC and IDOC Adapters in XI?
57. How do you convert WSDL (Web Services Description Language) to the target language?
58. Which component generates the Java classes?
    59. What is ESA (Enterprise Service Architecture)?
    60. What are the key elements of ESA?
    61. How to transport SLD, Integration Directory & Integration Repository objects to the Production system?
    62. Can we import XSD Schemas into XI 2.0?
63. Which API do you use for Java mapping?
64. You use a context object in place of what?
65. To make a non-mandatory node mandatory, what should you do?
66. In an RFC communication the sender system sends an RFC call but the target system does not receive it. What do you think went wrong?
67. What is the difference between an XI business process and a workflow?
68. When you use transaction SXMB_MONI for process monitoring, which field tells you that the entry belongs to a business process?
69. What are the different XI components?
70. In which places can you use a receive step?
71. In which steps can you activate a correlation?
1. How many interfaces have you developed in your project?
2. What is land accepted?
3. What is your team size?
4. What is the work assignment procedure in your organization?
5. What is your complete company object?
6. What was the necessity of developing that scenario?
7. What is the advantage over other integration tools?
8. What is a sender agreement?
9. What is a receiver agreement?
10. Tell me the steps for multiple IDocs to file (BPM scenario).
11. Tell me the steps for file to multiple IDocs (BPM).
12. How do you create alerts in BPM?
13. How did you use third-party adapters in your project?
14. How do you use external objects?
15. What is the use of node functions in XI? (Give an example.)
16. Examples of RFC lookups?
    regards,
    Brahmaji.
