Analytic Application in CE7.2

Hi,
I developed an analytic application to display a pie chart in Web Dynpro. When I run the application, the pie chart is not displayed. A small icon appears, and when I click on it a spinning wheel keeps running but no output is displayed.
Are there any specific prerequisites we need to fulfill to run analytic applications? I am following the PDF "Developing Analytics Applications with Webdynpro for Java".
Please help.
Regards
V. Suresh Kumar

Solved

Similar Messages

  • MCW_AA111 System not a valid BW target system for analytical applications

    Dear Gurus,
    Our project would like to use the retail allocation strategy with reference to SAP BW, such as:
    Quotas based on SAP BW data
    Top down (SAP BW)
    Bottom up (SAP BW)
    To retract the data from BW, we go to the configuration of Data Retraction for Retail/CP (transaction MCW_AA).
    In the Maintenance of Queries for Analytical Applications, we wanted to add a new entry to the query list. We can find the Target Sys., but when we tried the input help (F4) of the Analyt. Appl. Query, the message "MCW_AA111: System is not a valid BW target system for analytical applications" popped up.
    An RFC connection has already been established between SAP Retail and BW.
    Does anybody have a clue?
    Thank you.
    Alex.

    Hi Kevin,
    Check whether the following OSS note helps:
    Note 983449 - Termin A122 1COLUMN no valid characteristic of infoprovider
    Symptom
    Termination A 122 Brain occurs when you test and generate a query. The system does not recognise the characteristic 1COLUMN.
    Other terms
    Query, condition, COB_PRO
    Reason and Prerequisites
    This problem is caused by a program error.
    Solution
    SAP NetWeaver 2004s BI
               Import Support Package 10 for SAP NetWeaver 2004s BI (BI Patch 10 or SAPKW70010) into your BI system. The Support Package is available once Note 914304 "SAPBINews BI 7.0 Support Package 10", which describes this Support Package in more detail, has been released for customers.
    In urgent cases, you can implement the correction instructions.
    You must first implement Notes 932065, 935140, 948389, 964580 and 969846, which provide information about transaction SNOTE. Otherwise, problems and syntax errors may occur when you deimplement some notes.

  • Is TimesTen fit for analytical applications? + my test result

    Quite surprised by the performance of TimesTen on OLTP applications, I wanted to find out whether TimesTen performs as well in OLAP (analytical applications).
    I have the test table as below:
    "ddate" date,
         "B_5"     char(2),
         "C_10"     char(2),
         "D_1000"     char(4),
         "E_2"     char(2),
         "ttl"     smallint
    I have 5 million rows in the table. The number of unique values per attribute is B_5 = 5, C_10 = 10, D_1000 = 1000, and an index is created on each attribute except ttl.
    The data store size is over 200 MB, which is quite acceptable. I set the permanent size to 800 MB, so the dataset now fits comfortably in physical memory (I have 2 GB of memory).
    Now I have a bunch of heavy queries like this:
    select sum(ttl) from syn where D_1000 in ('v106','v111','v113','v128','v130','v193','v250','v277','v28','v292','v3','v317','v32','v337','v34','v341','v381','v389','v415','v421','v445','v468','v487','v535','v566','v574','v575','v600','v621','v628','v63','v643','v663','v667','v671','v679','v68','v690','v691','v701','v733','v747','v754','v768','v769','v774','v779','v805','v809','v818','v824','v825','v867','v880','v881','v919','v952','v958','v984','v986','v991','v995') and C_10 in ('v1','v10','v4','v8','v9') group by ddate
    The time per query varies between 2 and 12 seconds; I have never been able to see the microsecond-per-query performance shown in the benchmarks.
    I am using the direct connection method on a 4-CPU machine with 2 GB of physical memory, Windows XP SP2, and a single connection; the query tool is ttIsql.
    Is this performance reasonable for TimesTen?

    Our TTree index can be used to speed up a query when the condition specifies constraints on a prefix of the index keys. Each constraint must be an equality, except that the constraint on the last key of the prefix may be an inequality. You can take advantage of this to reduce the number of indexes created, since an index takes space to store and time to maintain. Note that for an update we touch an index if and only if some column of that index changed, and when we update an index we delete the old value and insert the new one, so the cost is double that of an insert or delete. If space is not an issue and/or updates are infrequent, then creating more indexes should be no problem; otherwise you have to balance select performance against update cost.
    You should also consider updating statistics so the best index can be used when no single index covers all the constraints. For instance, in this case with single-column indexes, the better index to use is the one on d_1000 because it has more distinct values, but the optimizer won't know that unless statistics have been collected. So you may get acceptable performance without additional indexes, just with better statistics.
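    To make the advice above concrete, here is a minimal JDBC sketch of both steps: creating a composite index whose prefix covers the constrained columns, and refreshing optimizer statistics. This is a sketch under assumptions, not a tested recipe: the DSN and index names are placeholders, and the ttOptUpdateStats call should be checked against your TimesTen release. Note also that IN-list predicates are not plain equalities, so for the query above the statistics refresh is the more important step.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class TuneSyn {
            public static void main(String[] args) throws Exception {
                // Load the TimesTen driver explicitly (needed on older JVMs).
                Class.forName("com.timesten.jdbc.TimesTenDriver");
                // Direct-linked connection; "mydsn" is a placeholder DSN.
                Connection con = DriverManager.getConnection("jdbc:timesten:direct:dsn=mydsn");
                try (Statement st = con.createStatement()) {
                    // Composite TTree index whose prefix matches the constrained columns.
                    st.execute("CREATE INDEX syn_ix ON syn (d_1000, c_10)");
                }
                // Refresh optimizer statistics so the most selective index is chosen.
                try (CallableStatement cs = con.prepareCall("{CALL ttOptUpdateStats('syn', 1)}")) {
                    cs.execute();
                }
                con.close();
            }
        }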

  • SAP Sample Applications on CE7.1

    Gurus,
    We are trying to install / configure the simple SAP applications provided by SAP in the link below.
    We are proceeding step by step with the documents provided by SAP, but we see that in our CE7.1 EHP1 version the menu path mentioned in the SAP document does not exist.
    Can anyone who has implemented these simple applications help us in pointing what is wrong?
    [http://esworkplace.sap.com/socoview(bD1lbiZjPTAwMSZkPW1pbg==)/render.asp?packageid=DE0426DD9B0249F19515001A64D3F462&id=EBF08FD8067241F787448B3EB87DA04E]

    Hi,
    I have implemented these samples in CE7.1 EHP1.
    Do you mean that you can't find Window -> Preferences -> General -> Network Connections in your IDE? If this is the problem, you need to reinstall your IDE. While installing, it will ask for the update site; you have to provide it correctly, and only then will all the plugins be installed.
    If this is not your problem, then kindly let me know what you mean by "We don't see the menu path".
    Regards,
    Sudhir

  • Setting the BC Components - Analytical Applications - SAP Library


    Hi Valerie,
    Thanks for the reply. Yes, I have checked that the SPN is unique and it is set to the FQDN of the ABAP server. We only have one domain at the client, not multiple domains. In SPNEGO the keytab is configured for the domain with the service user; SPNEGO also picks up the SPNs correctly, and the SPN uniqueness check and token check in SPNEGO work as well.
    I also checked the klist output, and it has a whole list of Kerberos tickets for various encryption types for my user, including e.g. AES-256-CTS-HMAC-SHA1-96, which is one of the algorithms in SPNEGO.
    So I don't know what else to check...

  • Issue while creating a new application in Hyperion Planning

    Hi,
    When I try to create a new application in Hyperion Planning, the log below appears.
    The Planning server started fine in 4328 ms (as shown in the log below), but once I click the Finish button to create the new application I run into the issue, and the same error keeps repeating on the Planning server.
    I can see the new application in the EAS console, but I cannot see the application in Workspace or Planning Web.
    When I restart the Planning server the same error repeats, but when I delete the application from EAS and then RECONFIGURE Planning (database, instance and datasource of Planning) in Shared Services and restart the Planning server, the error goes away (most probably because all the old tables are dropped in the database).
    Please help me resolve the issue.
    Planning server Log:
    Mar 31, 2008 3:40:24 PM org.apache.coyote.http11.Http11Protocol init
    INFO: Initializing Coyote HTTP/1.1 on http-8300
    Mar 31, 2008 3:40:24 PM org.apache.catalina.startup.Catalina load
    INFO: Initialization processed in 766 ms
    Mar 31, 2008 3:40:24 PM org.apache.catalina.core.StandardService start
    INFO: Starting service Catalina
    Mar 31, 2008 3:40:24 PM org.apache.catalina.core.StandardEngine start
    INFO: Starting Servlet Engine: Apache Tomcat/5.0.28
    Mar 31, 2008 3:40:25 PM org.apache.catalina.core.StandardHost start
    INFO: XML validation disabled
    Mar 31, 2008 3:40:25 PM org.apache.catalina.core.StandardHost getDeployer
    INFO: Create Host deployer for direct deployment ( non-jmx )
    Mar 31, 2008 3:40:25 PM org.apache.catalina.core.StandardHostDeployer install
    INFO: Installing web application at context path /HyperionPlanning from URL file
    :G:\Hyperion\deployments\Tomcat5\HyperionPlanning\webapps\HyperionPlanning
    Creating rebind thread to RMI
    Mar 31, 2008 3:40:29 PM org.apache.coyote.http11.Http11Protocol start
    INFO: Starting Coyote HTTP/1.1 on http-8300
    Mar 31, 2008 3:40:29 PM org.apache.jk.common.ChannelSocket init
    INFO: JK2: ajp13 listening on /0.0.0.0:8302
    Mar 31, 2008 3:40:29 PM org.apache.jk.server.JkMain start
    INFO: Jk running ID=0 time=0/31 config=null
    Mar 31, 2008 3:40:29 PM org.apache.catalina.startup.Catalina start
    INFO: Server startup in 4328 ms
    [INFO] AuthChallengeProcessor - basic authentication scheme selected
    [INFO] AuthChallengeProcessor - basic authentication scheme selected
    Setting Arbor path to: G:\Hyperion\common\EssbaseRTC\9.3.1
    [INFO] AuthChallengeProcessor - basic authentication scheme selected
    [INFO] AuthChallengeProcessor - basic authentication scheme selected
    [INFO] AuthChallengeProcessor - basic authentication scheme selected
    [INFO] AuthChallengeProcessor - basic authentication scheme selected
    Connection to the datasource created successfully.
    Query Failed: SQL_SYSDB_DELETE_EXPIRED_EXTERNAL_ACTIONS:[100]
    java.sql.SQLException: [Hyperion][Oracle JDBC Driver][Oracle]ORA-00942: table or
    view does not exist
    at hyperion.jdbc.base.BaseExceptions.createException(Unknown Source)
    at hyperion.jdbc.base.BaseExceptions.getException(Unknown Source)
    at hyperion.jdbc.oracle.OracleImplStatement.execute(Unknown Source)
    at hyperion.jdbc.base.BaseStatement.commonExecute(Unknown Source)
    at hyperion.jdbc.base.BaseStatement.executeUpdateInternal(Unknown Source
    at hyperion.jdbc.base.BasePreparedStatement.executeUpdate(Unknown Source
    at com.hyperion.planning.sql.HspSQLImpl.executeUpdate(Unknown Source)
    at com.hyperion.planning.sql.HspSQLImpl.executeUpdate(Unknown Source)
    at com.hyperion.planning.event.HspSysExtChangeHandler.actionPoller(Unkno
    wn Source)
    at com.hyperion.planning.event.HspSysExtChangeHandler.run(Unknown Source
    Error encountered with Database connection, recreating connections.
    Nested Exception: java.sql.SQLException: [Hyperion][Oracle JDBC Driver][Oracle]O
    RA-00942: table or view does not exist
    Query Failed: SQL_SYSDB_DELETE_EXPIRED_EXTERNAL_ACTIONS:[100]
    java.sql.SQLException: [Hyperion][Oracle JDBC Driver][Oracle]ORA-00942: table or
    view does not exist
    at hyperion.jdbc.base.BaseExceptions.createException(Unknown Source)
    at hyperion.jdbc.base.BaseExceptions.getException(Unknown Source)
    at hyperion.jdbc.oracle.OracleImplStatement.execute(Unknown Source)
    at hyperion.jdbc.base.BaseStatement.commonExecute(Unknown Source)
    at hyperion.jdbc.base.BaseStatement.executeUpdateInternal(Unknown Source
    at hyperion.jdbc.base.BasePreparedStatement.executeUpdate(Unknown Source
    at com.hyperion.planning.sql.HspSQLImpl.executeUpdate(Unknown Source)
    at com.hyperion.planning.sql.HspSQLImpl.executeUpdate(Unknown Source)
    at com.hyperion.planning.event.HspSysExtChangeHandler.actionPoller(Unkno
    wn Source)
    at com.hyperion.planning.event.HspSysExtChangeHandler.run(Unknown Source
    Error encountered with Database connection, recreating connections.
    Nested Exception: java.sql.SQLException: [Hyperion][Oracle JDBC Driver][Oracle]O
    RA-00942: table or view does not exist
    Query Failed: SQL_SYSDB_DELETE_EXPIRED_EXTERNAL_ACTIONS:[100]
    java.sql.SQLException: [Hyperion][Oracle JDBC Driver][Oracle]ORA-00932: inconsis
    tent datatypes: expected INTERVAL got NUMBER
    at hyperion.jdbc.base.BaseExceptions.createException(Unknown Source)
    at hyperion.jdbc.base.BaseExceptions.getException(Unknown Source)
    at hyperion.jdbc.oracle.OracleImplStatement.execute(Unknown Source)
    at hyperion.jdbc.base.BaseStatement.commonExecute(Unknown Source)
    at hyperion.jdbc.base.BaseStatement.executeUpdateInternal(Unknown Source
    at hyperion.jdbc.base.BasePreparedStatement.executeUpdate(Unknown Source
    at com.hyperion.planning.sql.HspSQLImpl.executeUpdate(Unknown Source)
    at com.hyperion.planning.sql.HspSQLImpl.executeUpdate(Unknown Source)
    at com.hyperion.planning.event.HspSysExtChangeHandler.actionPoller(Unkno
    wn Source)
    at com.hyperion.planning.event.HspSysExtChangeHandler.run(Unknown Source
    Error encountered with Database connection, recreating connections.
    Nested Exception: java.sql.SQLException: [Hyperion][Oracle JDBC Driver][Oracle]O
    RA-00932: inconsistent datatypes: expected INTERVAL got NUMBER
    software details:
    EAS server is up and working fine
    Hyperion planning 9.3.1
    Oracle database 9.2.0.1.0
    Regards,
    Ravi

    Now we have a new schema in place with a new username and password for the oldb database.
    This is the change in the datasource configuration (username change):
    Datasource name: newplan
    Select Database: Oracle
    Database details:
    Server: my database server (say 10.301.222.320)
    Port:1521
    Product: PLANNING
    database: oldb
    username:HYPPLAN
    password:***
    When I use the above configuration, below is the error log:
    Apr 1, 2008 11:39:49 AM org.apache.coyote.http11.Http11Protocol init
    INFO: Initializing Coyote HTTP/1.1 on http-8300
    Apr 1, 2008 11:39:49 AM org.apache.catalina.startup.Catalina load
    INFO: Initialization processed in 750 ms
    Apr 1, 2008 11:39:49 AM org.apache.catalina.core.StandardService start
    INFO: Starting service Catalina
    Apr 1, 2008 11:39:49 AM org.apache.catalina.core.StandardEngine start
    INFO: Starting Servlet Engine: Apache Tomcat/5.0.28
    Apr 1, 2008 11:39:49 AM org.apache.catalina.core.StandardHost start
    INFO: XML validation disabled
    Apr 1, 2008 11:39:49 AM org.apache.catalina.core.StandardHost getDeployer
    INFO: Create Host deployer for direct deployment ( non-jmx )
    Apr 1, 2008 11:39:49 AM org.apache.catalina.core.StandardHostDeployer install
    INFO: Installing web application at context path /HyperionPlanning from URL file
    :G:\Hyperion\deployments\Tomcat5\HyperionPlanning\webapps\HyperionPlanning
    Creating rebind thread to RMI
    Apr 1, 2008 11:39:53 AM org.apache.coyote.http11.Http11Protocol start
    INFO: Starting Coyote HTTP/1.1 on http-8300
    Apr 1, 2008 11:39:53 AM org.apache.jk.common.ChannelSocket init
    INFO: JK2: ajp13 listening on /0.0.0.0:8302
    Apr 1, 2008 11:39:53 AM org.apache.jk.server.JkMain start
    INFO: Jk running ID=0 time=0/16 config=null
    Apr 1, 2008 11:39:53 AM org.apache.catalina.startup.Catalina start
    INFO: Server startup in 4656 ms
    [INFO] AuthChallengeProcessor - basic authentication scheme selected
    [INFO] AuthChallengeProcessor - basic authentication scheme selected
    Setting Arbor path to: G:\Hyperion\common\EssbaseRTC\9.3.1
    Connection to the datasource created successfully.
    Error log starts here--------------------------------------
    in cpp-Created Application:planone 0
    Unable to create Analytical application. Exiting Application Creation.
    Exception in Application Creation :Unable to create Analytical application. Exit
    ing Application Creation.
    java.lang.IllegalStateException: Unable to create Analytical application. Exitin
    g Application Creation.
    at com.hyperion.planning.HspManageApplication.createApp(Unknown Source)
    at com.hyperion.planning.appdeploy.HspManageAppSession.createApplication
    (Unknown Source)
    at HspCreateApp.Handle(Unknown Source)
    at HspCreateApp.doPost(Unknown Source)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Appl
    icationFilterChain.java:237)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationF
    ilterChain.java:157)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperV
    alve.java:214)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.jav
    a:520)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(Standard
    ContextValve.java:198)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextV
    alve.java:152)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.jav
    a:520)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.j
    ava:137)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:104)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.j
    ava:118)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:102)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.jav
    a:520)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineVal
    ve.java:109)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.jav
    a:520)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
    at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:16
    0)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.ja
    va:675)
    at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadP
    ool.java:683)
    at java.lang.Thread.run(Unknown Source)
    java.lang.RuntimeException: Create application failed.
    at com.hyperion.planning.appdeploy.HspManageAppSession.createApplication
    (Unknown Source)
    at HspCreateApp.Handle(Unknown Source)
    at HspCreateApp.doPost(Unknown Source)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Appl
    icationFilterChain.java:237)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationF
    ilterChain.java:157)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperV
    alve.java:214)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.jav
    a:520)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(Standard
    ContextValve.java:198)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextV
    alve.java:152)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.jav
    a:520)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.j
    ava:137)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:104)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.j
    ava:118)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:102)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.jav
    a:520)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineVal
    ve.java:109)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValv
    eContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.jav
    a:520)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
    at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:16
    0)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.ja
    va:675)
    at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadP
    ool.java:683)
    at java.lang.Thread.run(Unknown Source)
    The error I am getting in the app wizard is "error occurred while creating application - please check the log".
    If I use the same schema, i.e. the same user ID and database for both the Planning system and the Planning application, I get the previous error.
    Regards,
    Ravi
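    Editor's note: ORA-00942 in the log above means the Planning repository tables do not exist in the schema the datasource points at, which matches Ravi's own observation that reconfiguring (and so recreating the tables) makes the error go away. As a hypothetical sanity check (the connection details are placeholders, and the HSP_ table prefix is the usual Planning convention, assumed here), you can verify from Java whether the configured schema actually contains the repository tables:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class CheckPlanningSchema {
            public static void main(String[] args) throws Exception {
                // Placeholder host/SID/credentials; use the datasource's values.
                // Requires the Oracle JDBC driver (ojdbc jar) on the classpath.
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:oldb", "HYPPLAN", "secret");
                // Planning repository tables are conventionally prefixed HSP_;
                // if this prints 0, the repository was never created in this schema.
                PreparedStatement ps = con.prepareStatement(
                        "SELECT COUNT(*) FROM user_tables WHERE table_name LIKE 'HSP\\_%' ESCAPE '\\'");
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    System.out.println("HSP_ tables visible to this user: " + rs.getInt(1));
                }
                con.close();
            }
        }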

  • Error while creating hyperion planning application in 11.1.1.1.0

    Hi All,
    I am using EPM System 11.1.1.1.0 and I get the following error while trying to create a new application using Hyperion Planning:
    Unable to find JDBC_CATALOG key for application: ABC1
    Connection to the datasource created successfully.
    in cpp -Created NonUnicode App:ABC1 0
    Unable to create Analytical application. Exiting Application Creation.
    Exception in Application Creation :Unable to create Analytical application. Exit
    ing Application Creation.
    java.lang.IllegalStateException: Unable to create Analytical application. Exitin
    g Application Creation.
    at com.hyperion.planning.HspManageApplication.createApp(Unknown Source)
    at com.hyperion.planning.appdeploy.HspManageAppSession.createApplication
    (Unknown Source)
    at HspCreateApp.Handle(Unknown Source)
    at HspCreateApp.doPost(Unknown Source)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Appl
    icationFilterChain.java:252)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationF
    ilterChain.java:173)
    at HspValidationFilter.doFilter(Unknown Source)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Appl
    icationFilterChain.java:202)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationF
    ilterChain.java:173)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperV
    alve.java:213)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextV
    alve.java:178)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.j
    ava:126)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.j
    ava:105)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineVal
    ve.java:107)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.jav
    a:148)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java
    :869)
    at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.p
    rocessConnection(Http11BaseProtocol.java:664)
    at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpo
    int.java:527)
    at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFol
    lowerWorkerThread.java:80)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadP
    ool.java:684)
    at java.lang.Thread.run(Unknown Source)
    java.lang.RuntimeException: Create application failed.
    at com.hyperion.planning.appdeploy.HspManageAppSession.createApplication
    (Unknown Source)
    at HspCreateApp.Handle(Unknown Source)
    at HspCreateApp.doPost(Unknown Source)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Appl
    icationFilterChain.java:252)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationF
    ilterChain.java:173)
    at HspValidationFilter.doFilter(Unknown Source)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Appl
    icationFilterChain.java:202)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationF
    ilterChain.java:173)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperV
    alve.java:213)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextV
    alve.java:178)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.j
    ava:126)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.j
    ava:105)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineVal
    ve.java:107)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.jav
    a:148)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java
    :869)
    at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.p
    rocessConnection(Http11BaseProtocol.java:664)
    at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpo
    int.java:527)
    at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFol
    lowerWorkerThread.java:80)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadP
    ool.java:684)
    at java.lang.Thread.run(Unknown Source)
    Error terminating Essbase connection: java.lang.NullPointerException
    If somebody could please help me resolve this issue, it would be great.
    Thanks in Advance.
    Alicia

    I suspect that this is caused by an "incorrect" datasource definition when you create a new one.
    I noticed that when I created an Essbase DB through the Essbase Enterprise Admin Service, it only allowed me to create one for localhost, not my server name, even though "Manage Data Source" validated the connection.
    This points to one of the myriad of apps inconsistently trying to resolve a server name.
    So, in EAS, see if you can connect to your local Essbase install and, if successful, use the exact same params for your Workspace application.
    Hope that helps.

  • Error while running WDJ application using jxl.jar

    Hi Experts,
    I am using jxl.jar in my Web Dynpro Java application in CE7.2. I added jxl.jar to the Java build path and placed the jar file in the lib directory of the Web Dynpro DC. It showed an error in the Development Configuration perspective while building, so I created another DC of type External Library, added jxl.jar to the libraries folder, right-clicked on that jar and published it as an archive. Then, in the Development Configuration perspective, I added the External Library project to my Web Dynpro DC, and it no longer shows an error while building.
    I deployed successfully, but when I run the application it shows the following error. Is there any fault in how I added the external jar to my DC? I can't understand where the problem is.
    Error:
    java.lang.ClassNotFoundException: jxl.Workbook -
    Loader Info -
    ClassLoader name: [com.drl.bomrecipe/bomrecipe] Loader hash code: 30a86ee9 Living status: alive Direct parent loaders: [system:Frame] [interface:webservices] [interface:cross] [interface:security] [interface:transactionext] [library:webservices_lib] [library:opensql] [library:jms] [library:ejb20] [service:p4] [service:ejb] [service:servlet_jsp] [sap.com/tcwdapi] [library:tcblexceptionlib] [library:tcblloggingapi] Resources: E:\usr\sap\CE7\J00\j2ee\cluster\apps\com.drl.bomrecipe\bomrecipe\servlet_jsp\webdynpro\resources\com.drl.bomrecipe\bomrecipe\root\WEB-INF\lib\com.drl.bomrecipe~bomrecipe.jar -
        at com.sap.engine.boot.loader.MultiParentClassLoader.loadClass(MultiParentClassLoader.java:272)
        at com.sap.engine.boot.loader.MultiParentClassLoader.loadClass(MultiParentClassLoader.java:241)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:367)
    Please help me. Thanks in advance.
    Regards,
    Pradeep Kumar G

    Hi Pradeep,
    Have you created both compilation and assembly public parts for your External Library DC, and have you added them both to your Used DCs list?
    Also remember: since you are using an assembly public part, 'Create Archive' and 'Deploy new archive and run' should not be used!
    Use DC Build and DC Deploy instead.
    Hope this helps!
    Robin van het Hof
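    For reference, a minimal sketch of the kind of jxl usage the failing class load points at (the file name is a placeholder; in a Web Dynpro application the input would normally come from an uploaded resource rather than a local file). Once the assembly public part is referenced as Robin describes and the DC is deployed with DC Deploy, jxl.Workbook should resolve at runtime instead of throwing ClassNotFoundException:

        import java.io.File;
        import jxl.Cell;
        import jxl.Sheet;
        import jxl.Workbook;

        public class JxlSmokeTest {
            public static void main(String[] args) throws Exception {
                // Placeholder path; replace with a real .xls file to test.
                Workbook workbook = Workbook.getWorkbook(new File("input.xls"));
                Sheet sheet = workbook.getSheet(0);   // first worksheet
                Cell cell = sheet.getCell(0, 0);      // column 0, row 0 (cell A1)
                System.out.println("A1 contains: " + cell.getContents());
                workbook.close();
            }
        }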

  • Difference in deposit date and first application date

    We are building an analytical application off of the EBS AR module. I have noticed a number of records that have a deposit date on the receipt that is at least two weeks before the first receivable_application record (even for Unapplied transactions). Could this be caused by a clearing issue with the bank? What triggers the record to go to the application? What reasons would cause these dates to be so far apart?

    Hi,
    Please check if there is an entry in Infotype 41 against the technical date of hiring.
    If there is an entry there, the system will pick it up from Infotype 41.
    Regards,
    Sameer

  • About Analytic Workspace

    Hi All,
    As I was reading about OLAP, I came across the Analytic Workspace (AW). I tried to find out when, and in what scenario, to use an AW. Is it just an alternative to materialized views, or something more?
    I could not find any good place that clarifies this. Can anybody give me some links or docs that explain it, or please explain it right here?
    Thanks for the help
    Mrutyunjay.

    An analytic workspace is a MOLAP cube within the 9i database.
    There are several ways of doing OLAP: ROLAP (based on relational stars) or MOLAP with its cubes. Both approaches share the vision of a smooth data model, terms like dimensions, measures and hierarchies, and have lots of things in common.
    Certainly there are areas where one design approach is more suitable than the other.
    MOLAP (and AWs) has a strength in powerful calculations, in calculation models (think of margins, contributions, ...), data spreading along hierarchies, and lots more.
    MOLAP is often used within analytic applications (think of controlling and finance departments) trying to optimize cash flow, profit and other company-vital KPIs.
    Best of all: with 9i OLAP you can choose between both and share the same set of metadata, user interfaces etc. ;)
    Hope this helps a bit,
    Thomas

  • OBI Analytics Application

    We have installed OBIEE on Windows XP and we have created some reports / dashboards using the seeded SH repository. We would now like to go to the next level of using Oracle BI Analytic Applications. To do that I have some questions.
    1) Do we need to install Oracle BI Analytic Applications on top of OBIEE?
    2) If so, where can I download the Financial BI Analytics?
    Please advise us of how to proceed further.
    Thanks
    -G

    Hi G,
    Check the following links:
    http://www.oracle.com/technology/software/products/ias/htdocs/101320bi.html and look for 'Oracle Business Intelligence Applications, v. 7.9.3'
    http://download.oracle.com/docs/cd/E10021_01/doc/bi.79/b31979.pdf
    Good Luck,
    Daan Bakboord

  • OBI Applications - Installation Questions

    We would like to install OBI Applications on a Windows XP box. We have already installed OBIEE on one of our boxes. Can we install OBI Applications on the same box on which we have installed OBIEE? Will we run into any performance issues if we have both OBIEE and OBI Applications on the same box? The configuration of the box is provided below.
    Xeon 2.8Ghz / 2GB Memory / Dual processor / 146GB *3 HDD (Raid 5)
    Let us know.
    Thanks,
    -G

    The BI Applications are a package. The Informatica ETL components, which amount to over 2,000 individual pieces, are dependent on Informatica.
    It is certainly possible to use BI EE with another ETL tool to create a similar environment but that defeats the whole idea of a packaged Analytical Application.
    BI Applications provide enormous ROI and reduce delivery schedules from Years to Months.
    Read up here and you will see why:
    http://www.oracle.com/solutions/business_intelligence/obia.html
    Yes, I do work for Oracle so I am biased but feature for feature, no one else comes close at all.

  • Integrating CE7.2 UWL into Portal 7.0

    Hi Experts,
    We have a CE7.2 portal that includes a UWL. My requirement is to integrate the CE7.2 UWL with the EP Portal 7.0 UWL, so that whatever tasks are created in CE are automatically reflected in EP Portal 7.0.
    If anybody has an idea, please help with this.
    Thanks
    Renu

    I need to mention one more thing.
    The client is not interested in using FPN, though I am not aware of the reason. BPM-related applications and some of the composite applications cannot be developed using the Portal; that's why they preferred the CE7.2 portal. But end users use the normal EP Portal 7.0, so whatever applications are developed in CE7.2 need to be accessed through the Portal.
    In the future they also want to maintain communication between both portals, because if they develop any BPM or composite applications later, they want to access them through the Portal.
    If you have a thought on how to get out of this situation, let me know ASAP.
    Thanks
    Renu

  • SAP BI INTERVIEW QUESTIONS

    Hi Friends,
    I recently faced some interviews. Please send answers to the questions:
    How many data fields and key fields can we create in a DSO?
    Can you overwrite key fields or data fields?
    Which update mode do we use in delta queue extraction (V1, V2 or V3)?
    Which message do we get when a transported request fails?
    What is the structural difference between an InfoCube and a DSO?
    Data loading takes a huge amount of time when we extract data from the source system to the BI system; how do we solve this? (Before, it took 3-4 hours; now data loading takes 4 days.)

    What is the difference between a Display Attribute and a Navigational Attribute? How do you make a display attribute and a navigational attribute?
    How do you load flat file data?
    How do you load hierarchy file data?
    What is HACR?
    How do you maintain HACR?
    If there is an issue in HACR, how do you resolve it?
    What is a Baby Cube?
    Why do we create Aggregates?
    What is the use of Aggregates?
    Is there any particular field on which we can create Aggregates, or can we maintain an Aggregate on any field?
    What are the different DSOs available? And what is the difference between those DSOs?
    What is replacement path?
    1. What are the extractor types?
    • Application Specific
    o BW Content FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-Generated Extractors
    LIS, FI-SL, CO-PA
    • Cross Application (Generic Extractors)
    o DB View, InfoSet, Function Module
    2. What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5 Select the DataSources
    o LBWE Maintain DataSources and Activate Extract Structures
    o LBWG Delete Setup Tables
    o OLI*BW Fill Setup Tables
    o RSA3 Check extraction and the data in Setup Tables
    o LBWQ Check the extraction queue
    o LBWF Log for LO Extract Structures
    o RSA7 BW Delta Queue Monitor
    3. How to create a connection with LIS InfoStructures?
    • LBW0 Connecting LIS InfoStructures to BW
    4. What is the difference between ODS, InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrites, and the data is in transparent tables; ideal for drilldown and RRI.
    • CUBE: Follows the star schema; we can only append data; ideal for primary reporting.
    • MultiProvider: Does not have physical data. It allows access to data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.
    5. What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data
    has been written to the PSA and before the transfer rules have been executed.
    It allows complex computations for a key figure or a characteristic. It has no
    return value. Its purpose is to execute preliminary calculations and to store
    them in global DataStructures. This structure or table can be accessed in the
    other routines. The entire DataPackage in the transfer structure format is used
    as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. It is
    like the Start Routine. It is independent of the DataSource. We can use this to
    define Global Data and Global Checks.
    6. What is the difference between start routine and update routine, when, how
    and why are they called?
    • The start routine can be used to access the InfoPackage, while update routines are used when updating the data targets.
    7. What is the table that is used in start routines?
    • The table structure will always be the structure of an ODS or InfoCube. For example, if it is an ODS, then the active table structure will be the table.
    8. Explain how you used start routines in your project.
    • Start routines are used for mass processing of records. In the start routine, all the records of the DataPackage are available for processing, so we can process all these records together. In one scenario, we wanted to apply a size % to the forecast data. For example, material M1 is forecasted to, say, 100 in May. After applying the size % (Small 20%, Medium 40%, Large 20%, Extra Large 20%), we wanted to have 4 records against the single record coming in from the InfoPackage. This is achieved in the start routine (see the sketch after this answer).
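    The answer above does not show the ABAP routine itself. Purely as an illustration of the record-explosion idea (a language-neutral sketch with made-up field names, not SAP's start-routine API):

        import java.util.ArrayList;
        import java.util.List;

        public class SizeSplit {
            static final String[] SIZES = {"S", "M", "L", "XL"};
            static final double[] SHARE = {0.20, 0.40, 0.20, 0.20};

            // One incoming forecast record becomes four size-weighted records,
            // mirroring what the start routine does to each record of the package.
            static List<String> explode(String material, String month, double qty) {
                List<String> out = new ArrayList<>();
                for (int i = 0; i < SIZES.length; i++) {
                    out.add(material + "/" + SIZES[i] + "/" + month + " -> " + qty * SHARE[i]);
                }
                return out;
            }

            public static void main(String[] args) {
                // The M1/May forecast of 100 from the answer above.
                explode("M1", "MAY", 100).forEach(System.out::println);
            }
        }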
    9. What are Return Tables?
    • When we want to return multiple records instead of a single value, we use the return table in the update routine. Example: if we have the total telephone expense for a cost center, using a return table we can get the expense per employee.
    10. How do the start routine and return table synchronize with each other?
    • The return table is used to return values following the execution of the start routine.
    11. What is the difference between V1, V2 and V3 updates?
    • V1 Update: It is a Synchronous update. Here the Statistics update is carried
    out at the same time as the document update (in the application
    tables).
    • V2 Update: It is an Asynchronous update. Statistics update and the Document
    update take place as different tasks.
    o V1 & V2 don't need scheduling.
    • Serialized V3 Update: The V3 collective update must be scheduled as a job
    (via LBWE). Here, document data is collected in the order it was created and
    transferred into the BW as a batch job. The transfer sequence may not be the
    same as the order in which the data was created in all scenarios. V3 update
    only processes the update data that is successfully processed with the V2
    update.
    12. What is compression?
    • It is a process used to delete the request IDs, and this saves space.
    13. What is Rollup?
    • This is used to load new DataPackages (requests) into the InfoCube
    aggregates. If we have not performed a rollup then the new InfoCube data will
    not be available while reporting on the aggregate.
    14. What is table partitioning and what are the benefits of partitioning in an
    InfoCube?
    • It is the method of dividing a table, which enables quick reference. SAP uses fact table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps to run the report faster as data is stored in the relevant partitions. Table maintenance also becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.
    15. How many extra partitions are created and why?
    • Two partitions are created: one for dates before the begin date and one for dates after the end date.
    16. What are the options available in transfer rule?
    • InfoObject
    • Constant
    • Routine
    • Formula
    17. How would you optimize the dimensions?
    • We should define as many dimensions as possible and we have to take care that
    no single dimension crosses more than 20% of the fact table size.
    18. What are Conversion Routines for units and currencies in the update rule?
    • Using this option we can write ABAP code for unit/currency conversion. If we enable this flag, then the unit of the key figure appears in the ABAP code as an additional parameter. For example, we can convert units in pounds to kilos (see the sketch after this answer).
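    As a trivial illustration of such a conversion (the conversion factor is the standard pound-to-kilogram definition; the method name is made up, and the real routine would be written in ABAP inside the update rule):

        public class UnitConversion {
            static final double KG_PER_POUND = 0.45359237; // exact definition

            // Convert a key figure recorded in pounds to kilograms,
            // as an update-rule conversion routine would.
            static double poundsToKilos(double pounds) {
                return pounds * KG_PER_POUND;
            }

            public static void main(String[] args) {
                System.out.println("100 lb = " + poundsToKilos(100) + " kg");
            }
        }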
    19. Can an InfoObject be an InfoProvider, how and why?
    • Yes, when we want to report on Characteristics or Master Data. We have to
    right click on the InfoArea and select "Insert characteristic as data
    target". For example, we can make 0CUSTOMER as an InfoProvider and report
    on it.
    20. What is Open Hub Service?
    • The Open Hub Service enables us to distribute data from an SAP BW system into
    external Data Marts, analytical applications, and other applications. We can
    ensure controlled distribution using several systems. The central object for
    exporting data is the InfoSpoke. We can define the source and the target object
    for the data. BW becomes a hub of an enterprise data warehouse.
    The distribution of data becomes clear through central monitoring from the
    distribution status in the BW system.
    21. How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination
    requirement.
    22. What is ODS?
    • Operational Data Store: it is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.
    23. What are BW Statistics and what is their use?
    • They are a group of Business Content InfoCubes which are used to measure performance for query and load monitoring. They also show the usage of aggregates, OLAP and warehouse management.
    http://www.ittestpapers.com/articles/713/3/SAP-BW-Interview-Questions---Part-A/Page3.html
    Communication Structure and Transfer Rules
    • Create an InfoPackage
    • Load Data
    25. What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)
    Q) Under which menu path is the Test Workbench to be found, including in
    earlier Releases?
    The menu path is: Tools - ABAP Workbench - Test - Test Workbench.
    Q) I want to delete a BEx query that is in the Production system through a request. Is anyone aware how to do this?
    A) Have you tried the RSZDELETE transaction?
    Q) Errors while monitoring process chains.
    A) These occur during data loading. In process chains you add many process types; for example, after loading data into an InfoCube you roll up data into aggregates, and this rollup is a process type that you place after the process type for loading data into the cube. This rollup into aggregates might fail.
    Another one: after you load data into an ODS, you activate the ODS data (another process type); this might also fail.
    Q) In Monitor -> Details (Header/Status/Details) -> Under Processing (data packet): Everything OK -> Context menu of Data Package 1 (1 Records): Everything OK -> Simulate update. (Here we can debug update rules or transfer rules.)
    SM50 -> Program/Mode -> Program -> Debugging, and debug this work process.
    Q) PSA cleansing.
    A) You know how to edit the PSA. I don't think you can delete single records; you have to delete the entire PSA data for a request.
    Q) Can we make a DataSource support delta?
    A) If this is a custom (user-defined) DataSource, you can make the DataSource delta-enabled. While creating the DataSource from RSO2, after entering the DataSource name and pressing Create, there is a button at the top of the next screen that says Generic Delta. If you want more details, there is a chapter on this in the extraction book; you'll find it in the last pages.
    Generic delta services:
    Support delta extraction for generic extractors according to:
    Time stamp
    Calendar day
    Numeric pointer, such as document number & counter
    Only one of these attributes can be set as a delta attribute.
    Delta extraction is supported for all generic extractors, such as tables/views, SAP Query and function modules.
    The delta queue (RSA7) allows you to monitor the current status of the delta attribute. (A conceptual sketch of a numeric-pointer delta follows below.)
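    For intuition about the numeric-pointer option: the extractor remembers the highest pointer value already transferred and, on the next delta request, selects only records with a higher value. A conceptual sketch of that bookkeeping (not SAP code; the in-memory "source table" and field names are made up):

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class NumericPointerDelta {
            // Stand-in for the source table: docnum -> payload;
            // docnum is assumed to be monotonically increasing.
            static final Map<Long, String> SOURCE = new LinkedHashMap<>();
            static {
                SOURCE.put(1L, "order A");
                SOURCE.put(2L, "order B");
                SOURCE.put(3L, "order C");
            }

            // Extract only rows beyond the last confirmed pointer and advance it.
            static long extractDelta(long lastPointer) {
                long newPointer = lastPointer;
                for (Map.Entry<Long, String> row : SOURCE.entrySet()) {
                    if (row.getKey() > lastPointer) {
                        System.out.println("delta row " + row.getKey() + ": " + row.getValue());
                        newPointer = Math.max(newPointer, row.getKey());
                    }
                }
                return newPointer;
            }

            public static void main(String[] args) {
                long pointer = extractDelta(1); // docnums 1 and below already extracted
                System.out.println("new pointer: " + pointer);
            }
        }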
    Q) Workbooks, as a general rule, should be transported with the role.
    Here are a couple of scenarios:
    1. If both the workbook and its role have been previously transported, then the role does not need to be part of the transport.
    2. If the role exists in both dev and the target system but the workbook has never been transported, then you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, an additional step has to be taken after import: locate the workbook ID via table RSRWBINDEXT (in dev, and verify the same exists in the target system) and manually add it to the role in the target system via transaction code PFCG -- ALWAYS use Ctrl+C/Ctrl+V copy/paste for manually adding!
    3. If the role does not exist in the target system, you should transport both the role and the workbook. Keep in mind that a workbook is an object unto itself and has no dependencies on other objects. Thus, you do not receive an error message from the transport of 'just a workbook' -- even though it may not be visible, it will exist (verified via table RSRWBINDEXT).
    Overall, as a general rule, you should transport roles with workbooks.
    Q) How much time does it take to extract 1 million (10 lakh) records into an InfoCube?
    A. It depends: if you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.
    Q) What are the five ASAP methodologies?
    A: Project Preparation, Business Blueprint, Realization, Final Preparation & Go-Live and Support.
    1. Project Preparation: In this phase, decision makers define clear project objectives and an efficient decision-making process (i.e., discussions with the client, like what his needs and requirements are, etc.). Project managers will be involved in this phase (I guess).
    A Project Charter is issued and an implementation strategy is outlined in this phase.
    2. Business Blueprint: It is a detailed documentation of your company's requirements (i.e., the objects we need to develop are modified depending on the client's requirements).
    3. Realization: In this phase the implementation of the project takes place (development of objects etc.), and we are involved in the project from here on.
    4. Final Preparation: Final preparation before going live, i.e., testing, conducting pre-go-live checks, end-user training etc.
    End-user training is given at the client site: you train the users how to work with the new environment, as they are new to the technology.
    5. Go-Live & Support: The project has gone live and is in production. The project team supports the end users.
    Q) What is the landscape of R/3 and what is the landscape of BW? Not sure about the landscape of R/3.
    The landscape of BW: you have the development system, the testing system and the production system.
    Development system: All the implementation work is done in this system (i.e., analysis, object development, modification etc.), and from here the objects are transported to the testing system; but before transporting, an initial test known as unit testing (testing of objects) is done in the development system.
    Testing/Quality system: Quality checks and integration testing are done in this system.
    Production system: All the extraction takes place in this system.
    Q) How do you measure the size of an InfoCube?
    A: In number of records.
    Q) Difference between InfoCube and ODS?
    A: An InfoCube is structured as a star schema (extended), where a fact table is surrounded by different dimension tables that are linked with DIM IDs. Data-wise, you will have aggregated data in the cubes, with no overwrite functionality.
    An ODS is a flat structure (flat table) with no star schema concept, holding granular data (detailed level), with overwrite functionality.
    Flat file DataSources do not support 0RECORDMODE in extraction.
    X before, - after, N new, A add, D delete, R reverse.
    Q) Difference between display attributes and navigational attributes?
    A: A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain a navigational attribute in the cube as a characteristic (that is the advantage) to drill down.
    Q. SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW TO CORRECT IT?
    A: But how is that possible? If you load it manually twice, then you can delete it by request ID.
    Q. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?
    Sure you can. An ODS is nothing but a table.
    Q. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
    A) Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.
    Q. BRIEFLY DESCRIBE THE DATA FLOW IN BW.
    A) Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.
    Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER
    RULES?
    Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
    FULL and DELTA.
    Q) AS WE USE Sbwnn, sbiw1, sbiw2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN LO-COCKPIT?
    There is no LIS in the LO Cockpit. We have DataSources that can be maintained (append fields). Refer to the white paper on LO-Cockpit extractions.
    Q) Why do we delete the setup tables (LBWG) and fill them (OLI*BW)?
    A) Initially we don't delete the setup tables, but when we make a change to the extract structure we do. We are changing the extract structure, which means there are some newly added fields that were not there before. So to get the required data (i.e., only the data that is required, avoiding redundancy) we delete and then refill the setup tables.
    This also refreshes the statistical data.
    The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK and VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one creates or modifies application data, at least until the setup tables are filled.
    Q) SIGNIFICANCE of ODS?
    It holds granular data (detailed level).
    Q) WHERE IS THE PSA DATA STORED?
    In the PSA table.
    Q) WHAT IS DATA SIZE?
    The volume of data one data target holds (in no. of records)
    Q) Different types of INFOCUBES.
    Basic, Virtual (remote, SAP remote and multi).
    A Virtual Cube is used, for example, if you consider railway reservations, where all the information has to be updated online. For a Virtual Cube you have to write the function module that links to the table; the Virtual Cube is like a structure, and whenever the table is updated the Virtual Cube fetches the data from the table and displays the report online. FYI, you can get more information at https://www.sdn.sap.com/sdn/index.sdn; search for "Designing Virtual Cube" and you will find good material on designing the function module.
    Q) INFOSET QUERY.
    Can be made of ODSs and characteristic InfoObjects with master data.
    Q) IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
    In R/3 or in BW? 2 in R/3 and 2 in BW
    Q) ROUTINES?
    They exist at the InfoObject level: transfer routines, update routines and the start routine.
    Q) BRIEF SOME STRUCTURES USED IN BEX.
    Rows and Columns, you can create structures.
    Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
    The different variables are Texts, Formulas, Hierarchies, Hierarchy Nodes & Characteristic Values.
    The variable types are:
    Manual entry / default value
    Replacement path
    SAP exit
    Customer exit
    Authorization
    Q) HOW MANY LEVELS CAN YOU GO TO IN REPORTING?
    You can drill down to any level by using navigational attributes and jump targets.
    Q) WHAT ARE INDEXES?
    Indexes are database indexes, which help in retrieving data quickly.
    Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
    Help! Refer documentation
    Q) IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
    No.
    Q) WHAT IS THE SIGNIFICANCE OF KPIs?
    KPIs indicate the performance of a company. They are key figures.
    Q) AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?
    After image (correct me if I am wrong).
    Q) REPORTING AND RESTRICTIONS.
    Help! Refer documentation.
    Q) TOOLS USED FOR PERFORMANCE TUNING.
    ST22, number ranges, deleting indexes before a load, etc.
    Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA DAILY?
    There should be some tool to run the job daily (SM37 jobs).
    Q) AUTHORIZATIONS.
    Profile generator
    Q) WEB REPORTING.
    What are you expecting??
    Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?
    Of course.
    Q) PROCEDURES OF REPORTING ON MULTICUBES
    Refer help. What are you expecting? MultiCube works on Union condition
    Q) EXPLAIN TRANSPORTATION OF OBJECTS.
    Dev -> Q and Dev -> P
    Q) What types of partitioning are there for BW?
    There are two partitioning performance aspects for BW (Cube & PSA):
    A) Query Data Retrieval Performance Improvement: Partitioning by (say) date range improves data retrieval by making best use of database [date range] execution plans and indexes (of, say, the Oracle database engine).
    B) Transactional Load Partitioning Improvement: Partitioning based on expected load volumes and data element sizes improves data loading into the PSA and Cubes by InfoPackages (e.g. without timeouts).
    Q) How can I compare data in R/3 with data in a BW Cube after the daily delta
    loads? Are there any standard procedures for checking them or matching the
    number of records?
    A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the
    number of records extracted. Then go to BW Monitor to check the number of
    records in the PSA and check to see if it is the same & also in the monitor
    header tab.
    A) RSA3 is a simple extractor checker program that allows you to rule out
    extraction problems in R/3. It is simple to use, but only really tells you if the
    extractor works. Since records that get updated into Cubes/ODS structures are
    controlled by Update Rules, you will not be able to determine what is in the
    Cube compared to what is in the R/3 environment. You will need to compare
    records on a 1:1 basis against records in R/3 transactions for the functional
    area in question. I would recommend enlisting the help of the end user community
    to assist since they presumably know the data.
    To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute
    and you will see the record count, you can also go to display that data. You
    are not modifying anything so what you do in RSA3 has no effect on data quality
    afterwards. However, it will not tell you how many records should be expected
    in BW for a given load. You have that information in the monitor RSMO during
    and after data loads. From RSMO for a given load you can determine how many
    records were passed through the transfer rules from R/3, how many targets were
    updated, and how many records passed through the Update Rules. It also gives
    you error messages from the PSA.
    Q) Types of Transfer Rules?
    A) Field to Field mapping, Constant, Variable & routine.
    Q) Types of Update Rules?
A) The same types as in transfer rules, plus the routine with a return table (enabled via a checkbox).
    Q) Transfer Routine?
A) Routines that we write in transfer rules.
    Q) Update Routine?
A) Routines that we write in update rules.
    Q) What is the difference between writing a routine in transfer rules and
    writing a routine in update rules?
A) If you are using the same InfoSource to update data in more than one data
target, it is better to write the routine in the transfer rules, because one
InfoSource can be assigned to more than one data target, whereas whatever logic
you write in the update rules is specific to one particular data target.
    Q) Routine with Return Table.
A) Update rule routines generally have only one return value. However, you can
create a routine on the key figure calculation tab strip by choosing the
checkbox Return table. The corresponding key figure routine then no longer has
a return value, but a return table. You can then generate as many key figure
values as you like from one data record (see the sketch below).
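To make this concrete, here is a hedged sketch of such a routine body. BW generates the real FORM signature and structure names for the concrete InfoCube/InfoSource; /BIC/VZSALEST, /BIC/CSZSALES, FISCPER and AMOUNT below are invented placeholders, and the 12-way split is just an example.

* Sketch only: key-figure routine with "Return table" checked (BW 3.x).
* All structure and field names stand in for the ones BW generates.
FORM compute_amount
  TABLES   result_table   STRUCTURE /bic/vzsalest   " generated result structure
  USING    comm_structure STRUCTURE /bic/cszsales   " generated comm. structure
  CHANGING returncode     LIKE sy-subrc
           abort          LIKE sy-subrc.

  DATA ls_result LIKE LINE OF result_table.

* Turn one yearly source record into twelve monthly target records.
  DO 12 TIMES.
    CLEAR ls_result.
    ls_result-fiscper = sy-index.                   " hypothetical period field
    ls_result-amount  = comm_structure-amount / 12. " hypothetical key figure
    APPEND ls_result TO result_table.
  ENDDO.

  returncode = 0.
  abort      = 0.
ENDFORM.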
    Q) Start routines?
A) Start routines can be written in both update rules and transfer rules. Suppose
you want to restrict (delete) some records based on conditions before they get
loaded into the data targets; you can then specify this in the start routine of
the update rules.
Ex: DELETE DATA_PACKAGE WHERE ..., which deletes records based on the condition
(see the sketch below).
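For illustration, the statement sits inside the generated start routine form roughly as shown here. DATA_PACKAGE is the internal table of incoming records that the framework passes in; the field /BIC/ZSTATUS and the value 'R' are invented for this example.

* Fragment of a BW 3.x start routine (update or transfer rules).
* /BIC/ZSTATUS and the value 'R' are hypothetical.

* Remove all records that should not reach the data target:
  DELETE data_package WHERE /bic/zstatus <> 'R'.

* Leave ABORT at 0 so the load continues with the remaining records;
* a value <> 0 would cancel the whole data package.
  abort = 0.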
    Q) X & Y Tables?
    X-table = A table to link material SIDs with SIDs for time-independent
    navigation attributes.
    Y-table = A table to link material SIDs with SIDS for time-dependent navigation
    attributes.
There are four types of SID tables:
X: time-independent navigation attribute SID table
Y: time-dependent navigation attribute SID table
H: hierarchy SID table
I: hierarchy structure SID table
    Q) Filters & Restricted Key figures (real time example)
For an SD cube you can have restricted key figures such as billed quantity,
billing value and number of billing documents.
    Q) Line-Item Dimension (give me an real time example)
Line-item dimension: invoice number or document number is a real-time example (very high-cardinality characteristics).
    Q) What does the number in the 'Total' column in Transaction RSA7 mean?
    A) The 'Total' column displays the number of LUWs that were written in the
    delta queue and that have not yet been confirmed. The number includes the LUWs
    of the last delta request (for repetition of a delta request) and the LUWs for
    the next delta request. A LUW only disappears from the RSA7 display when it has
    been transferred to the BW System and a new delta request has been received
    from the BW System.
    Q) How to know in which table (SAP BW) contains Technical Name / Description
    and creation data of a particular Reports. Reports that are created using BEx
    Analyzer.
A) There is no single such table in BW. If you want to know such details while
you are opening a particular query, press the Properties button and you will
see all the details you wanted.
You will find the information about technical names and descriptions of
queries in the following tables: the directory of all reports (table RSRREPDIR)
and the directory of the reporting component elements (table RSZELTDIR). For
workbooks and their connections to queries, check the where-used list for
reports in workbooks (table RSRWORKBOOK) and the titles of Excel workbooks in
the InfoCatalog (table RSRWBINDEXT).
    Q) What is a LUW in the delta queue?
    A) A LUW from the point of view of the delta queue can be an individual
    document, a group of documents from a collective run or a whole data packet of
    an application
    extractor.
    Q) Why does the number in the 'Total' column in the overview screen of
    Transaction RSA7 differ from the number of data records that is displayed when
    you call the detail view?
    A) The number on the overview screen corresponds to the total of LUWs (see also
    first question) that were written to the qRFC queue and that have not yet been
    confirmed. The detail screen displays the records contained in the LUWs. Both,
    the records belonging to the previous delta request and the records that do not
    meet the selection conditions of the preceding delta init requests are filtered
    out. Thus, only the records that are ready for the next delta request are
    displayed on the detail screen. In the detail screen of Transaction RSA7, a
    possibly existing customer exit is not taken into account.
    Q) Why does Transaction RSA7 still display LUWs on the overview screen after
    successful delta loading?
    A) Only when a new delta has been requested does the source system learn that
    the previous delta was successfully loaded to the BW System. Then, the LUWs of
    the previous delta may be confirmed (and also deleted). In the meantime, the
    LUWs must be kept for a possible delta request repetition. In particular, the
    number on the overview screen does not change when the first delta was loaded
    to the BW System.
    Q) Why are selections not taken into account when the delta queue is filled?
    A) Filtering according to selections takes place when the system reads from the
    delta queue. This is necessary for reasons of performance.
    Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has
    also been loaded successfully?
    It is most likely that this is a DataSource that does not send delta data to
    the BW System via the delta queue but directly via the extractor (delta for
    master data using ALE change pointers). Such a DataSource should not be
    displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
    Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the
    loading procedure from the delta queue?
    A) The impact is limited. If performance problems are related to the loading
    process from the delta queue, then refer to the application-specific notes (for
    example in the CO-PA area, in the logistics cockpit area and so on).
    Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as
    effective for the delta queue as for a full update. Please note, however, that
    LUWs are not split during data loading for consistency reasons. This means that
    when very large LUWs are written to the DeltaQueue, the actual package size may
    differ considerably from the MAXSIZE and MAXLINES parameters.
    Q) Why does it take so long to display the data in the delta queue (for example
    approximately 2 hours)?
    A) With Plug In 2001.1 the display was changed: the user has the option of
    defining the amount of data to be displayed, to restrict it, to selectively
    choose the number of a data record, to make a distinction between the 'actual'
    delta data and the data intended for repetition and so on.
    Q) What is the purpose of function 'Delete data and meta data in a queue' in
    RSA7? What exactly is deleted?
    A) You should act with extreme caution when you use the deletion function in
    the delta queue. It is comparable to deleting an InitDelta in the BW System and
    should preferably be executed there. You do not only delete all data of this
    DataSource for the affected BW System, but also lose the entire information
    concerning the delta initialization. Then you can only request new deltas after
    another delta initialization.
    When you delete the data, the LUWs kept in the qRFC queue for the corresponding
    target system are confirmed. Physical deletion only takes place in the qRFC
    outbound queue if there are no more references to the LUWs.
    The deletion function is for example intended for a case where the BW System,
    from which the delta initialization was originally executed, no longer exists
    or can no longer be accessed.
    Q) Why does it take so long to delete from the delta queue (for example half a
    day)?
    A) Import PlugIn 2000.2 patch 3. With this patch the performance during
    deletion is considerably improved.
    Q) Why is the delta queue not updated when you start the V3 update in the
    logistics cockpit area?
    A) It is most likely that a delta initialization had not yet run or that the
    delta initialization was not successful. A successful delta initialization (the
    corresponding request must have QM status 'green' in the BW System) is a
    prerequisite for the application data being written in the delta queue.
    Q) What is the relationship between RSA7 and the qRFC monitor (Transaction
    SMQ1)?
    A) The qRFC monitor basically displays the same data as RSA7. The internal
    queue name must be used for selection on the initial screen of the qRFC
monitor. This is made up of the prefix 'BW', the client and the short name of
the DataSource. For DataSources whose names are 19 characters long or shorter,
    the short name corresponds to the name of the DataSource. For DataSources whose
    name is longer than 19 characters (for delta-capable DataSources only possible
    as of PlugIn 2001.1) the short name is assigned in table ROOSSHORTN.
    In the qRFC monitor you cannot distinguish between repeatable and new LUWs.
    Moreover, the data of a LUW is displayed in an unstructured manner there.
Q) Why is there data in the delta queue although the V3 update was not started?
A) The data was posted in the background. In that case, the records are updated directly in the
    delta queue (RSA7). This happens in particular during automatic goods receipt
    posting (MRRS). There is no duplicate transfer of records to the BW system. See
    Note 417189.
    Q) Why does button 'Repeatable' on the RSA7 data details screen not only show
    data loaded into BW during the last delta but also data that were newly added,
    i.e. 'pure' delta records?
A) The system was programmed so that the request in repeat mode fetches both the
actually repeatable (old) data and the new data from the source system.
    Q) I loaded several delta inits with various selections. For which one is the
    delta loaded?
    A) For delta, all selections made via delta inits are summed up. This means, a
    delta for the 'total' of all delta initializations is loaded.
    Q) How many selections for delta inits are possible in the system?
    A) With simple selections (intervals without complicated join conditions or
    single values), you can make up to about 100 delta inits. It should not be
    more.
    With complicated selection conditions, it should be only up to 10-20 delta
    inits.
    Reason: With many selection conditions that are joined in a complicated way,
    too many 'where' lines are generated in the generated ABAP
    source code that may exceed the memory limit.
Q) I intend to copy the source system, i.e. make a client copy. What will
happen with my delta? Should I initialize again after that?
    A) Before you copy a source client or source system, make sure that your deltas
    have been fetched from the DeltaQueue into BW and that no delta is pending.
    After the client copy, an inconsistency might occur between BW delta tables and
    the OLTP delta tables as described in Note 405943. After the client copy, Table
    ROOSPRMSC will probably be empty in the OLTP since this table is
    client-independent. After the system copy, the table will contain the entries
    with the old logical system name that are no longer useful for further delta
    loading from the new logical system. The delta must be initialized in any case
    since delta depends on both the BW system and the source system. Even if no
    dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you
should expect that the delta has to be initialized after the copy.
    Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of
    processes?
    A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW
    queues only after informing the BW Support or only if this is explicitly
    requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.
    Q) Despite of the delta request being started after completion of the
    collective run (V3 update), it does not contain all documents. Only another
    delta request loads the missing documents into BW. What is the cause for this
    "splitting"?
    A) The collective run submits the open V2 documents for processing to the task
    handler, which processes them in one or several parallel update processes in an
    asynchronous way. For this reason, plan a sufficiently large "safety time
    window" between the end of the collective run in the source system and the
    start of the delta request in BW. An alternative solution where this problem
    does not occur is described in Note 505700.
    Q) Despite my deleting the delta init, LUWs are still written into the
    DeltaQueue?
    A) In general, delta initializations and deletions of delta inits should always
    be carried out at a time when no posting takes place. Otherwise, buffer
    problems may occur: If a user started the internal mode at a time when the
    delta initialization was still active, he/she posts data into the queue even
    though the initialization had been deleted in the meantime. This is the case in
    your system.
    Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some
    entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What
    do these statuses mean? Which values in the field 'Status' mean what and which
    values are correct and which are alarming? Are the statuses BW-specific or
    generally valid in qRFC?
    A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was read
    once either in a delta request or in a repetition of the delta request.
    However, this does not mean that the record has successfully reached the BW
    yet. The status READY in the TRFCQOUT and RECORDED in the ARFCSSTATE means that
    the record has been written into the DeltaQueue and will be loaded into the BW
    with the next delta request or a repetition of a delta. In any case only the
    statuses READ, READY and RECORDED in both tables are considered to be valid.
    The status EXECUTED in TRFCQOUT can occur temporarily. It is set before
    starting a DeltaExtraction for all records with status READ present at that
    time. The records with status EXECUTED are usually deleted from the queue in
    packages within a delta request directly after setting the status before
    extracting a new delta. If you see such records, it means that either a process
    which is confirming and deleting records which have been loaded into the BW is
    successfully running at the moment, or, if the records remain in the table for
    a longer period of time with status EXECUTED, it is likely that there are
    problems with deleting the records which have already been successfully been
    loaded into the BW. In this state, no more deltas are loaded into the BW. Every
    other status is an indicator for an error or an inconsistency. NOSEND in SMQ1
    means nothing (see note 378903).
Only the value 'U' in the field 'NOSEND' of table TRFCQOUT is a cause for concern.
    Q) The extract structure was changed when the DeltaQueue was empty. Afterwards
    new delta records were written to the DeltaQueue. When loading the delta into
    the PSA, it shows that some fields were moved. The same result occurs when the
    contents of the DeltaQueue are listed via the detail display. Why are the data
    displayed differently? What can be done?
    Make sure that the change of the extract structure is also reflected in the
database and that all servers are synchronized. We recommend resetting the
buffers using transaction $SYNC. If the extract structure change is not
    communicated synchronously to the server where delta records are being created,
    the records are written with the old structure until the new structure has been
    generated. This may have disastrous consequences for the delta.
    When the problem occurs, the delta needs to be re-initialized.
    Q) How and where can I control whether a repeat delta is requested?
    A) Via the status of the last delta in the BW Request Monitor. If the request
    is RED, the next load will be of type 'Repeat'. If you need to repeat the last
    load for certain reasons, set the request in the monitor to red manually. For
    the contents of the repeat see Question 14. Delta requests set to red despite
    of data being already updated lead to duplicate records in a subsequent repeat,
    if they have not been deleted from the data targets concerned before.
    Q) As of PI 2003.1, the Logistic Cockpit offers various types of update
    methods. Which update method is recommended in logistics? According to which
    criteria should the decision be made? How can I choose an update method in
    logistics?
    See the recommendation in Note 505700.
    Q) Are there particular recommendations regarding the data volume the
    DeltaQueue may grow to without facing the danger of a read failure due to
    memory problems?
    A) There is no strict limit (except for the restricted number range of the
    24-digit QCOUNT counter in the LUW management table - which is of no practical
    importance, however - or the restrictions regarding the volume and number of
    records in a database table).
When estimating "smooth" limits, both the number of LUWs and the average data
volume per LUW are important. As a rule, we recommend bundling data
    (usually documents) already when writing to the DeltaQueue to keep number of
    LUWs small (partly this can be set in the applications, e.g. in the Logistics
    Cockpit). The data volume of a single LUW should not be considerably larger
    than 10% of the memory available to the work process for data extraction
    (in a 32-bit architecture with a memory volume of about 1GByte per work
    process, 100 Mbytes per LUW should not be exceeded). That limit is of rather
    small practical importance as well since a comparable limit already applies
    when writing to the DeltaQueue. If the limit is observed, correct reading is
    guaranteed in most cases.
    If the number of LUWs cannot be reduced by bundling application transactions,
    you should at least make sure that the data are fetched from all connected BWs
    as quickly as possible. But for other, BW-specific, reasons, the frequency
    should not be higher than one DeltaRequest per hour.
    To avoid memory problems, a program-internal limit ensures that never more than
    1 million LUWs are read and fetched from the database per DeltaRequest. If this
    limit is reached within a request, the DeltaQueue must be emptied by several
    successive DeltaRequests. We recommend, however, to try not to reach that limit
    but trigger the fetching of data from the connected BWs already when the number
    of LUWs reaches a 5-digit value.
    Q) I would like to display the date the data was uploaded on the
    report. Usually, we load the transactional data nightly. Is there any easy way
    to include this information on the report for users? So that they know the
    validity of the report.
    A) If I understand your requirement correctly, you want to display the date on
    which data was loaded into the data target from which the report is being
executed. If so, configure your workbook to display the text elements in
the report. This shows the 'Relevance of Data' field, which is the date on which
the data load took place.
Q) Can we filter the fields at the transfer structure?
A) Yes; in the transfer structure maintenance you select which DataSource fields are transferred to BW.
Q) Can we load data directly into an InfoObject without extraction? Is it
possible?
Yes. We can copy from another InfoObject if it is the same, and we can load the
data from the PSA if it is already there.
Q) HOW MANY DAYS CAN WE KEEP THE DATA IN THE PSA IF LOADS ARE SCHEDULED DAILY,
WEEKLY AND MONTHLY?
a) The retention time is configurable.
Q) HOW CAN YOU GET THE DATA FROM THE CLIENT IF YOU ARE WORKING ON OFFSHORE
PROJECTS? THROUGH WHICH NETWORK?
a) Through a VPN (Virtual Private Network). A VPN is a kind of network through
which we can connect from offshore to the client systems via RAS
(Remote Access Server).
Q) HOW DO YOU ANALYZE THE PROJECT AT FIRST?
Prepare the project plan and environment
Define project management standards and procedures
Define implementation standards and procedures
Testing & go-live + support.
Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA AT THE SAME TIME TO ALL
CUBES; IF ONE CUBE GETS A LOCK ERROR, HOW CAN YOU RECTIFY THE ERROR?
Go to transaction SM66, see which process holds the lock and note its PID, then
go to transaction SM12 and unlock it. This happens when lock errors occur
during scheduled loads.
    Q) Can anybody tell me how to add a navigational attribute in the BEx report in
    the rows?
A) Expand the dimension in the left-hand panel (the InfoCube panel), select the
navigational attribute, then drag and drop it into the rows panel.
Q) ARE THERE ANY TRANSACTION CODES LIKE SMPT OR STMT?
In current systems (BW 3.0B and R/3 4.6B) these transaction codes do not exist!
    Q) WHAT IS TRANSACTIONAL CUBE?
    A) Transactional InfoCubes differ from standard InfoCubes in that the former
    have an improved write access performance level. Standard InfoCubes are
    technically optimized for read-only access and for a comparatively small number
    of simultaneous accesses. Instead, the transactional InfoCube was developed to
    meet the demands of SAP Strategic Enterprise Management (SEM), meaning that,
    data is written to the InfoCube (possibly by several users at the same time)
    and re-read as soon as possible. Standard Basic cubes are not suitable for
    this.
    Q) Is there any way to delete cube contents within update rules from an ODS
    data source? The reason for this would be to delete (or zero out) a cube record
    in an "Open Order" cube if the open order quantity was 0.
    I've tried using the 0recordmode but that doesn't work. Also, would it
    be easier to write a program that would be run after the load and delete
    the records with a zero open qty?
A) In a START routine for the update rules you can write ABAP code.
A) Yes, you can do it. Create a start routine in the update rule.
Strictly speaking it is not "deleting cube contents with update rules"; it is
only possible to prevent some content from being updated into the InfoCube
using the start routine. Loop over all the records and delete those that meet
the condition "the open order quantity is 0" (see the sketch below). You also
have to consider before and after images in the case of a delta upload: there
you might delete the change record, keep the old one, and be left with wrong
information after the change.
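A minimal start-routine sketch for the scenario above, assuming a hypothetical quantity field OPENQTY in the data package:

* Drop records whose open order quantity is zero before they are
* updated into the cube. OPENQTY is a hypothetical field name.
* As noted above, be careful with before/after images in a delta
* load, or you may filter the change record and keep stale data.
  DELETE data_package WHERE openqty = 0.
  abort = 0.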
    Q) I am not able to access a node in hierarchy directly using variables for
    reports. When I am using Tcode RSZV it is giving a message that it doesn't
    exist in BW 3.0 and it is embedded in BEx. Can any one tell me the other
    options to get the same functionality in BEx?
A) Tcode RSZV was used in versions earlier than 3.0B only. From 3.0B onwards,
it is possible in the Query Designer (BEx) itself. Just right-click on the
InfoObject that you want to use as a variable and proceed further by selecting
the variable type and processing type.

  • BW Interview Questions 2

    Hi,
    Here are some BW interview questions. Make sure you have prepared for all the q's before going for an interview.
    1) Please describe your experience with BEx (Business Explorer)
A) Rate your level of experience with BEx and the rationale for your self-rating
    B) How many queries have you developed? :
    C) How many reports have you written?
    D) How many workbooks have you developed?
    E) Experience with jump targets (OLTP, use jump target)
    F) Describe experience with BW-compatible ETL tools (e.g. Ascential)
    2) Describe your experience with 3rd party report tools (Crystal Decisions, Business Objects a plus)
    3) Describe your experience with the design and implementation of standard & custom InfoCubes.
    1. How many InfoCubes have you implemented from start to end by yourself (not with a team)?
2. Of these cubes, how many characteristics (including attributes) did the largest one have?
3. How much customization was done on the InfoCubes you have implemented?
    4) Describe your experience with requirements definition/gathering.
    5) What experience have you had creating Functional and Technical specifications?
    6) Describe any testing experience you have:
    7) Describe your experience with BW extractors
    1. How many standard BW extractors have you implemented?
    2. How many custom BW extractors have you implemented?
8) Describe how you have used Excel as a complement to BEx
A) Describe your level of expertise and the rationale for your self-rating (experience with macros, pivot tables and formatting)
    9) Describe experience with ABAP
    10) Describe any hands on experience with ASAP Methodology.
    11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in. Describe that experience.
    12) What is partitioning and what are the benefits of partitioning in an InfoCube?
    A) Partitioning is the method of dividing a table (either column wise or row wise) based on the fields available which would enable a quick reference for the intended values of the fields in the table. By partitioning an infocube, the reporting performance is enhanced because it is easier to search in smaller tables. Also table maintenance becomes easier.
    13) What does Rollup do?
A) Rollup loads newly added requests of an InfoCube into its existing aggregates.
    14) What are the inputs for an infoset?
    A) The inputs for an infoset are ODS objects and InfoObjects (with master data or text).
    15) What internally happens when BW objects like Info Object, Info Cube or ODS are created and activated?
    A) When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.
    16) What is the maximum number of key fields that you can have in an ODS object?
    A) 16.
    17) What is the specific advantage of LO extraction over LIS extraction?
A) The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome. In LO only one delta queue is used for delta management.
    18) What is the importance of 0REQUID?
A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish between the data records of different load requests.
    19) Can you add programs in the scheduler?
A) Yes, through event handling (see the sketch below).
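As a sketch, a custom ABAP program can raise a background event with the standard function module BP_EVENT_RAISE; any InfoPackage or job scheduled "after event" on that event ID then starts. The event name ZBW_START_LOAD is a made-up example and would have to exist in transaction SM62.

REPORT zbw_raise_event.
* Raise a background event so that event-triggered loads start.
CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid = 'ZBW_START_LOAD'   " hypothetical event, defined in SM62
  EXCEPTIONS
    OTHERS  = 1.
IF sy-subrc <> 0.
  WRITE: / 'Could not raise event ZBW_START_LOAD'.
ENDIF.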
    20) What is the importance of the table ROIDOCPRMS?
A) It holds the IDoc parameters per source system. This table contains the details of the data transfer, like the source system of the data, the data packet size, the maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option in SBIW, i.e. the contents of this table can be changed.
    21) What is the importance of 'start routine' in update rules?
    A) A Start routine is a user exit that can be executed before the update rule starts to allow more complex computations for a key figure or a characteristic. The start routine has no return value. Its purpose is to execute preliminary calculations and to store them in a global data structure. You can access this structure or table in the other routines.
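A hedged sketch of that pattern: the start routine buffers lookup data once per data package into a global table that later routines can read. The table ZPRICES, its fields, and the assumption that MATNR is part of the data package are all invented for illustration.

* Global part of the update rules (visible to all routines):
TYPES: BEGIN OF ty_price,
         matnr    TYPE matnr,
         price(8) TYPE p DECIMALS 2,
       END OF ty_price.
DATA: gt_prices TYPE STANDARD TABLE OF ty_price.

* Start routine body: fill the buffer once per data package.
  IF NOT data_package[] IS INITIAL.
    SELECT matnr price FROM zprices        " hypothetical lookup table
      INTO TABLE gt_prices
      FOR ALL ENTRIES IN data_package
      WHERE matnr = data_package-matnr.
    SORT gt_prices BY matnr.               " enables binary search later
  ENDIF.

A key-figure or characteristic routine can then do READ TABLE gt_prices WITH KEY matnr = comm_structure-matnr BINARY SEARCH instead of issuing one SELECT per record.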
    22) When is IDOC data transfer used?
A) IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems based on an EDI interface. IDocs support a limited record size of 1000 bytes, so the IDoc transfer method is not used when loading data into the PSA, since that data is more detailed; it is used only when the record size is less than 1000 bytes.
    23) What is partitioning characteristic in CO-PA used for?
    A) For easier parallel search and load of data.
    24) What is the advantage of BW reporting on CO-PA data compared with directly running the queries on CO-PA?
    A) BW has a better performance advantage over reporting in R/3. For a huge amount of data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.
    25) What is the function of BW statistics cube?
    A) BW statistics cube contains the data related to the reporting performance and the data loads of all the InfoCubes in the BW system.
    26) When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time data is uploaded?
    A) No.
    27) What is the function of 'selective deletion' tab in the manage->contents of an infocube?
    A) It allows us to select a particular value of a particular field and delete its contents.
28) When we collapse an InfoCube, is the consolidated data stored in the same InfoCube or is it stored in a new one?
    A) Data is stored in the same cube.
    29) What is the effect of aggregation on the performance? Are there any negative effects on the performance?
    A) Aggregation improves the performance in reporting.
    30) What happens when you load transaction data without loading master data?
    A) The transaction data gets loaded and the master data fields remain blank.
    31) When given a choice between a single infocube and multiple InfoCubes with a multiprovider, what factors does one need to consider before making a decision?
A) One would have to see whether the InfoCubes are used individually. If these cubes are often used individually, then it is better to go for a MultiProvider with many cubes, since reporting on an individual cube query would be faster than on one big cube with a lot of data.
    32) How many hierarchy levels can be created for a characteristic info object?
    A) Maximum of 98 levels.
    33) What is open hub service?
    A) The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With this, you can ensure controlled distribution using several systems. The central object for the export of data is the Infospoke. Using this, you can define the object from which the data comes and into which target it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
    34) What is the function of 'reconstruction' tab in an infocube?
    A) It reconstructs the deleted requests from the infocube. If a request has been deleted and later someone wants the data records of that request to be added to the infocube, one can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the infocube.
    35) What are secondary indexes with respect to InfoCubes?
    A) Index created in addition to the primary index of the infocube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.
    36) What is DB connect and where is it used?
A) DB Connect is a database connection interface. It is used to connect external database systems to BW so that data can be extracted from their tables and views into BW.
    37) Can we extract hierarchies from R/3 for CO-PA?
A) No, we cannot; there are no hierarchies in CO-PA.
    38) Explain ‘field name for partitioning’ in CO-PA
A) The CO-PA 'field name for partitioning' is used to decrease the package size (e.g. partition by company code).
    39) What is V3 update method ?
A) It is an update method in the R/3 source system in which scheduled batch jobs (the collective run) transfer the posted data to the extract structures of the DataSource collectively.
    40) Differences between serialized and non-serialized V3 updates
    41) What is the common method of finding the tables used in any R/3 extraction
    A) By using the transaction LISTSCHEMA we can navigate the tables.
    42) Differences between table view and infoset query
    A) An InfoSet Query is a query using flat tables.
    43) How to load data from one InfoCube to another InfoCube ?
A) Through the data mart interface, data can be loaded from one InfoCube to another InfoCube.
    44) What is the significance of setup tables in LO extractions ?
A) Setup tables hold the historical data for LO extraction; init and full loads read from them rather than from the application tables, and the selection criteria of the setup run determine their contents.
    45) Difference between extract structure and datasource
A) In the DataSource we define the data coming from the different source systems, whereas the extract structure contains the replicated layout of the DataSource, on which we can define extraction rules and transfer rules.
B) The extract structure is a record layout of InfoObjects.
C) The extract structure is created in the source system and replicated to BW.
    46) What happens internally when Delta is Initialized
    47) What is referential integrity mechanism ?
A) Referential integrity is the property that guarantees that values in one column match values in another (referenced) column. This property is enforced through integrity constraints.
    48) What is activation of extract structure in LO ?
    49) What is the difference between Info IDoc and data IDoc ?
    50) What is D-Management in LO ?
A) It refers to delta management: the mechanism, used by the delta update methods, that is based on the change log / delta queue in LO.
Please, experts, provide the answers for the remaining questions.
Thanks in advance.
Sunil

    Hi,
In my case I don't have any experience in BW; I went straight to the academy. It is like I am starting a new career. Do these questions also apply to me?
