Load Root cause table

Hi All,
I have a requirement to load root cause data into the SQL table. Now, the BOC documentation says to use the LoadRootCause sample package. I am working in BPC 7.0 and am not able to find any such package in the example files. Please let me know what should be done in this case.
Regards,
Arun.

Hi Patrick,
It's not that easy for me (maybe because I am entirely new to this area, especially to SQL Server). As per the documentation, the data has to be loaded into the RootCauseEvent SQL table. This table should carry one field for each of the dimensions in the application set. When I checked this table, at present there is only one field, corresponding to the dimension ENTITY in the application set. So how do I create the rest of the dimension fields? I hope there might be some coding (again, I am not sure if "coding" is the right term, as I am extremely new to this) inside the LoadRootCause.dts package which enables the rest of the fields in the SQL table as well.
Anyhow, thanks for the reply, Patrick. Let's search more on this.
Regards,
Arun.
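For reference, a minimal sketch of what "one field per dimension" usually looks like on the SQL Server side, assuming the application set also has CATEGORY, TIME and ACCOUNT dimensions (these column names and sizes are hypothetical examples, not taken from your application set; the LoadRootCause package would normally create them for you):

-- Hedged example: add one column per remaining dimension to the RootCauseEvent table
ALTER TABLE RootCauseEvent ADD CATEGORY nvarchar(20) NULL;
ALTER TABLE RootCauseEvent ADD [TIME] nvarchar(20) NULL;
ALTER TABLE RootCauseEvent ADD ACCOUNT nvarchar(20) NULL;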

Similar Messages

  • From which table feedback partner and root cause partner can be found

    Hi,
    Can anybody tell me from which table the feedback partner and root cause partner can be found?
    I need to add this to datasource 0crm_sales_act_1.
    Thanks in advance.
    regards,
    sridhar M

    It is always advisable to use function modules in CRM instead of making direct table access. Unlike R/3, CRM tables and buffering are quite complex.
    In your case, to read the values of the maintained sales areas, use the function module CRMA_BUPA_GET_SALES_AREAS.
    The function module takes the BP GUID (BUT000-PARTNER_GUID).
    However, you can also find your information in these tables:
    CRMM_BUT_LNK0011 - BP Sales rule list
    CRMM_BUT_LNK0010 - BP Sales rule list
    CRMM_BUT_SET0010 - BP Sales rule set
    Hope this helps.
    Easwar Ram
    http://www.parxlns.com

  • Issues loading data in table component after deploying to Tomcat 5.5.28

    Hi
    I exported a WAR file from sjsc2 u1 into Tomcat 5.5.28 and also set up JNDI to point to the datasource properly, but when displaying the table component with the data loaded from Creator, I'm getting the following error:
    type Exception report
    message
    description The server encountered an internal error () that prevented it from fulfilling this request.
    exception
    javax.servlet.ServletException: Servlet execution threw an exception
         com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:194)
    root cause
    java.lang.AbstractMethodError: oracle.jdbc.driver.OracleDatabaseMetaData.locatorsUpdateCopy()Z
         com.sun.sql.rowset.CachedRowSetXImpl.execute(CachedRowSetXImpl.java:972)
         com.sun.sql.rowset.CachedRowSetXImpl.execute(CachedRowSetXImpl.java:1410)
         com.sun.data.provider.impl.CachedRowSetDataProvider.checkExecute(CachedRowSetDataProvider.java:1219)
         com.sun.data.provider.impl.CachedRowSetDataProvider.absolute(CachedRowSetDataProvider.java:283)
         com.sun.data.provider.impl.CachedRowSetDataProvider.getRowKeys(CachedRowSetDataProvider.java:232)
         com.sun.data.provider.impl.CachedRowSetDataProvider.cursorFirst(CachedRowSetDataProvider.java:351)
         com.sun.data.provider.impl.CachedRowSetDataProvider.setCachedRowSet(CachedRowSetDataProvider.java:182)
         com.sun.data.provider.impl.CachedRowSetDataProvider.close(CachedRowSetDataProvider.java:209)
         epnl_idbadge.managers_browse_screen.destroy(managers_browse_screen.java:380)
         com.sun.rave.web.ui.appbase.faces.ViewHandlerImpl.destroy(ViewHandlerImpl.java:580)
         com.sun.rave.web.ui.appbase.faces.ViewHandlerImpl.renderView(ViewHandlerImpl.java:316)
         com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:87)
         com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
         com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:117)
         javax.faces.webapp.FacesServlet.service(FacesServlet.java:198)
         com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:194)
    note The full stack trace of the root cause is available in the Apache Tomcat/5.0.28 logs.
    Please can anyone show me the way out?
    Thanks in advance

    Thanks for your response. I have the following drivers in my Tomcat\common\lib:
    ojdbc14.jar
    ojdbc14_g.jar
    ojdbc14dms.jar
    ojdbc14dms_g.jar
    orai18n.jar
    Please check if there is anything I need to do to make it work right.
    Thanks

  • Issue Root Cause Question - WebIntelligenceProcessingServer

    Hi,
    Yesterday our SAP BI based portal server suffered a dramatic decrease in performance that prevented reports from being viewed (users were able to log in to the portal, but when they clicked on a given report it showed the progress clock with no results). We finally decided to restart all the servers in order to try to work around the situation. Luckily it worked, but I am focused on trying to understand what caused the issue.
    We have been reviewing the performance counters of all involved servers (CPU, memory consumption) during the period of time when the issue appeared and we don't find any quick explanation. Our most interesting clue is the info contained in the BO server trace files (*.glf files), and particularly the WebIntelligenceProcessingServer trace.
    At the time of the issue (and several minutes before), the following list of errors was triggered repeatedly:
    **ERROR:cdzContext:ExtensionManagementException has been raised [cdzContext.cpp;1193]
    **ERROR:cdzContext:All the servers with CMS es-w08-bient1:6400, cluster @es-w08-bient1.f4e.org:6400, kind pjs which host service DSLBridge, are down or disabled [cdzContext.cpp;1194]
    **ERROR:DSLBridgeController:BrigdeController::getDslBridge : the IExtension reference is null [src/DSLBrigdeController.cpp;157]
    **ERROR:DSLBridgeController:Error: Failed to load SL Service extension
    I suspect that we have some configuration issue related to the WebIntelligenceProcessingServer, but by reviewing the data provided by the monitoring application (Memory max threshold count, ...) we don't see what the cause of the problem would be.
    Any idea? Thanks for sharing your thoughts.
    Alfons

    Hello Alfons,
    Finding the root cause of the issue really requires investigation, as there are many other components/areas we need to consider/review before we can comment on your issue.
    However, here are some practices we need to follow for better performance.
    It's recommended to have the Web Intelligence servers recycle. There are two settings that are used in this functionality. The Web Intelligence server is recycled when BOTH conditions are met.
    Maximum Documents Before Recycling: default is 50. Range is 10 - 1000.
    Timeout Before Recycling (seconds): default is 1200. Range is 100 - 10000.
    In addition to this, kindly tune your WIPS as recommended by SAP. For more information, please go through the blog below:
    http://scn.sap.com/people/matthew.shaw/blog/2011/04/06/performance-tips-getting-the-most-out-of-your-web-intelligence-processing-servers
    Hope this will help.
    Regards,
    Mahesh

  • Oracle.jbo.NoDefException: JBO-29114 ADFContext is not setup to process messages for this exception. Use the exception stack trace and error code to investigate the root cause of this exception. Root cause error code is JBO-25058. Error message parameters

    Dear Guru's,
    I have not been able to solve the above issue for the last couple of days.
    I am a newbie to web services.
    My issue...
    I am using JDeveloper 11.1.2.4.0 Release 2
    1. Using JDev I built one small web service with two methods.
            While testing the web service...
                   I passed the user id as a parameter and it successfully returned the values (user id, user name and description) from the fnd_user table.
    2. I created another application to consume the web service I created.
                   1. I added the web service SOAP and added the method.
                   2. Created a JSF page and dragged and dropped the parameter and return values onto the JSF page.
    3. While executing the created JSF page I received the error message below:
    "oracle.jbo.NoDefException: JBO-29114 ADFContext is not setup to process messages for this exception. Use the exception stack trace and error code to investigate the root cause of this exception. Root cause error code is JBO-25058. Error message parameters are {0=Attribute, 1=UserName, 2=UserName}"
    Even though I know this issue has come up repeatedly in our forum, I have not been able to solve it.
    Can anybody help me solve this issue?
    Thanks and Regards,
    Durai S E

  • Using dbms_lob to load image into table

    I am trying to load a set of images from my DB drive into a table. This works fine when I try to load only one record. If I try to load more than one record, the first gets created, but I get this error and it doesn't load the images for the rest of them.
    ORA-22297:     warning: Open LOBs exist at transaction commit time
    Cause:     An attempt was made to commit a transaction with open LOBs at transaction commit time.
    Action:     This is just a warning. The transaction was commited successfully, but any domain or functional indexes on the open LOBs were not updated. You may want to rebuild those indexes.
    Am I missing something in the code that's needed?
    in_file UTL_FILE.FILE_TYPE;
    bf bfile;
    b blob;
    src_offset integer := 1;
    dest_offset integer := 1;
    CURSOR get_pics is select id from emp;
    BEGIN
    FOR x in get_pics LOOP
    BEGIN
    insert into stu_pic(id,student_picture)
    values(x.id,empty_blob()) returning student_picture into b;
    l_picture_uploaded := 'Y';
    bf := bfilename('INTERFACES',x.student_id || '.' || p_image_type);
    dbms_lob.fileopen(bf,dbms_lob.file_readonly);
    dbms_lob.open(b,dbms_lob.lob_readwrite);
    dbms_lob.loadBlobFromFile(b,bf,dbms_lob.lobmaxsize,dest_offset,src_offset);
    dbms_lob.close(b);
    dbms_lob.fileclose(bf);
    EXCEPTION when dup_val_on_index then null;
    END;
    END LOOP;
    END;

    There are two methods you can use.
    1. Create an external table with those images(BLOB column) and then use that external table to insert into another table.
    Demo as follows:
    This is my pdf files
    C:\Saubhik\Assembly\Books\Algorithm>dir *.pdf
    Volume in drive C has no label.
    Volume Serial Number is 6806-ABBD
    Directory of C:\Saubhik\Assembly\Books\Algorithm
    08/16/2009  02:11 PM         1,208,247 algorithms.pdf
    08/17/2009  01:05 PM        13,119,033 fci4all.com.Introduction_to_the
    d_Analysis_of_Algorithms.pdf
    09/04/2009  06:58 PM        30,375,002 sedgewick-algorithms.pdf
                   3 File(s)     44,702,282 bytes
                   0 Dir(s)   7,474,593,792 bytes free
    C:\Saubhik\Assembly\Books\Algorithm>
    This is my file with which I'll load the pdf files as BLOBs:
    C:\Saubhik\Assembly\Books\Algorithm>type mypdfs.txt
    Algorithms.pdf,algorithms.pdf
    Sedgewick-Algorithms.pdf,sedgewick-algorithms.pdf
    C:\Saubhik\Assembly\Books\Algorithm>
    Now the actual code:
    SQL> /* This is my directory object */
    SQL> CREATE or REPLACE DIRECTORY saubhik AS 'C:\Saubhik\Assembly\Books\Algorithm';
    Directory created.
    SQL> /* Now my external table */
    SQL> /* This table contains two columns. 1.pdfname contains the name of the file
    DOC>   and 2.pdfFile is a BLOB column contains the actual pdf*/ 
    SQL> CREATE TABLE mypdf_external (pdfname VARCHAR2(50),pdfFile BLOB)
      2         ORGANIZATION EXTERNAL (
      3           TYPE ORACLE_LOADER
      4            DEFAULT DIRECTORY saubhik
      5            ACCESS PARAMETERS (
      6              RECORDS DELIMITED BY NEWLINE
      7              BADFILE saubhik:'lob_tab_%a_%p.bad'
      8              LOGFILE saubhik:'lob_tab_%a_%p.log'
      9              FIELDS TERMINATED BY ','
    10              MISSING FIELD VALUES ARE NULL
    11               (pdfname char(100),blob_file_name CHAR(100))
    12              COLUMN TRANSFORMS (pdfFile FROM lobfile(blob_file_name) FROM (saubhik) BLOB)
    13            )
    14            LOCATION('mypdfs.txt')
    15         )
    16         REJECT LIMIT UNLIMITED;
    Table created.
    SQL> SELECT pdfname,DBMS_LOB.getlength(pdfFile) pdfFileLength
      2  FROM   mypdf_external;
    PDFNAME                                            PDFFILELENGTH
    Algorithms.pdf                                           1208247
    Sedgewick-Algorithms.pdf                                30375002
    Now you can use this table for any operation very easily, even for loading into another table!
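    For instance (a minimal sketch; the target table my_pdf_docs is a hypothetical name, not part of the demo above), the external table can feed a permanent table with a plain INSERT ... SELECT:

    -- Hypothetical permanent table receiving the BLOBs read through the external table
    INSERT INTO my_pdf_docs (pdfname, pdffile)
    SELECT pdfname, pdfFile FROM mypdf_external;
    COMMIT;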
    2. Use of DBMS_LOB like this
    /* Loading a image Winter.jpg in the BLOB column as BLOB!*/
    DECLARE
      v_src_blob_locator BFILE := BFILENAME('SAUBHIK', 'Winter.jpg');
      v_amount_to_load   INTEGER := 4000;
      dest_lob_loc BLOB;
    BEGIN
      --Insert a empty row with id 1
      INSERT INTO test_my_blob_clob VALUES(1,EMPTY_BLOB(),EMPTY_CLOB())
       RETURNING BLOB_COL INTO dest_lob_loc;
      DBMS_LOB.open(v_src_blob_locator, DBMS_LOB.lob_readonly);
      v_amount_to_load := DBMS_LOB.getlength(v_src_blob_locator);
      DBMS_LOB.loadfromfile(dest_lob_loc, v_src_blob_locator, v_amount_to_load);
      DBMS_LOB.close(v_src_blob_locator);
      COMMIT;
    --id=1 is created with Winter.jpg populated in BLOB_COL and CLOB_COL is empty.  
    END;
    Now use this code to create a procedure with parameters and call it in a loop.
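    A rough sketch of that suggestion (hedged: the procedure name and the file-naming pattern are assumptions; it reuses the SAUBHIK directory object and the test_my_blob_clob table from the demo above):

    -- Wrap the DBMS_LOB load in a procedure that loads one file per call
    CREATE OR REPLACE PROCEDURE load_blob_from_file (
      p_id        IN NUMBER,
      p_file_name IN VARCHAR2
    ) AS
      v_src  BFILE := BFILENAME('SAUBHIK', p_file_name);
      v_dest BLOB;
    BEGIN
      -- Create the row with an empty BLOB and grab its locator
      INSERT INTO test_my_blob_clob VALUES (p_id, EMPTY_BLOB(), EMPTY_CLOB())
        RETURNING blob_col INTO v_dest;
      DBMS_LOB.open(v_src, DBMS_LOB.lob_readonly);
      DBMS_LOB.loadfromfile(v_dest, v_src, DBMS_LOB.getlength(v_src));
      DBMS_LOB.close(v_src);
    END load_blob_from_file;
    /

    -- Call it in a loop, committing once at the end
    BEGIN
      FOR x IN (SELECT id FROM emp) LOOP
        load_blob_from_file(x.id, x.id || '.jpg');  -- file name per id is an assumption
      END LOOP;
      COMMIT;
    END;
    /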

  • Root cause of ServletException.javax.servlet.jsp.JspException:

    Hi guys, hope you all can help me on this.
    I have added this repeater netui tag into my JSP page and always get this ServletException error; I'm not very sure why. When I remove this repeater tag, everything seems all right.
    Can anyone give me any hints or clues on this? The checkbox repeater tag is wrong;
    I need to ask you guys about that in the next posting! Thanks!
    The Repeater tag in my jsp
    <netui-data:callPageFlow method="getFunctionHashtable" resultId="funcNamehashtable"/>
    <netui-data:repeater dataSource="{actionForm.functionHashtable}">
    <netui-data:repeaterHeader>
    <table border="1">
    <tr>
    <td><b>Functions</b></td>
    <td><b>Assign/Unassign</b></td>
    </tr>
    </netui-data:repeaterHeader>
    <netui-data:repeaterItem>
    <tr>
    <td>
    <netui:label value="{container.item}" />
    </td>
    <td>
    <netui:checkBox dataSource="{pageFlow.initchecked}" onClick="checkFunctions();"/>
    </td>
    </tr>
    </netui-data:repeaterItem>
    <netui-data:repeaterFooter>
    </table>
    </netui-data:repeaterFooter>
    </netui-data:repeater>
    And here's the exception error I got.
    id=16240211,name=RapidWeb,context-path=/RapidWeb)] Root cause of ServletExceptio
    n.
    javax.servlet.jsp.JspException: Input/output error: java.net.SocketException:
    Co
    nnection reset by peer: socket write error
    at org.apache.struts.util.ResponseUtils.write(ResponseUtils.java:160)
    at com.bea.wlw.netui.tags.html.Html.doEndTag(Html.java:282)
    at jsp_servlet._assignunassignfunctions.__getrole._jspService(getRole.js
    p:78)
    at weblogic.servlet.jsp.JspBase.service(JspBase.java:33)
    at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run
    (ServletStubImpl.java:971)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubIm
    pl.java:402)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:28)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.ja
    va:27)
    at com.bea.wlw.netui.pageflow.PageFlowJspFilter.doFilter(PageFlowJspFilt

    Hi
    The message java.net.SocketException: Connection reset by peer is caused by a network problem or even a forced closure of the IE browser.
    This should not affect functionality and is harmless.
    If the page process takes too long, or the user closed the browser or reloaded the page, this error can happen. Reloading a page or closing the browser is very common, so we should not log this message.
    BEA has a known issue, CR219602, where we catch the SocketException and skip the logging in the doEndTag() method.
    If this error message is annoying, please contact BEA Support to get a patch.
    Thanks
    Vimala

  • Building and deploying J2EE apps ?  Now there is a solution for production root cause analysis.

    Is your organization building and deploying J2EE apps? If so, Halo
    can help solve one of the toughest issues facing enterprises today:
    Finding the root cause of software faults.
    "Halo monitors, pinpoints, reports on and provides a source-code level
    root cause of software faults in deployed J2EE apps. Halo is unique
    because it's the only technology that can give you a root cause
    diagnosis in a fully deployed, live production application. Halo has
    such low performance overhead that customers deploy their final,
    production versions of their applications with Halo enabled.
    Used with Web Application Servers like WebLogic, Halo helps ensure
    that deployed code is reliable and able to be quickly fixed if
    problems turn up. Most important, because Halo is an "always on"
    technology, you get all the information you need to rapidly solve a
    problem on the first fault. Problem replication and bug reports are
    obsolete with Halo."
    "Halo has a unique ability to provide a root cause diagnosis and understanding
    of software problems in production systems, without needing to replicate the issue.
    Tests on WebLogic proved that Halo runs with extremely low overhead and
    is suitable for use in deployed production systems."
    Andrew Sliwkowski, Software Engineer
    BEA Systems, Inc.
    The key is Halo's high performance, low overhead TraceBack
    instrumentation technology. Based on technology out of MIT and proven
    in the field, TraceBack enables you to instrument JARs, EARs and WARs
    within minutes, without touching source code.
    Halo is useful throughout the entire application life cycle, from
    development through test, beta and deployment.
    If you have interest in learning more visit our website at
    www.incer.com or email me directly at [email protected] (Rick Martin)

    I have two questions. We have just started developing apps using JDev 9i and 9iAS v2 and are new to the J2EE environment, so my questions may be very easy ones.
    Question 1: We have set up Oracle pooling connections to our databases. We have a development, test and production database. When I deploy my application, it includes the connections. This is preventing me from moving the EAR files from dev to test to prod without modifying and redeploying my EAR file. Is there a way or a place where I can put my database connections so that they are not included in my EAR files and the application still finds them?
    datasources.xml is where the info regarding connections to databases is located. If you're using 9iAS you can use EM to create a datasource entry at the global level. In OC4J standalone you could use admin.jar or edit the file. Check out the standalone user's guide at http://otn.oracle.com/tech/java/oc4j/pdf/oc4j_so_usersguide_r2.pdf. You will also find the other OC4J docs on OTN.
    Question 2: I have a standalone OC4J set up for our developers to use while testing their applications. The applications include libraries supplied in JDev, such as the XML parser v2. I do not want to deploy those lib files with the app because I would have to redeploy all my apps if I upgrade JDev. I just want to be able to upgrade the libraries, test the apps and not have to redeploy everything. I can do this by copying the JDev lib to 9iAS, but I can't seem to find the right place to put the lib for the standalone OC4J instance.
    You can use the library tag within application.xml for server-wide availability. Check out the article http://otn.oracle.com/oramag/oracle/02-sep/o52oc4j_2.html, specifically the class loading in OC4J section.
    Any help would be greatly appreciated. Thanks in advance.

  • Repository access failed Root Cause: Password could not be retrieved

    We have both the infrastructure and the application server installed on the same Unix box.
    When I click on the "OC4J_BI_Forms" link on the EM page it throws the error below, and the status shows an "unknown" icon.
    Error
    =====
    An error was encountered while loading page. Failed to initialize configuration management user session.. Repository access failed Root Cause: Password could not be retrieved. Password could not be retrieved
    Any help on this is highly appreciated.
    Regards

    We have both Apps & Infrastructure installed on the same HP box. All components for both are running when I check them from the Unix prompt. The Infrastructure page shows all processes up.
    On the Middle Tier, OC4J_BI_Forms, HTTP_Server and the other OC4J components show status unknown, while the other components (Forms, BC4J, SSO) show up.
    When I click on OC4J_BI_Forms, HTTP_Server or any OC4J component on the mid tier I get the error below.
    Error
    An error was encountered while loading page. Failed to initialize configuration management user session.. Repository access failed Root Cause: Password could not be retrieved. Password could not be retrieved
    Any hint/help to resolve this issue is highly appreciated.
    Regards
    NTRao

  • Root cause for 100 % disk utilization

    Hi All,
    Whenever disk utilization on one of the dialog instances shows 100%, the SAP system gets very slow. What is the root cause of the 100% utilization? At that moment, when I checked the CPU usage, it showed 98% CPU idle. The server is on Windows NT and around 13 GB of RAM is always free.
    We have only one drive in the apps server. Can this be the cause of the 100% disk utilization?
    Waiting for your responses.
    Regards,
    Prashant

    Hi,
    I have answered similar questions:
    [Re: Disk Utilization 100%]
    As described above, one thing that causes 100% disk utilization is that writer processes (and read processes too) are concentrated on a single disk (equivalent to a single volume group or single logical volume).
    We can imagine a large number of users hitting the single disk (where the datafiles exist) at the same time with a huge transaction load.
    If we can manage writable datafiles in several locations (equivalent to several volume groups or several logical volumes), writer processes (and read processes too) will be distributed across those locations, so that each location (each disk, each VG or each LV) may not reach 100% utilization.
    So it is important to design writable datafiles across multiple locations on different disks to prevent 100% disk utilization.
    One thing to remember is to consult with your hardware and storage vendor about any possible bottleneck or I/O issue on your hardware or storage.
    Hope it helps you.
    rgds,
    Alfonsus Guritno

  • How can I make an easy *.CSV file to load into database table

    Hi All,
    I have a huge Excel sheet with columns item#, description and qty. The description column may sometimes be a one-word name, a two-word name separated by a space, or a comma-separated name. I want to write PL/SQL code which will read this file and load it into a database table. Now, the *.CSV file is either comma-delimited or tab-delimited text, and neither solves my issue. Is there any better solution which can avoid manual editing of the *.CSV file so that I can easily load it into the table?
    Your help is appreciated,
    Thanks
    Zahir

    SQL*Loader is probably the fastest method, but since you specifically asked for a PL/SQL method:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:464420312302
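    If the description values that contain commas are enclosed in double quotes when the sheet is saved as CSV, an external table can also read the file without manual editing. A minimal sketch, assuming a directory object DATA_DIR, a file items.csv, and a target table items (all hypothetical names):

    -- Hypothetical directory object pointing at the folder holding items.csv
    CREATE OR REPLACE DIRECTORY data_dir AS '/data/loads';

    -- External table over the CSV; quoted fields may contain embedded commas
    CREATE TABLE items_ext (
      item_no     VARCHAR2(30),
      description VARCHAR2(200),
      qty         NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('items.csv')
    )
    REJECT LIMIT UNLIMITED;

    -- Load into the real table
    INSERT INTO items (item_no, description, qty)
    SELECT item_no, description, qty FROM items_ext;
    COMMIT;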

  • Loading MS Access Table and Data into Oracle

    Hi,
    I have a few tables in MS Access. I want to create the same table layout in Oracle and populate the data from the MS Access tables into the Oracle tables.
    Please let me know if there is a way by which I can create the tables and load the data automatically (through some option or script).
    I have Oracle 10g database and its clients.
    Thanks in advance,
    Rajeev.

    You can use Oracle Migration Workbench:
    Loading MS Access Table and Data into Oracle
    It's very easy to use and works well for imports.
    regards,
    Felipe

  • Problem during  Data Warehouse Loading (from staging table to Cube)

    Hi All,
    I have created a staging module in OWB to load my flat files into my staging tables. I have created a warehouse module to load my staging tables into the dimensions and cube that I have created.
    My scenario:
    I have a temp_table_transaction into which my flat files were loaded. This table was loaded with 168,271,269 records from the flat files.
    I have created a mapping in OWB which loads my temp_table_transaction, joins it with other tables, and applies some expression and convert functions, filling these records into a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this configuration of my mapping:
    Default operating mode in the runtime parameters of the mapping config = Set based
    My dimensions filled correctly, but I have two problems when I want to transfer my staging table to my cube:
    Problem #1:
    I have created a cube called transaction_cube with OWB, and it generated and deployed correctly.
    I have created a map to fill my cube with the 168,271,268 records in the staging table called stg_tbl_transaction and deployed it to the server (my cube map operating mode is set based),
    but after running this map it had not completed after 9 hours and I was forced to cancel the running map by killing its sessions. I want to know whether this time for loading this volume of data is acceptable, or whether for this volume of data we should expect to spend more time. Please let me know if anybody has any idea.
    Problem #2:
    To test my map I created a map configured with set-based operating mode, selected my stg_tbl_transaction (with 168,271,268 records in it) as the source, and created another table to transfer and load my data into. I wanted to test the time we should spend on this simple map, but after 5 hours my data had not loaded into the new table. I want to know where my problem is. Should I have set something in the config of the map, or something else? Please guide me about these problems.
    CONFIGURATION OF MY SERVER:
    I run OWB on a two-socket Xeon 5500 series server with 192 GB RAM and disks in a RAID 10 array.
    Regards,
    Sahar

    For all of you:
    It is possible to load from an InfoSet to a cube; we did it, and it was OK.
    Data is really loaded from the InfoSet (cube + master data) to the cube.
    When you create a transformation under a cube, the InfoSet is proposed, and it works fine.
    Now the process is no longer operational and I don't understand why.
    Loading from an InfoSet to a cube is possible; I can send you a screenshot if you want.
    Christophe

  • FDMEE Import error "No periods were identified for loading data into table 'AIF_EBS_GL_BALANCES_STG'

    Hi,
    We are having trouble while importing one ledger 'GERMANY EUR GGAAP'. It works for Dec 2014 but while trying to import data for 2015 it gives an error.
    Import error shows " RuntimeError: No periods were identified for loading data into table 'AIF_EBS_GL_BALANCES_STG'."
    I tried all the Knowledge docs from Oracle Support but no luck. Please help us resolve this issue, as it's occurring in our Production system.
    I also checked all the period settings under Data Management > Setup > Integration Setup > Global Mapping and Source Mapping, and they all look correct.
    Also, it's only happening to one ledger; all the other ledgers are working fine without any issues.
    Thanks

    Hi,
    there are some Support documents related to this issue.
    I would suggest you have a look at them.
    Regards

  • Root cause for delivery to show up on LX47 transaction

    All,
    I am new to SAP and supporting an off site warehouse location.  I'm trying to research and locate the root cause for why a delivery shows up on LX47.
    The situation the warehouse is experiencing is that they are having to run the LX47 transaction multiple times a day to correct delivery issues.
    I know from my initial research that deliveries show up on LX47 due to the OB delivery being blocked, and that this is caused by the OB delivery and the related transfer orders not matching in the system.
    What I'm trying to figure out is, from an Operations standpoint, what is causing the OB Delivery and the TO's not to match?
    Is there only 1 reason?  Are there 100?  This is what I'm trying to understand.
    The only thing I have found thus far is that 2 people/transactions are in the delivery at the time of the update.  Is this accurate?

    Hi Nicholas,
    I am sure you are very much aware of the process flow, but I just want to narrate it again. If the plant and storage location are WM-managed, you generally create a transfer order with reference to the delivery document to perform the picking process. At the time of confirmation of the transfer order you can update the delivery immediately (standard SAP), or you can set the indicator to delay the delivery update (check the IMG path mentioned below). If you choose to delay the delivery update, those deliveries will appear in LX47 and you have the provision to manually update the delivery from there. Apart from the above, there are some instances where, after confirmation of the transfer order and before the delivery update, the material master, batch master, or some other relevant object is in edit mode or locked, which causes the immediate delivery update to fail; those deliveries will also appear in LX47.
    Hope this helps to give some idea about LX47 and the reasons deliveries appear in it.
    SPRO PATH
    SPRO -> Logistics Execution -> Warehouse Management -> Shipping -> Define Shipping Control -> Shipping Control per Warehouse
