SQLException Data area size calculation failed

I'm using WebLogic to connect to DB2. All my select queries run fine except one, for which I receive this error:
Id=top-level;
Method=EstatisticasObj.getStats.getPeriodoHomologoMes(); Failure=java.sql.SQLException: [NEON][SCODBC_R DLL]Data area size calculation failed [ServiceException]
I have never seen this before and it started happening suddenly.
Any help would be appreciated
thanks in advance,
Manuel Leiria

Id=top-level;
Method=EstatisticasObj.getStats.getPeriodoHomologoMes(); Failure=java.sql.SQLException: [NEON][SCODBC_R DLL]Data area size calculation failed [ServiceException]

You are getting this exception from the Neon Shadow DB2 driver. You could check their documentation for what that error message means.
http://www.neonsys.com/
The driver also has a trace log which you could find of some help.
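If it helps narrow things down, here is a minimal diagnostic sketch (the JNDI data source name and the query are placeholders, not taken from the original post) that runs the failing statement and dumps the SQLState and vendor error code chain, which is what the driver documentation and trace log key off:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Minimal diagnostic sketch: run the failing query and print the SQLState and
// vendor error code chain so it can be matched against the driver documentation.
// "jdbc/myDb2DataSource" and the query text are placeholders.
public class DiagnoseQuery {
    public static void main(String[] args) throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDb2DataSource");
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT 1 FROM SYSIBM.SYSDUMMY1 /* replace with the failing query */")) {
            while (rs.next()) {
                // consume the result set
            }
        } catch (SQLException e) {
            for (SQLException se = e; se != null; se = se.getNextException()) {
                System.err.println("SQLState=" + se.getSQLState()
                        + " vendorCode=" + se.getErrorCode()
                        + " message=" + se.getMessage());
            }
        }
    }
}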

Similar Messages

  • SNP Planned order start and end dates are not calculated correctly

    Hello SNP Guru's
    The SNP planned orders generated after the Heuristics run, have a start and end date based on the Activity Duration (Fixed), while the resource consumption is based on the Bucket Consumption (Variable), which is correct.
    The Activity Duration (Fixed) is based on the BOM base quantity. So if the Activity Duration = 1 day, then even if the order quantity is more than a day's worth, the start and end dates still show a duration of 1 day. No matter what the order quantity is, the span between the start and end dates is always 1 day.
    Does anyone have any experience in implementing any code to change the start and end dates on SNP Planned Order?
    Seems like it should work as standard.
    Am I missing something?
    Thanks,
    Mangesh

    Dear Mangesh,
    SNP is an infinite planning tool. If you have defined the fixed duration to be 1 DAY in the activity, then no matter what quantity you enter for your planned order, the order will last one day. If the resource is overloaded, you then run capacity levelling to balance the capacity. The behavior you expect happens in PP/DS planning.
    Claire

  • BPC10MS SP12 - Aggregates are not calculated

    Hi experts,
    Hope someone can help on this problem.
    I have a YTD model.
    We input data at base level,
    but when we try to retrieve any aggregate of any dimension, the data is not calculated and the data disappears :-(
    Any tips to help with this?
    Tks a lot,
    Olivia.

    Olivia,
    This may seem like a silly question, but do you have the hierarchies defined on the dimensions?  Did things work before and just stop working?  If you have some screenshots of the dimensions or where you see the problem, that would be helpful, too...
    Jeff

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and hit this error.
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added.
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file when the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.

  • UpdateCharacterStream for LONG column gives SQLException: Data size bigger than max

    I have a LONG column and when I call updateCharacterStream via the resultset I get the following error:
    SQLException: Data size bigger than max size for this type: 2391
    I am trying to update the column with a value that is 2391 in length. The column is defined as a LONG so it is plenty big enough.
    What gives?
    If I use a UTF-8 database it works OK. But if I use a db that is on the default character set I get the error. There are NO multibyte characters, I just happened to test it against my unicode db and it worked.

    This only happens with the thin driver. I am using 8i. It also works OK if the target db is in UTF8!
    D:\myjava>java OracleLongTest
    Registering the ORACLE JDBC drivers.
    Connecting to database
    Connection = oracle.jdbc.driver.OracleConnection@1616c7
    Insert via prepared statement with reader length = 2200
    Insert with prepared statement was successfull
    Updating via prepared statement with reader length = 2200
    Update with prepared statement was successfull
    ===========================================================================
    ticketDesc len = 2200 mod date = 970510081398
    Updating with reader length = 2200
    Updating C1 ts = 970510081608
    Updating the row...
    Exception 2 = java.sql.SQLException: Data size bigger than max size for this type: 2200
    java.sql.SQLException: Data size bigger than max size for this type: 2200
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:114)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:156)
    at oracle.jdbc.dbaccess.DBError.check_error(DBError.java:775)
    at oracle.jdbc.ttc7.TTCItem.setArrayData(TTCItem.java:82)
    at oracle.jdbc.driver.OraclePreparedStatement.setItem(OraclePreparedStatement.java:761)
    at oracle.jdbc.driver.OraclePreparedStatement.setDatum(OraclePreparedStatement.java:1464)
    at oracle.jdbc.driver.OraclePreparedStatement.setCHAR(OraclePreparedStatement.java:1397)
    at oracle.jdbc.driver.OraclePreparedStatement.setObject(OraclePreparedStatement.java:1989)
    at oracle.jdbc.driver.OraclePreparedStatement.setObject(OraclePreparedStatement.java:2200)
    at oracle.jdbc.driver.OraclePreparedStatement.setOracleObject(OraclePreparedStatement.java:2216)
    at oracle.jdbc.driver.UpdatableResultSet.prepare_updateRow_binds(UpdatableResultSet.java:2070)
    at oracle.jdbc.driver.UpdatableResultSet.updateRow(UpdatableResultSet.java:1287)
    at OracleLongTest.run(OracleLongTest.java:146)
    at OracleLongTest.main(OracleLongTest.java:41)
    FINISHED!!
    // Test snippet from the original post, with braces and structure restored.
    // "con" is an open java.sql.Connection obtained earlier in OracleLongTest.
    StringBuffer sb = new StringBuffer("");
    for (int i = 0; i < 2200; i++) {
        sb.append("X");
    }
    Statement stmt = con.createStatement();
    try {
        String createSQL = "CREATE TABLE MYLONGTEST (C1 NUMBER(38), C2 LONG)";
        stmt.executeUpdate(createSQL);
    } catch (SQLException se) {
        // Table already exists; ignore.
    }
    PreparedStatement insertPstmt = con.prepareStatement("INSERT INTO MYLONGTEST VALUES (?, ?)");
    PreparedStatement updatePstmt = con.prepareStatement("UPDATE MYLONGTEST SET C2 = ? WHERE C1 = ?");
    // 1004 = ResultSet.TYPE_SCROLL_INSENSITIVE, 1008 = ResultSet.CONCUR_UPDATABLE
    PreparedStatement selectPstmt = con.prepareStatement("SELECT C1,C2 FROM MYLONGTEST WHERE C1 = ?", 1004, 1008);
    long ts = System.currentTimeMillis();
    String value = sb.toString();
    Reader rdr = new StringReader(value);
    int len = value.length();
    try {
        System.out.println("Insert via prepared statement with reader length = " + len);
        insertPstmt.setLong(1, ts);
        insertPstmt.setCharacterStream(2, rdr, len);
        insertPstmt.execute();
        System.out.println("Insert with prepared statement was successfull");
    } catch (SQLException se) {
        System.out.println("Caught exception = " + se.toString());
    }
    try {
        System.out.println("Updating via prepared statement with reader length = " + len);
        rdr = new StringReader(value);
        updatePstmt.setCharacterStream(1, rdr, len);
        updatePstmt.setLong(2, ts);
        int rowcount = updatePstmt.executeUpdate();
        System.out.println("Update with prepared statement was successfull");
        updatePstmt.close();
    } catch (SQLException se) {
        System.out.println("Caught exception = " + se.toString());
    }
    System.out.println("===========================================================================");
    selectPstmt.setLong(1, ts);
    ResultSet rs = selectPstmt.executeQuery();
    rs.absolute(1);
    String c2 = rs.getString("C2");
    ts = rs.getLong("C1");
    System.out.println("c2 len = " + c2.length() + " mod date = " + ts);
    rs.absolute(1);
    ts = System.currentTimeMillis();
    System.out.println("Updating with reader length = " + len);
    rdr = new StringReader(value);
    // This is the call that throws "Data size bigger than max size for this type".
    rs.updateCharacterStream("C2", rdr, len);
    ts = System.currentTimeMillis();
    System.out.println("Updating C1 ts = " + ts);
    rs.updateLong("C1", ts);
    System.out.println("Updating the row...");
    rs.updateRow();

  • Java.sql.SQLException: Io exception: Size Data Unit (SDU) mismatch

    Hi Experts,
    I'm encountering this error in my SAP-JDBC (Oracle) interface. The error message doesn't say much, and the query being sent from SAP works when executed directly against the database. I'm using the thin JDBC driver for this connection.
    <SAP:AdditionalText>com.sap.engine.interfaces.messaging.api.exception.MessagingException: Error processing request in sax parser: Error when executing statement for table/stored proc. 'dbTableName' (structure 'statement_select'): java.sql.SQLException: Io exception: Size Data Unit (SDU) mismatch</SAP:AdditionalText>
    Appreciate your insights regarding this. Thanks a lot!
    Mark

    I'm encountering this error in my SAP-JDBC (Oracle) interface. The error message doesn't say much, and the query being sent from SAP works when executed directly against the database. I'm using the thin JDBC driver for this connection.
    Welcome to the forum!
    Please get into the habit of SHOWING us what you are doing instead of just telling us.
    So don't say 'the query being sent'; SHOW us the query being sent.
    And don't say 'thin JDBC driver'; SHOW us the actual name and version of the JDBC jar file you are using.
    For JDBC issues you should also provide the full 4 digit Oracle version (select * from V$VERSION) and the full Java version that you are using.
    When you can reproduce the problem by using a simple set of Java code it helps if you post the code so we can try to reproduce the problem.
    A search of the web shows reports that upgrading to ojdbc6.jar resolves the problem, but for other users it seemed to be a case of several threads using the same connection.
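    If it helps when gathering the details asked for above, here is a minimal sketch (the connection URL and credentials are placeholders) that prints the JDBC driver version and the full Oracle version from V$VERSION:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch: print the JDBC driver version and the Oracle version the reply
    // asks for. The URL, user and password are placeholders for your environment.
    public class VersionInfo {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                DatabaseMetaData md = con.getMetaData();
                System.out.println("Driver:   " + md.getDriverName() + " " + md.getDriverVersion());
                System.out.println("Database: " + md.getDatabaseProductVersion());
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT banner FROM v$version")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }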

  • Basic / Scheduling dates are calculated for both main orders and Collective

    How are the basic/scheduling dates calculated for both main orders and collective orders
    when we have given different values for in-house production time in the master data for the two orders?

    Hi,
    The basic dates are calculated from the data mentioned in the Material Master - In-house production time days, either lot size dependent or lot size independent.
    The scheduled dates are calculated from the timings maintained in the routing.
    In the configuration for Scheduling parameters for the order type - Adjust scheduling - the dates can be adjusted.
    Kindly revert back if you need any further clarifications.
    Warm regards,
    Umesh Poojari

  • Java.sql.SQLException: Data size bigger than max size for this type: 77835

    I'm having this strange SQLException while trying to insert BLOB image data into an Oracle database. I say strange because it throws the exception for an image that has been inserted successfully before. My program is run from an Oracle Form, and some image data is then inserted into the database in a loop. I can't figure out what in my code is causing this problem. In fact, when I run my program independently, not from Oracle Forms, it runs fine and every image gets inserted. Given below is my code snippet:
    // dbThread, st, con, callable, formatFree, formatted, uiid, imgList and
    // imgNameList are fields defined elsewhere in the class.
    public void insertAccDocs(String[] accessions) throws SQLException {
        for (int q = 0; q < accessions.length; q++) {
            final String docName = accessions[q];
            dbThread = new Thread(new Runnable() {
                public void run() {
                    try {
                        System.out.println("insertDB before connection");
                        getConnected();
                        System.out.println("insertDB after connection");
                        st = con.createStatement();
                        // String docName = acc; commented
                        // String docName = singleAccession;
                        String text = formatFree;
                        String qry = "INSERT INTO DOCUMENT VALUES('" + docName + "','" + text
                                + "','" + formatted + "','" + uiid + "')";
                        System.out.println("parentqry " + qry);
                        int ok = st.executeUpdate(qry);
                        if (ok == 1) {
                            System.out.println("INSERTION SUCCESS = " + ok);
                            System.out.println("Image List Size " + imgList.size());
                            // inserting into child
                            for (int i = 0; i < imgList.size(); i++) {
                                String imgPath = "" + imgList.get(i);
                                System.out.println("db " + imgPath);
                                FileInputStream fin = new FileInputStream(imgPath);
                                BufferedInputStream bufStr = new BufferedInputStream(fin);
                                byte[] imgByte = new byte[bufStr.available()];
                                String img = "" + imgNameList.get(i);
                                System.out.println("imgid=" + i);
                                callable = con.prepareCall("{call prc_insert_docimage(?,?,?)}");
                                callable.setString(1, docName);
                                callable.setString(2, img);
                                callable.setBinaryStream(3, bufStr, (int) imgByte.length);
                                callable.execute();
                                callable.close();
                            }
                            con.commit();
                            con.close();
                        } else {
                            System.out.println("INSERTION NOT SUCCESS");
                        }
                    } catch (Exception err) {
                        System.out.println(err.toString());
                    }
                } // run
            }); // runnable
            dbThread.start();
        }
    }
    Can someone help me solve this problem?
    Regards
    Rashed

    Hi, thanks... would you please clarify your suggestions a bit? Where should I look to see the "max packet size allowed to send to the server" in the JDBC driver? If I want to change this on the server side, or as one of the JDBC connection properties, which topic should I look for in the Oracle docs? Please let me know.
    Regards
    Rashed

  • For some reason (I'm not sure why...) my whole iPhoto library got deleted. I restored the whole library from my backup. The photos are blank but still have the photo info (date, res., size, etc.). Do you know what the problem is? Why can't I see my pics?

    For some reason (I'm not sure why...) my whole iPhoto library got deleted. I restored the whole library from my backup. Now all the photos are blank but still have the photo info (date, res., size, etc.). Do you know what the problem is? Why can't I see my pics?

    It says that the file does not exist. How come? And why does it still show the image info?
    Where can i find those files (i have a daily backup)
    The original image files are missing from your iPhoto library, or iPhoto cannot find them, because the link to the originals has been broken. The image info is stored in the internal libraries, independent of the original files. That is why you are still seeing it.
    I restored the whole library from my backup.
    Try restoring from an older backup, from before you first noticed the problem. 
    How large is the library, that you restored? Is the file size large enough to hold all your photos, or has the size been reduced?  If the library is still large, the photos may still be inside, even if iPhoto cannot find them.
    It says that the file does not exist. How come?
    What happened before your iPhoto library got deleted? Which applications have you been running, or which new software have you installed or upgraded? Have you moved the library to a different disk, tried to use it from a different user account, or accessed it over the network?

  • Java.sql.SQLException: Data size bigger than max size for this type: 5200

    java.sql.SQLException: Data size bigger than max size for this type: 5200
    Can anyone help with how to resolve this error?

    I'm using a CLOB object and I want to insert more data.
    How do I insert more data?
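    Not a confirmed fix for the error above, but for illustration: one common way to get a string larger than the plain literal limit into a CLOB column is to bind it as a character stream with a PreparedStatement. A minimal sketch, with a made-up connection URL, table, and column names:

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Hypothetical sketch: stream a large string into a CLOB column instead of
    // passing it as a plain literal. MY_DOCS(ID, BODY) and the URL are made up.
    public class ClobInsertSketch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                String bigText = new String(new char[50000]).replace('\0', 'X');
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO MY_DOCS (ID, BODY) VALUES (?, ?)")) {
                    ps.setLong(1, System.currentTimeMillis());
                    // Bind as a character stream so the driver does not treat the
                    // value as a size-limited literal.
                    ps.setCharacterStream(2, new StringReader(bigText), bigText.length());
                    ps.executeUpdate();
                }
            }
        }
    }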

  • Windows 2008 x64 TS Printing - The data area passed to a system call is too small

    Hi,
    I have a Windows 2008 TS using Easy Print. The server has two quad-core CPUs with 16 GB of RAM and is running a lightweight accounting program for 80 users. The machine and Easy Print both work fine until the server reaches about 5 GB of memory in use. Once this happens, users' documents stop printing, with event ID 6161 "The data area passed to a system call is too small" being logged. None of the hotfixes seem to be applicable to my scenario, as I am not getting the common "Access Denied" error. I have increased the receive and transmit buffer sizes on both NICs to the 2048 maximum, and set the SizReqBuf setting in the registry, to no avail.
    Any help appreciated.
    Thanks

    Hi,
    Thanks for the post.
    The Event error 6161 is caused by one of the following conditions:
    The printer is not reachable on the network
    Windows cannot allocate sufficient memory
    There was invalid or incomplete data received by the print spooler
    A driver upgrade failed
    There is a bad printer device driver
    For more information, please refer to the following article:
    http://technet.microsoft.com/en-us/library/cc773865(WS.10).aspx
    In this case, I assume that this issue may be caused by insufficient memory.
    In Event Viewer, examine the event and look for the following text: "Win32 error code returned by the print processor: 8. Not enough storage is available to process this command".
    If so, please collect system information for 60 seconds and generate a System Diagnostics Report:
    Open an elevated Command Prompt window. (Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.)
    At the command prompt, type perfmon /report and then press ENTER. Reliability and Performance Monitor will start collecting data to create the System Diagnostics Report.
    When the report is ready for viewing, locate the Diagnostic Results section of the report, and then check for any warnings (indicated by Warnings in the report). You can follow links to additional help on resolving warnings from this section. In addition, you can expand each category in the Basic System Checks section to see more details about why warnings appear. Also, the Performance section provides process-level details about top consumers of resources. You might need to increase the size of the paging file or add physical memory.
    Thanks,
    Miles

  • WLI problem when processing a high number of records - SQLException: Data exception

    Hi
    I'm having some trouble with a process in WLI when processing a high number of records from a table. I'm using WLI 8.1.6 and Oracle 9.2.
    The exception I'm getting is:
    javax.ejb.EJBException: nested exception is: java.sql.SQLException: Data exception -- Data -- Input Data length 1.050.060 is greater from the length 1.048.576 specified in create table.
    I think the problem is not with the table because it's pretty simple. I'll describe the steps in the JPD below.
    1) A DBControl checks to see if the table has records with a specific value in a column.
    select IND_PROCESSADO from VW_EAI_INET_ESTOQUE where IND_PROCESSADO = 'N'
    2) If there are one or more records, we update the column to another value (in another DBControl).
    update VW_EAI_INET_ESTOQUE  set IND_PROCESSADO = 'E' where IND_PROCESSADO = 'N'
    3) We then start a transaction with following steps:
    3.1) A DBControl queries for records in a specific condition
    select
    COD_DEPOSITO AS codDeposito,
    COD_SKU_INTERNO AS codSkuInterno,
    QTD_ESTOQUE AS qtdEstoque,
    IND_ESTOQUE_VIRTUAL AS indEstoqueVirtual,
    IND_PRE_VENDA AS indPreVenda,
    QTD_DIAS_ENTREGA AS qtdDiasEntrega,
    DAT_EXPEDICAO_PRE_VENDA AS dataExpedicaoPreVenda,
    DAT_INICIO AS dataInicio,
    DAT_FIM AS dataFim,
    IND_PROCESSADO AS indProcessado
    from VW_EAI_INET_ESTOQUE
    where IND_PROCESSADO = 'E'
    3.2) We transform all the records found to and XML message (Xquery)
    3.3) We then update the same column as in #2 to another value:
    update VW_EAI_INET_ESTOQUE set IND_PROCESSADO = 'S' where IND_PROCESSADO = 'E'
    4) The process ends.
    When the table has few records under the specified condition, the process works fine. But if we test it with 25,000 records, the process fails with the exception below, sometimes in step 3.1 and other times in step 3.3.
    Can someone help me please?
    Exception:
    <A message was unable to be delivered from a WLW Message Queue.
    Attempting to deliver the onAsyncFailure event>
    <23/07/2007 14h33min22s BRT> <Error> <EJB> <BEA-010026> <Exception occurred during commit of transaction
    Xid=BEA1-00424A48977240214FD8(12106298),Status=Rolled back. [Reason=javax.ejb.EJBException: nested
    exception is: java.sql.SQLException: Data exception -- Data -- Input Data length 1.050.060 is greater from the length  1.048.576 specified in create table.],numRepliesOwedMe=0,numRepliesOwedOthers= 0,seconds since begin=118,seconds left=59,XAServerResourceInfo[JMS_cgJMSStore]=(ServerResourceInfo[JMS_cgJMSStore]=(state=rolledback,assigned=cgServer),xar=JMS_cgJMSStore,re-Registered =
    false),XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=
    (state=rolledback,assigned=cgServer),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@d38a58,re-Registered =false),XAServerResourceInfo[CPCasaeVideoWISDesenv]=
    (ServerResourceInfo[CPCasaeVideoWISDesenv]=(state=rolledback,assigned=cgServer),xar=CPCasaeVideoWISDesenv,re-Registered = false),SCInfo[integrationCV+cgServer]=(state=rolledback),
    properties=({weblogic.jdbc=t3://10.15.81.48:7001, START_AND_END_THREAD_EQUAL=false}),
    local properties=({weblogic.jdbc.jta.CPCasaeVideoWISDesenv=weblogic.jdbc.wrapper.TxInfo@9c7831, modifiedListeners=[weblogic.ejb20.internal.TxManager$TxListener@9c2dc7]}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=
    (CoordinatorURL=cgServer+10.15.81.48:7001+integrationCV+t3+,
    XAResources={JMS_FileStore, weblogic.jdbc.wrapper.JTSXAResourceImpl, JMS_cgJMSStore, CPCasaeVideoWISDesenv},NonXAResources={})],CoordinatorURL=cgServer+10.15.81.48:7001+integrationCV+t3+): javax.ejb.EJBException: nested exception is: java.sql.SQLException: Data exception -- Data -- Input Data length 1.050.060 is greater from the length 1.048.576 specified in create table.
            at com.bea.wlw.runtime.core.bean.BMPContainerBean.ejbStore(BMPContainerBean.java:1844)
            at com.bea.wli.bpm.runtime.ProcessContainerBean.ejbStore(ProcessContainerBean.java:227)
            at com.bea.wli.bpm.runtime.ProcessContainerBean.ejbStore(ProcessContainerBean.java:197)
            at com.bea.wlwgen.PersistentContainer_7e2d44_Impl.ejbStore(PersistentContainer_7e2d44_Impl.java:149)
            at weblogic.ejb20.manager.ExclusiveEntityManager.beforeCompletion(ExclusiveEntityManager.java:593)
            at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManager.java:744)
            at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1069)
            at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:118)
            at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1202)
            at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:2007)
            at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:257)
            at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:228)
            at weblogic.ejb20.internal.MDListener.execute(MDListener.java:430)
            at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:333)
            at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:298)
            at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2698)
            at weblogic.jms.client.JMSSession.execute(JMSSession.java:2610)
            at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
            at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
    Caused by: javax.ejb.EJBException: nested exception is: java.sql.SQLException: Data exception -- Data -- Input Data length 1.050.060 is greater from the length 1.048.576 specified in create table.
            at com.bea.wlw.runtime.core.bean.BMPContainerBean.doUpdate(BMPContainerBean.java:2021)
            at com.bea.wlw.runtime.core.bean.BMPContainerBean.ejbStore(BMPContainerBean.java:1828)
            ... 18 more

    Hi Lucas,
    Following is the information regarding the issue you are getting and might help you to resolve the issue.
    ADAPT00519195- Too many selected values (LOV0001) - Select Query Result operand
    For XIR2 Fixed Details-Rejected as this is by design
    I have found that this is a limitation by design and when the values exceed 18000 we get this error in BO.
    There is no fix for this issue, as it's by design. The product always behaved in this manner.
    Also an ER (ADAPT00754295) for this issue has already been raised.
    Unfortunately, we cannot confirm if and when this Enhancement Request will be taken on by the developers.
    A dedicated team reviews all ERs on a regular basis for technical and commercial feasibility and whether or not the functionality is consistent with our product direction. Unfortunately we cannot presently advise on a timeframe for the inclusion of any ER to our product suite.
    The product group will then review the request and determine whether or not the functionality/feature will be included in a future release.
    Currently I can only suggest that you check the release notes in the ReadMe documents of future service packs, as it will be listed there once the ER has been included.
    The only workaround which I can suggest for now is:
    Workaround 1:
    Test the issue by keeping the value of the MAX_Inlist_values parameter at 256 at the Designer level.
    Workaround 2:
    The best solution is to combine 'n' queries via a UNION. You should first highlight the first 99 or so entries from the LOV list box and then combine this query with a second one that selects the remaining LOV choices.
    Using UNION between queries is the only possible workaround.
    Please do let me know if you have any queries related to the same.
    Regards,
    Sarbhjeet Kaur

  • Disc recording log; Finder: Write (10), block: 4294967146, count: 32 - 3/73/03 Medium Error, Power calibration area error, Burn failed

    Hello All Apple Heads (Brit. short for headmaster or headmistress.)!
    My MATSHITA DVD-R UJ-846
    (Firmware Revision: FQ3T, ATAPI, Burn Support: Yes [Apple Shipping Drive], Cache: 2048 KB, Reads DVD: Yes, CD-Write: -R, -RW, DVD Write: -R, -RW, +R, +R DL, +RW, Write Strategies: CD-TAO, CD-SAO, DVD-DAO) optical SuperDrive is unable to burn any CD or DVD.
    The Disc Recording log listed below reads: "Finder: Write (10), block: 4294967146, count: 32 -> 3/73/03 Medium Error, Power calibration area error, Burn failed".
    DiscRecording.log:
      Description:          Disc recording log
      Size:          2 KB
      Last Modified:          11-09-15 4:50 PM
      Location: /Users/-/Library/Logs/DiscRecording.log
      Recent Contents:          Finder: Burn started, Thu Sep 15 16:40:47 2011
    Finder: Burning to CD-R media with SAO strategy in MATSHITA DVD-R   UJ-846 FQ3T via ATAPI.
    Finder: Requested burn speed was max, actual burn speed is 24x.
    Finder: Burn underrun protection is supported, and enabled.
    Finder: Write (10), block: 4294967146, count: 32 -> 3/73/03 Medium Error, Power calibration area error
    Finder: Burn failed, Thu Sep 15 16:41:57 2011
    Finder: Burn sense: 3/73/03 Medium Error, Power calibration area error
    Finder: Burn error: 0x8002006D The disc can't be burned; it might be incompatible with this disc drive. Please try a different brand of disc, or try burning at a slower speed.
    Finder: Burn started, Thu Sep 15 16:43:37 2011
    Finder: Burning to CD-R media with SAO strategy in MATSHITA DVD-R   UJ-846 FQ3T via ATAPI.
    Finder: Requested burn speed was 8x, actual burn speed is 8x.
    Finder: Burn underrun protection is supported, and enabled.
    Finder: Burn finished, Thu Sep 15 16:49:31 2011
    Finder: Verify started, Thu Sep 15 16:49:31 2011
    Finder: Verify failed, Thu Sep 15 16:50:20 2011
    Finder: Verify sense: 3/02/00 Medium Error, No seek complete
    Finder: Verify error: 0x80020063 Verifying the burned data failed.
    Can this issue be solved by means of firmware/updates, terminal command lines, or through other open source OS?
    Is this issue related to lens cleanliness?
    Much thanks!
    Sam
    Message was edited by: saltislandsam

    I have also tried TDK, Maxell, Verbatim and Sony media
    Me too. Out of that list, the only brands that have NEVER produced a coaster (on the same burner as yours) are Verbatim DVD-R and Fuji DVD-R. All the others produced a 25-50% failure rate.
    Have you tried burning at a slower speed?
    Using 16x DVD media is fine - in fact it is difficult to buy any other - but there is a consensus in the Apple Support Forums that a slower burn is a better burn i.e. 2x or 4x (slow burns are better burns!). I always use Toast for burning.
    The term "Best" means the fastest speed that the drive told Toast it can write to a specific disc. The drive's firmware and info on the disc decide what speed burns are available. When you press the speed setting button in Toast (after inserting a disc) you'll likely see some speeds in italics and some in bold face. The ones in bold face are supported by that media on that drive. The fastest one is what Toast calls Best.
    Audio CDs in particular should be burned at the lowest supported speed.
    Verification is a good indicator the disc is burned okay. However, other DVD players can still have problems with the disc. Media problems with various drives is not uncommon. Slower burning may reduce the chance of those problems, and is one of the reasons why RW (read/write) media is always rated slower than DVD-R.
    There are some interesting facts here:
    http://www.osta.org/technology/dvdqa/dvdqa4.htm
    But many will tell you that the 'slower burn is best' theory is outdated. Who really knows? At the end of the day, if your home-made DVD was verified by Toast and will play anywhere on anybody's DVD player, then that is the result we are all after!

  • Impact of Changing Data Package Size with DTP

    Hi All,
    We have a delta DTP to load data from a DSO to an InfoCube. The default data package size with the DTP is 50,000 records.
    Due to the huge amount of data, the internal table memory space is exhausted and the data load fails.
    We then changed the data package size to 10,000, and the data load executed successfully.
    The DTP with a package size of 50,000 took 40 minutes to execute and failed, but the DTP with a package size of 10,000 took 15 minutes (for the same amount of data).
    Please find below my questions:
    Why does a DTP with a bigger package size run longer than a DTP with a smaller package size?
    Also, by reducing the standard data package size from 50,000 to 10,000, will it impact any other data loading?
    Thanks

    Hi Sri,
    If your DTP is taking more time, then check your transformation.
    1. Transformations with routines always take more time, so if you want to reduce the execution time, the routines should be optimized for good performance.
    2. Also check whether you have filters at DTP level. Due to filters, the DTP takes a long time; if the same data is filtered at routine level, it takes much less time.
    3. If you cannot change the routine, you can set semantic keys on your DTP. The package data will be sorted by the semantic keys, which may help the routine process the data faster.
    4. Your routine is failing due to internal table memory space, so check whether you have a SELECT statement in the routine without a FOR ALL ENTRIES IN RESULT_PACKAGE or SOURCE_PACKAGE clause; using it will reduce the record count.
    5. Wherever possible, delete duplicate records, and if possible filter out useless data in the start routine itself.
    6. Refresh internal tables if the data is no longer needed. If your tables are global, the data will be present at every routine level, so refreshing them will help reduce the size.
    7. The maximum memory that can be occupied by an internal table (including its internal administration) is 2 gigabytes. A more realistic figure is up to 500 megabytes.
    8. Also check the number of jobs running at that time. If lots of jobs are active at the same time, less memory will be available and the DTP may fail.
    Why does a DTP with a bigger package size run longer than a DTP with a smaller package size?
    Start and end routines work at package level, so the routine runs for each package one by one. By default the packages contain data sorted on keys (the non-unique keys (characteristics) of the source or target), and by setting semantic keys you can change this order. So a package with more data will take more time to process than a package with less data.
    By reducing the standard data package size from 50,000 to 10,000, will it impact any other data loading?
    It will only impact the running of that load. But yes, if lots of other loads are running simultaneously, the server can allocate more space to them. So before reducing the package size, check whether it actually helps routine performance (start and end) or just increases overhead.
    Hope these points are helpful.
    Regards,
    Jaya Tiwari

  • AUTOSMARTPORT-F-DEV_CALC_FAILED: XDP device type calculation failed: interface

    Hello,
    The company I work for has some Cisco hardware in the plant and, as our team has no expertise in configuring Cisco switches, I hope you can help me and my team get this problem solved. Here is a brief summary of what is going wrong:
    I own two SG300-28 switches and they are connected using a LAG over two interfaces (GI27 and GI28), but this is not the problem, I think, just the scenario. So I have two sides:
    CSC <--- LAG ---> TOO
    At CSC I have no problem and the switch has been running fine for 38 days (we had a hard reboot back there).
    On the other hand, at TOO, we are facing a soft reboot problem and we are not able to fix it. The only message we get, just before the reboot, is:
    2147476226          2013-Dec-09 06:20:05          Emergency           %AUTOSMARTPORT-F-DEV_CALC_FAILED: XDP device type calculation failed: interface gi21 - capability 2 ***** FATAL ERROR *****  Reporting Task: NCDP. Software  Version: 1.3.5.58 (date  10-Oct-2013 time  17:15:41) 0x16adbc 0x166f28 0x6df2b0 0x48fad8 0x4903e8 0x490608 0x6e8490 0x6e8528 0x6e8d90 0x82eb24 0x8317b8 0x837a88 0x1223f0 ***** END OF FATAL ERROR *****
    After reboot:
    2147483510          2013-Dec-09 06:26:29          Warning           %CDP-W-NATIVE_VLAN_MISMATCH: Native VLAN mismatch detected on interface gi21.
    And the switch reboots. This is causing us a lot of problems. As an emergency measure (and it's not fully working) we disabled Auto Smartport (a feature we will never use in this network layout, so we don't need it).
    Is there anything you can tell me to do? I am attaching the running config for TOO. Before someone asks: we already tried swapping the hardware (we own a spare SG300) and the same problem is there.

    It's on another post, sorry for this duplicate post:
    https://supportforums.cisco.com/message/4112232#4112232
