Error Storing records in database

Hi all,
Can anybody help me? I am getting the following error message when I try to store data into a MySQL database using a JSP form. I am using the Tomcat 4.0 server.
The error message I get is as follows:
java.lang.NoClassDefFoundError : org/apache/jasper/runtime/JspException
at org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:463)
at org.apache.jsp.smthank$jsp._jspService(smthank$jsp.java:451)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:107)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at org.apache.jasper.servlet.JspServlet$JspServletWrapper.service(JspServlet.java:201)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:381)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:473)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:247)
at org.apache.catalina.core.ApplicationFilterChain.access$000(ApplicationFilterChain.java:98)
at org.apache.catalina.core.ApplicationFilterChain$1.run(ApplicationFilterChain.java:176)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:172)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:243)
at org.apache.catalina.core.StandardPipeline.invokeNext(StandardPipeline.java:566)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:190)
at org.apache.catalina.core.StandardPipeline.invokeNext(StandardPipeline.java:566)
at org.apache.catalina.valves.CertificatesValve.invoke(CertificatesValve.java:246)
at org.apache.catalina.core.StandardPipeline.invokeNext(StandardPipeline.java:564)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardContext.invoke(StandardContext.java:2347)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:180)
at org.apache.catalina.core.StandardPipeline.invokeNext(StandardPipeline.java:566)
at org.apache.catalina.valves.ErrorDispatcherValve.invoke(ErrorDispatcherValve.java:170)
at org.apache.catalina.core.StandardPipeline.invokeNext(StandardPipeline.java:564)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:170)
at org.apache.catalina.core.StandardPipeline.invokeNext(StandardPipeline.java:564)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:174)
at org.apache.catalina.core.StandardPipeline.invokeNext(StandardPipeline.java:566)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.ajp.tomcat4.Ajp13Processor.process(Ajp13Processor.java:458)
at org.apache.ajp.tomcat4.Ajp13Processor.run(Ajp13Processor.java:551)
at java.lang.Thread.run(Thread.java:534)
I would appreciate it if someone could let me know the cause of this
error message. What could be the possible reason for this error?
Thanks in advance for any help.
savdeep

Dear Martin,
Thanks for the reply. The same JSP runs fine on my local Tomcat
server; I have tested it.
Does this mean that there is some problem in the classpath of the server to which I am uploading my JSP?
If so, I will have to contact the administrator of the external server to get the problem sorted out, so that they can check the classpath on their side. Please confirm.
Waiting for your response.
thanks,
savdeep.

Similar Messages

  • Error Stored Procedure Upgrading database from SBO 2007A PL42 to SBO 8.8

    Hi,
    When executing the SBO 8.8 PL00 upgrade procedure on an "old" 2007A PL42 database, the upgrade stops with an error about a missing stored procedure: "_TmSp_CorrectWrongDocLineNumberInOINMForIPF which is not found".
    The error details say that it occurs at the "Stock Updates" step (just after Checks for Payment), and the log file gives the message below.
    Does anybody have an idea how to upgrade the 2007A database?
    Thanks
    Error message in log :
    Upgrade          Error              Technical     Failed to call Stored Procedure TmSpCorrectWrongDocLineNumberInOINMForIPF
    Edited by: Michel Diepart on Oct 24, 2009 10:55 PM

    Hi Paul,
    Thanks for your prompt response on a Saturday, and so late in the evening.
    Concerning the stored procedure, I would like to delete it, but ... it doesn't exist, and that's the problem...
    There is no such procedure in my database (Programmability / Stored Procedures folder in SQL Server).
    Searching the forum for this stored procedure turns up no results.
    Is this stored procedure related to something regarding stock adjustments?
    "Stock updates": which table is that in SAP B1? If necessary I could save the table, create a SQL query to re-create it after deleting it, and try to restart the upgrade procedure, but I don't know which table is involved; a rough sketch of what I mean is at the end of this post.
    I will also try with another 2007A PL42 database to see if it works.
    Do you know if anybody has already tried to upgrade a 2007A PL42 database to SBO 8.8?
    Best regards
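    Purely as an illustration of the "save the table first" idea (the procedure name mentions OINM, but I have not confirmed that this is the table the upgrade step touches, and the backup table name is made up):
    -- hypothetical safety copy before retrying the upgrade (SQL Server syntax)
    SELECT * INTO OINM_BACKUP FROM OINM;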

  • Error retrieving record through database adapter.  (ORABPEL-11614)

    I am trying to pass a customer name into a database adapter to retrieve customer info. It works, but it also faults.
    When I invoke the process with the name "3M COMPANY", I get the following audit message:
    [2008/06/05 17:36:01] Faulted while invoking operation "GetCustomerDataSelect_paramName" on provider "GetCustomerData".
    <messages>
    <input>
    <customerName>
    <part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="GetCustomerDataSelect_paramName_inparameters">
    <GetCustomerDataSelect_paramNameInputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/GetCustomerData">
    <paramName>3M COMPANY
    </paramName>...
    WSIF JCA Execute of operation 'GetCustomerDataSelect_paramName' failed due to: DBReadInteractionSpec Execute Failed Exception.
    Query name: [GetCustomerDataSelect], Descriptor name: [GetCustomerData.Cstcgvmp].
    ; nested exception is:
         ORABPEL-11614
    DBReadInteractionSpec Execute Failed Exception.
    Query name: [GetCustomerDataSelect], Descriptor name: [GetCustomerData.Cstcgvmp].
    See root exception for the specific exception. Caused by Exception [TOPLINK-6044] (Oracle TopLink - 10g Release 3 (10.1.3.1.0) (Build 061004)): oracle.toplink.exceptions.QueryException
    Exception Description: The primary key read from the row [DatabaseRecord(
         => 101
         => A
         => 3M COMPANY
         => 3M CENTER
         => SAINT PAUL
         => MN
         => 551441000
         => 22)] during the execution of the query was detected to be null. Primary keys must not contain null.
    Query: ReadAllQuery(bpel___localhost_default_Onboarding_1_0__MD5_8f8d9220a0bc6957be6fb657cf4ffabd_.GetCustomerData.Cstcgvmp).
    </summary ...
    The retrieved data is correct. The Database Adapter is pointing to an AS/400 table which does not have primary keys defined. In the offline database, I've defined CGVMSTCST to be the primary key.
    The toplink_mappings.xml reflects that:
    <opm:alias>Cstcgvmp</opm:alias>
    <opm:primary-key>
    <opm:field table="CSTCGVMP" name="CGVMSTCST" xsi:type="opm:column"/>
    </opm:primary-key>
    The value of this field for 3M is 101 (which was correctly picked up).
    Why am I getting a primary key is null error?

    I did, but I'm not sure how.
    I deleted the Database Adapter and the associated offline file and reran the wizard (having a better idea of how to implement a parameter search) and it worked.
    I was not able to identify what was created differently the second time.

  • Insertion of an error report was throttled to prevent flooding of the Call Detail Recording (CDR) database

    Hello,
    One of my Lync 2013 standard front-end servers is frequently logging this error:
    Log Name:      Lync Server
    Source:        LS Data Collection
    Date:          28/12/2012 10:04:05
    Event ID:      56208
    Task Category: (2271)
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      LYNC-FE2.mydomain.local
    Description:
    Insertion of an error report was throttled to prevent flooding of the Call Detail Recording (CDR) database.
    Component: CDR Adaptor
    Cause: This is an expected condition if too many error reports of the same type were reported at the same time.
    Resolution:
    No action is needed. A large enough number of error reports of this type have already been inserted into the database and can be used for troubleshooting and reporting. Additional errors are not inserted to avoid flooding of the database with redundant information.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="LS Data Collection" />
        <EventID Qualifiers="35039">56208</EventID>
        <Level>3</Level>
        <Task>2271</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2012-12-28T10:04:05.000000000Z" />
        <EventRecordID>10745</EventRecordID>
        <Channel>Lync Server</Channel>
        <Computer>LYNC-FE2.mydomain.local</Computer>
        <Security />
      </System>
      <EventData>
        <Data>CDR Adaptor</Data>
      </EventData>
    </Event>
    I don't appear to have any problems or issues with Lync at all. Full Enterprise Voice is in use at this site.
    Anyone know why this may be occurring?
    Thanks,
    Ben

    Did you check for any additional errors? I'm seeing the same issue and have linked it to the following error message, which seems to point towards a broken data model or buggy type casting/checking in the interface:
    Failed to execute a stored procedure on the back-end.
    Component: CDR Adaptor
    Stored Procedure: RtcReportError
    Error: System.Data.SqlClient.SqlException (0x80131904): Error converting data type varchar to float.
       at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
       at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
       at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
       at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
       at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite)
       at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
       at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
       at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
       at Microsoft.Rtc.Common.Data.DBCore.Execute(SprocContext sprocContext, SqlConnection sqlConnection, SqlTransaction sqlTransaction)
    ClientConnectionId:xxxxxxxxxxxxx
    Cause: Configuration issues, an unreachable back-end or an unexpected condition has resulted in the error.
    Resolution:
    Verify the back-end is up and this Lync Server has connectivity to it. If the problem persists, notify your organization's support team with the relevant details.

  • BPM: write to DB, BPM error, rollback of stored data in database possible??

    Hello everyone,
    I have the following problem:
    I send information from outside through XI, via a BPM process and an RFC, to an SAP R/3 system. If the BPM process runs into an error after the information has been stored in the database, all changes in the database have to be undone. But how can I do this? How can I handle a database rollback from within BPM?
    thanks a lot
    alex

    Hi Sravya,
    It is not a specific error. I just want to know how to handle an error or exception once data has been stored in a database (rollback and so on).
    My next question: when a BPM process is stopped by exception handling or by an error and I have the possibility to restart it (I do have that, don't I?), at which point of the BPM does the restart begin? At the point of the error, or at the start of the BPM process?
    thanks a lot
    alex

  • AN ERROR OCCURRED WHILE OBTAINING DATABASE SCHEMAS sqlserver stored proc

    I have used the documented Oracle procedures for building the necessary WSDL and XSD files for SQL Server 2000 and 2005 stored procedures. The WSDL and XSD files are generated and incorporated per the tech doc. The process was working for a while, and now, when we try to access the DbAdapter partner link created using the WSDL, we get the following error:
    "AN ERROR OCCURRED WHILE OBTAINING DATABASE SCHEMAS. VERIFY THAT THE DATABASE CONNECTION IS VALID AND IS SUPPORTED."
    We have tried getting this corrected by using each of the specified jar libraries as part of the project:
    sqljdbc.jar
    msutil.jar
    msbase.jar
    mssqlserver.jar
    Still, no solution.
    Anybody have this happen? Fixes? Help......

    If you used a command-line utility to generate your WSDL and XSD, that would indicate that your JDeveloper version does not support SQL Server, so you can't use the Adapter Configuration Wizard to create partner links that execute stored procedures on SQL Server. Creating the initial partner link from the generated WSDL and XSD files will work fine, but if you try to edit that partner link in JDeveloper, it won't work if your JDeveloper doesn't support SQL Server. You'll have to regenerate the WSDL and XSD as needed and recreate the partner link.

  • How to get values/data stored in the database into a list-item.

    How do I get values/data stored in the database into a list item?
    I tried to make a list item without any values assigned to it, but I got the error below.
    FRM-30191: No list items defined for required poplist.
    or
    FRM-32082: Invalid value for given item type.
    List EMPNO
    Item: EMPNO
    Block: EMP
    Form: MODULE5
    FRM-30085: Unable to adjust form for output.
    Then, according to some docs, I tried the following for the trigger
    when-new-form-instance
    declare
         rg_name varchar2(40) := 'emp_rec';
         status number;
         groupid recordgroup;
         it_id item;
    begin
         it_id := Find_Item('empno');
         groupid := create_group_from_query(rg_name, 'select empno from emp');
         status := populate_group(groupid);
         populate_list(it_id, groupid);
    end;
    but it still didn't work... :(
    So how do I get the values fetched from the database table into the list item?

    For list items you need two values in the record group: one is the shown value and one is the returned value.
    Check out the online help for the populate_list built-in.
    You'll need something like select ename,ename from emp as the record group query.
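    Putting that together with the trigger from the question, a minimal sketch might look like this (untested; it just swaps in the two-column query suggested above and block-qualifies the item name using the block shown in the error message):
    declare
        rg_name varchar2(40) := 'emp_rec';
        status  number;
        groupid recordgroup;
        it_id   item;
    begin
        it_id := Find_Item('EMP.EMPNO');
        -- the record group needs two character columns:
        -- the first is the label shown in the list, the second is the value returned
        groupid := Create_Group_From_Query(rg_name, 'select ename, ename from emp');
        status  := Populate_Group(groupid);
        Populate_List(it_id, groupid);
    end;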

  • Error in creating Duplicate Database in same server..

    Hi,
    I am getting the following error when creating a duplicate database.
    DB Version=10.2.0.4
    $ rman target sys/sys@test nocatalog auxiliary /
    Recovery Manager: Release 10.2.0.4.0 - Production on Mon Sep 28 15:13:32 2009
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    connected to target database: TEST (DBID=1702666620, not open)
    using target database control file instead of recovery catalog
    connected to auxiliary database: CLNTEST (not mounted)
    RMAN> DUPLICATE TARGET DATABASE TO "CLNTEST";
    Starting Duplicate Db at 28-SEP-09
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: sid=156 devtype=DISK
    contents of Memory Script:
    set until scn 597629461;
    set newname for datafile 1 to
    "/u01/data_new/clonetest/system01.dbf";
    set newname for datafile 2 to
    "/u01/data_new/clonetest/undotbs01.dbf";
    set newname for datafile 3 to
    "/u01/data_new/clonetest/sysaux01.dbf";
    set newname for datafile 4 to
    "/u01/data_new/clonetest/users01.dbf";
    set newname for datafile 5 to
    "/u01/data_new/clonetest/example01.dbf";
    set newname for datafile 6 to
    "/u01/data_new/clonetest/undotbs02.dbf";
    set newname for datafile 7 to
    "/u01/data_new/clonetest/alweb1.dbf";
    set newname for datafile 8 to
    "/u01/data_new/clonetest/indx1.dbf";
    restore
    check readonly
    clone database
    executing Memory Script
    executing command: SET until clause
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    Starting restore at 28-SEP-09
    using channel ORA_AUX_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 09/28/2009 15:13:40
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 8 found to restore
    RMAN-06023: no backup or copy of datafile 7 found to restore
    RMAN-06023: no backup or copy of datafile 6 found to restore
    RMAN-06023: no backup or copy of datafile 5 found to restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    RMAN-06023: no backup or copy of datafile 2 found to restore
    RMAN-06023: no backup or copy of datafile 1 found to restore
    RMAN> crosscheck backupset of datafile 1;
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=154 devtype=DISK
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 2;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 3;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 4;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 5;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 6;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 7;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    Thanks,

    Hi,
    RMAN> show all;
    using target database control file instead of recovery catalog
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/ora10g/product/10.2.0/db_1/dbs/snapcf_TEST.f'; # default
    Thanks,

  • Getting error while creating standby database in Oracle 11g

    Dear Gurus
    I am getting the following error while creating a standby database. My database version is Oracle 11g (11.2.0.1) on Red Hat 5.2.
    RMAN> duplicate target database for standby from active database;
    Starting Duplicate Db at 10-MAY-12
    using channel ORA_AUX_DISK_1
    contents of Memory Script:
    backup as copy reuse
    targetfile '/oracle/product/11.2.0/dbhome_1/dbs/orapworcl' auxiliary format
    '/oracle/product/11.2.0/dbhome_1/dbs/orapwstdb' ;
    executing Memory Script
    Starting backup at 10-MAY-12
    using channel ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 05/10/2012 15:44:18
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/10/2012 15:44:18
    ORA-17629: Cannot connect to the remote database server
    ORA-17627: ORA-12528: TNS:listener: all appropriate instances are blocking new connections
    ORA-17629: Cannot connect to the remote database server
    RMAN>
    Regards
    Rabi

    Hello;
    Generally, for the connection to work you need to add something like this to the listener.ora file:
    (SID_DESC =
        (global_dbname = STANDBY.hostname)
        (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
        (sid_name = STANDBY)
    )
    You should stop and start the listener on the standby after adding this. Also, your tnsnames.ora must be correct on both the primary and the standby.
    Also, you need an init (parameter) file for the standby side:
    STANDBY.__db_cache_size=343932928
    STANDBY.__java_pool_size=4194304
    STANDBY.__large_pool_size=4194304
    STANDBY.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    STANDBY.__pga_aggregate_target=281018368
    STANDBY.__sga_target=834666496
    STANDBY.__shared_io_pool_size=0
    STANDBY.__shared_pool_size=469762048
    STANDBY.__streams_pool_size=0
    audit_file_dest='/u01/app/oracle/admin/PRIMARY/adump'
    audit_trail='db'
    compatible='11.2.0.0.0'
    control_files='/u01/app/oracle/oradata/PRIMARY/control01.ctl','/u01/app/oracle/oradata/PRIMARY/control02.ctl'
    db_block_size=8192
    db_domain='SOME.DOMAIN.COM'
    db_flashback_retention_target=2880
    db_name='PRIMARY'
    db_recovery_file_dest_size=2147483648
    db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    diagnostic_dest='/u01/app/oracle'
    dispatchers='(PROTOCOL=TCP) (SERVICE=PRIMARYXDB)'
    log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    open_cursors=300
    pga_aggregate_target=277872640
    processes=150
    remote_login_passwordfile='EXCLUSIVE'
    sga_target=833617920
    undo_tablespace='UNDOTBS1'
    log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
    log_archive_dest_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) '
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=DEFER
    LOG_ARCHIVE_MAX_PROCESSES=30
    FAL_SERVER=STANDBY
    STANDBY_FILE_MANAGEMENT=AUTO
    DB_UNIQUE_NAME=STANDBY
    Finally:
    startup nomount
    Then start RMAN and issue the duplicate command:
    $ORACLE_HOME/bin/rman target=sys/@primary auxiliary=sys/@standby
    RMAN>duplicate target database for standby from active database NOFILENAMECHECK;
    Keys to success:
    1. The new standby is started NOMOUNT with the new password file. (On Oracle 11 you must copy and rename the file from the primary server.)
    2. Hard coded listener on new Standby server.
    3. Correct tnsnames.ora files.
    4. Correct duplicate command.
    Please consider closing some of your old answered questions.
    Best Regards
    mseberg
    Edited by: mseberg on May 10, 2012 7:06 AM

  • SQL*Loader-704: Internal error: Maximum record length must be <= [10000000]

    Hi,
    running SQL*Loader (Release 8.1.7.2.1) causes the error "SQL*Loader-704: Internal error: Maximum record length must be <= [10000000]". This error occurs when SQL*Loader is trying to load several thousand records into a database table. Each record is less than 250 bytes in length.
    Any idea what could cause the problem?
    Thanks in advance!
    Ingo
    And here's an extract from the log file generated by SQL*Loader:
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array: 1360 rows, maximum of 10485760 bytes
    Continuation: none specified
    Path used: Conventional
    Table "SYSTEM"."BASICPROFILE$1", loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect
    Column Name Position Len Term Encl Datatype
    UUID FIRST * O(X07) CHARACTER
    DOMAINID NEXT * O(X07) CHARACTER
    LASTMODIFIED NEXT * O(X07) DATE DD/MM/YYYY HH24:MI:SS
    ANNIVERSARY NEXT * O(X07) CHARACTER
    BIRTHDAY NEXT * O(X07) CHARACTER
    COMPANYNAME NEXT * O(X07) CHARACTER
    DESCRIPTION NEXT * O(X07) CHARACTER
    FIRSTNAME NEXT * O(X07) CHARACTER
    COMPANYNAMETRANSCRIPTION NEXT * O(X07) CHARACTER
    FIRSTNAMETRANSCRIPTION NEXT * O(X07) CHARACTER
    GENDER NEXT * O(X07) CHARACTER
    HOBBIES NEXT * O(X07) CHARACTER
    HONORIFIC NEXT * O(X07) CHARACTER
    JOBTITLE NEXT * O(X07) CHARACTER
    KEYWORDS NEXT * O(X07) CHARACTER
    LASTNAME NEXT * O(X07) CHARACTER
    LASTNAMETRANSCRIPTION NEXT * O(X07) CHARACTER
    NICKNAME NEXT * O(X07) CHARACTER
    PREFERREDLOCALE NEXT * O(X07) CHARACTER
    PREFERREDCURRENCY NEXT * O(X07) CHARACTER
    PROFESSION NEXT * O(X07) CHARACTER
    SECONDLASTNAME NEXT * O(X07) CHARACTER
    SECONDNAME NEXT * O(X07) CHARACTER
    SUFFIX NEXT * O(X07) CHARACTER
    TITLE NEXT * O(X07) CHARACTER
    CONFIRMATION NEXT * O(X07) CHARACTER
    DEFAULTADDRESSID NEXT * O(X07) CHARACTER
    BUSINESSPARTNERNO NEXT * O(X07) CHARACTER
    TYPECODE NEXT * O(X07) CHARACTER
    OCA NEXT * O(X07) CHARACTER
    SQL*Loader-704: Internal error: Maximum record length must be <= [10000000]

    As a second guess, the terminator changes or goes missing at some point in the data file. If you are running on *NIX, try wc -l data_file_name.  This will give a count of the number of lines (delimited by CHR(10) ) that are in the file.  If this is not close to the number you expected, then that is your problem.
    You could also try gradually working through the data file loading 100 records, then 200, then 300 etc. to see where it starts to fail.
    HTH
    John

  • Error while opening the database.--urgent---pls help...

    Hi,
    Got the below error while opening the database.
    Database version: 10.1.0.4.0
    The database is in NOARCHIVELOG mode.
    The database reaches the NOMOUNT and MOUNT states quite easily, but it cannot be opened.
    The error received while opening the database is:
    ORA-00600: internal error code, arguments: [4194], [37], [31], [], [], [], [], []
    ORA-00600: internal error code, arguments: [4194], [37], [31], [], [], [], [], []
    Mon Sep 29 15:32:39 2008
    Errors in file /xxx/udump/aaa_ora_18737.trc:
    ORA-00600: internal error code, arguments: [4194], [37], [31], [], [], [], [], []
    ORA-00600: internal error code, arguments: [4194], [37], [31], [], [], [], [], []
    Mon Sep 29 15:32:41 2008
    DEBUG: Replaying xcb 0xc0000000350c1c18, pmd 0xc0000000353b21f0 for failed op 8
    Doing block recovery for file 2 block 9483
    No block recovery was needed.
    on Sep 29 15:34:01 2008
    Errors in file /xxx/udump/aaa/bdump/aaa_pmon_18723.trc:
    ORA-00600: internal error code, arguments: [4194], [37], [31], [], [], [], [], []
    Mon Sep 29 15:34:04 2008
    Errors in file /xxx/udump/aaa/bdump/aaa_pmon_18723.trc:
    ORA-00600: internal error code, arguments: [4194], [37], [31], [], [], [], [], []
    PMON: terminating instance due to error 472
    Instance terminated by PMON, pid = 18723
    Please help me....................

    Hi,
    This ORA-600 means that a mismatch has been detected between redo records and rollback (undo) records. It is better to log a call with Oracle Support. Secondly, apply the latest patch set to the database.
    I had experienced the same error, but my DB was in open mode, so I created a new undo tablespace, changed the undo_tablespace parameter, and it worked.
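    For reference, the statements involved would look roughly like this (the tablespace name and datafile path are illustrative, not taken from the original post):
    -- create a replacement undo tablespace (name and datafile path are made up)
    CREATE UNDO TABLESPACE undotbs2
      DATAFILE '/u01/oradata/aaa/undotbs02.dbf' SIZE 500M AUTOEXTEND ON;
    -- switch the instance to the new undo tablespace
    ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2' SCOPE=BOTH;
    -- once no active transactions still reference it, the old tablespace can be dropped
    DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;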
    Anand

  • Entity Framework doesn't save new record into database

    Hy,
    I have a problem with saving a new record into the database using Entity Framework.
    When I run the program, everything seems normal, without errors. The program shows the existing, manually added records from the database, and the new one too. But the new record isn't saved into the database after the program has run.
    I've got no idea where the problem is. Here is the code that adds a new record and shows the existing ones.
    Thanks for help!!
    // add new record
    using (var db = new DatabaseEntitiesContext())
    {
        var person = new Table()
        {
            First_Name = "New_FName",
            Second_Name = "New_SName",
            PIN = "4569"
        };
        db.Tables.Add(person);
        db.SaveChanges();
    }
    // show all records
    using (var db = new DatabaseEntitiesContext())
    {
        var selected = from x in db.Tables
                       select x;
        foreach (var table in selected)
            Console.WriteLine("{0}{1}{2}", table.First_Name, table.Second_Name, table.PIN);
    }

    Hi BownieCross;
    If you are using a local database file in your project the following may be the cause.
    From Microsoft Documentation:
    Issue:
    "Every time I test my application and modify data, my changes are gone the next time I run my application."
    Explanation:
    The value of the Copy to Output Directory property is Copy if newer or Copy always. The database in your output folder (the database that's being modified when you test your application) is overwritten every time that you build your project. For more information, see How to: Manage Local Data Files in Your Project.
    Fernando (MCSD)
    If a post answers your question, please click "Mark As Answer" on that post and "Mark as Helpful".
    NOTE: If I ask for code, please provide something that I can drop directly into a project and run (including XAML), or an actual application project. I'm trying to help a lot of people, so I don't have time to figure out weird snippets with undefined objects
    and unknown namespaces.

  • Where is the value of report repository stored in the database?

    Hi all,
    One of our developers is trying to write code that would send the output report (PDF) of a process through mail, typical workflow stuff. But the problem is that we are not supposed to hard-code the value of the report repository from which the report has to be picked up. Can anybody suggest a table in the database from which the value of the report repository can be queried?
    We tried using
    select reportrepositorypt from pswebprofile
    but this doesn't return the correct path. We went to PIA -> PeopleTools -> Web Profile -> Web Profile Configuration, updated the value in the Report Repository field, saved it, and queried again.
    But this didn't solve our problem.
    It still shows the previous value, which is different from the actual report repository path.
    All the servers are on Solaris boxes.
    Please suggest.
    Thanks!

    Right now, I am trying to send a PDF (XML Publisher report) as an email to our vendors (it is their ACH Advice).
    What I have done so far: I created an App Engine with one step to populate a state record with run control information, and a second step that runs XML Publisher and creates the PDF report(s), which are published to the Report Repository and saved to a file on our network. Now I think I just need to create a third step to retrieve the PDF(s) and send them out as emails to our vendors.
    I have found that the PDF information from the Report Repository is stored in the database in the CDM_LIST_VW record (this is a view; the main underlying table is CDM_LIST, but the view contains what I need for now, so I'm going to try using it first). The Report ID is stored in the CONTENTID field, and the report description (the description shown in the Report Repository, which in our case contains the email address) is stored in the CONTENT_DESCR field. So over the next week I am going to try to write the logic to query CDM_LIST_VW for the email address and the Report ID, and then go and grab the file... The hardest part is finding the actual file, but I am hoping that I can also find a URL in that table. I'd pull the file from the network, but XML Publisher writes the files out into numbered folders where all the PDFs are named after the XML report definition (and thus have the same name). I have not tried any of this yet, but am hoping I'm on the right path.
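    Just to make the idea concrete, the lookup I have in mind would start out something like this (the bind variable is illustrative; only the view and column names come from what I described above):
    -- CONTENTID holds the report ID; CONTENT_DESCR holds the description shown in the
    -- Report Repository, which in our case carries the vendor email address
    SELECT contentid, content_descr
      FROM cdm_list_vw
     WHERE contentid = :report_id;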
    I hope this helped answer your question.
    Good Luck!
    Jennifer

  • Image compressed and stored in the database.

    Hi,
    We have a TIFF file stored in the database; it is then compressed using the following interMedia Oracle method and saved back into the database:
    process('compressionFormat=FAX4,maxScale=1696 2200');
    The Oracle database is 8.1.7.4. The TIFF file gets processed correctly when we use a TIFF file generated by the old scanner, but it errors out with TIFF files generated by the latest scanners.
    The errors I get are ORA-29400 and IMG-00704.
    There is a difference in the TIFF versions of the outputs generated by the two scanners:
    the compression option is TIFF modified G3 in the old one,
    while the new one has Lempel-Ziv.
    Also, the TIFF is loaded and saved fine, but the image gets corrupted after the interMedia process method is run.
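    For context, the process call runs inside an anonymous block along these lines (the table name, column name, and key are invented for illustration; only the process() string is the one we actually use):
    declare
        img ordsys.ordimage;
    begin
        select scanned_image into img
          from scanned_docs
         where doc_id = 1
           for update;
        -- compress the image in place, then write it back
        img.process('compressionFormat=FAX4,maxScale=1696 2200');
        update scanned_docs
           set scanned_image = img
         where doc_id = 1;
        commit;
    end;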

    Thank you for your reply. I am glad to find that I did not miss an option. I was aware that I could move my pictures into some other folder, but you have forgotten the solution that I chose: to go back and use ZoomBrowser, which works to access photos in any folder I choose. In addition, it loads promptly. I have only spent a brief time using ImageBrowser, but I don't recall seeing any enhancements over ZoomBrowser. Perhaps if I were attempting to interface with other software, but as a stand-alone product, well, perhaps you could enlighten me as to what makes ImageBrowser EX a better product. Regarding "Accept as Solution": are you implying that Canon restricts their search function to only those responses that the OP marks as accepted?

  • Can Discoverer have link to display documents stored outside the database?

    I posted a message some time ago called "Possible for Discoverer to display BLOB type documents stored in database?" and got a great answer.
    Now our customers are asking whether it is possible, from Discoverer, to link somehow to a file stored outside the database on the Unix file system and have their computer display it. Can anyone tell me if this is possible, please?
    The only thing I've seen in the documentation that may be related is in Oracle Business Intelligence Discoverer Configuration Guide, section 10.6 List of Discoverer user preferences. It says there that Discoverer preference ProtocolList can be set so that Discoverer hyperlinks can be set to use protocols such as telnet, but the default is HTTP, HTTPS, and FTP.
    Thank you in advance if you can help.
    Regards,
    Julie.

    Hi Rod,
    I have tried the second method: "create a Oracle directory pointing to the Unix directory containing the files". I have had success with it, but I'd be grateful if you could advise me if you would have done this the same way as described below:
    I put two Word docs and two text docs called clob_test1.txt, clob_test2.txt, blob_test1.doc, blob_test2 in the Unix directory corresponding to an Oracle directory called 'EIF'. I thought an external table was needed so that Discoverer would have an object to write a query against. So I created a file called lob_test_data.txt with the following contents:
    1,01-JAN-2006,text/plain,clob_test1.txt
    2,02-JAN-2006,text/plain,clob_test2.txt
    3,01-JAN-2006,application/msword,blob_test1.doc
    4,02-JAN-2006,application/msword,blob_test2.
    Then I created an external table using the following DDL:
    CREATE TABLE jum_temp_lob_tab (
      file_id      NUMBER(10),
      date_content DATE,
      mime_type    VARCHAR2(100),
      blob_content BLOB
    )
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY EIF
      ACCESS PARAMETERS
      (
        RECORDS DELIMITED BY NEWLINE
        BADFILE EIF:'lob_tab_%a_%p.bad'
        LOGFILE EIF:'lob_tab_%a_%p.log'
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
        (
          file_id       CHAR(10),
          date_content  CHAR(11) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          mime_type     CHAR(100),
          blob_filename CHAR(100)
        )
        COLUMN TRANSFORMS (blob_content FROM LOBFILE (blob_filename) FROM (EIF) BLOB)
      )
      LOCATION ('lob_test_data.txt')
    )
    PARALLEL 2
    REJECT LIMIT UNLIMITED;
    Then I created a Discoverer End User Layer folder against this external table, and used exactly the same technique as we did for downloading the BLOB from the database table (creating a new folder item containing a URL that calls a database procedure, which in turn calls the Oracle code to download the doc). This worked, but sometimes my PC didn't seem to know that the Word docs were Word docs and that it needed to launch Word. Other times it did manage to do this OK. It always displayed the two .txt files as HTML docs.
    Just wondered if you'd be good enough to critique this approach.
    Thank you, Julie.
