Error on Report when using timestamp functions

I have been trying to generate a report with the sales pipeline for the following 12 months. The filter I have used is:
(Opportunity."Close Date" >= TIMESTAMPADD(SQL_TSI_DAY,Current_Date,365)
but there seems to be something wrong with it. Can anybody help me?

The error shown is:
S1000. Code: 10058. [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: 42884 code: -440 message: [IBM][CLI Driver][DB2/AIX64] SQL0440N No authorized routine named "TIMESTAMP_ISO" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884. [nQSError: 16002] Cannot obtain number of columns for the query result. (S1000)
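
The DB2 error suggests the TIMESTAMPADD arguments are in the wrong order: the function expects the interval type first, then the integer amount, and the date/timestamp expression last, so here the date is passed where the integer count belongs. Assuming the column from the post, a filter covering the following 12 months would look something like this (the BETWEEN form is only one way to express it):

Opportunity."Close Date" BETWEEN CURRENT_DATE AND TIMESTAMPADD(SQL_TSI_DAY, 365, CURRENT_DATE)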

Similar Messages

  • Error ORA-06502 When using function REPLACE in PL/SQL

    Hi,
    I have a PL/SQL procedure which gives the error ORA-06502 when the string value is quite long (I noticed this with a string about 9K in length).
    The variable var_a is of type CLOB,
    and the assignment statement where it gives the error is
    var_a := REPLACE(var_a, '^', ''',''');
    Can anyone please help!
    Thanks

    Even then, that shouldn't happen:
    SQL> select overload, position, argument_name, data_type, in_out
      2  from all_arguments
      3  where package_name = 'STANDARD'
      4  and object_name = 'LPAD'
      5  order by 1,2
      6  /
    OVERLOAD   POSITION ARGUMENT_NAME                  DATA_TYPE                      IN_OUT
    1                 0                                VARCHAR2                       OUT
    1                 1 STR1                           VARCHAR2                       IN
    1                 2 LEN                            BINARY_INTEGER                 IN
    1                 3 PAD                            VARCHAR2                       IN
    2                 0                                VARCHAR2                       OUT
    2                 1 STR1                           VARCHAR2                       IN
    2                 2 LEN                            BINARY_INTEGER                 IN
    3                 0                                CLOB                           OUT
    3                 1 STR1                           CLOB                           IN
    3                 2 LEN                            NUMBER                         IN
    3                 3 PAD                            CLOB                           IN
    4                 0                                CLOB                           OUT
    4                 1 STR1                           CLOB                           IN
    4                 2 LEN                            NUMBER                         IN
    I wonder what happened?
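
    Since the question is about REPLACE rather than LPAD, the same data-dictionary check can be run for REPLACE itself (assuming it is likewise declared in package STANDARD; the query is valid either way):

    select overload, position, argument_name, data_type, in_out
    from all_arguments
    where package_name = 'STANDARD'
    and object_name = 'REPLACE'
    order by 1, 2;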

  • 10.9.4: Graphical errors when using OSX functions like Mission Control, Spotlight

    Device: MBP 15" Retina 2.3 GHz Intel i7
    OSX: 10.9.4
    Monitor: 27" Thunderbolt Display
    Issue: When using OS X functions like Mission Control (Exposé) and Spotlight, I frequently get graphical errors that remain on the desktop unless I restart the computer. I can resolve the issue by adding a new desktop and dragging my windows over.
    This happens almost every day.

    You have installed all these non-Apple system modifications:
    Kernel Extensions:
      [loaded] at.obdev.nke.LittleSnitch (4052 - SDK 10.8)
      [loaded] com.nvidia.CUDA (1.1.0)
      [loaded] com.promise.driver.stex (5.2.7 - SDK 10.9)
      [not loaded] com.silabs.driver.CP210xVCPDriver (3.0.0d1)
      [not loaded] com.silabs.driver.CP210xVCPDriver64 (3.0.0d1)
      [not loaded] com.wacom.kext.pentablet (5.2.6)
      [not loaded] com.wacom.kext.wacomtablet (6.3.8 - SDK 10.9)
    Little Snitch has been a bad actor for some users.
    You do not have any CUDA Hardware, so the CUDA Driver is not needed.
    I do not know what siLabs driver thinks it may be driving.
    You left the door open:
      Gatekeeper:
      Anywhere

  • When using TODATE function MDX query is not correctly generated

    Essbase 9.3.1.2 and OBIEE 10.1.3.4.1.
    When using the TODATE function, the MDX query is not generated correctly.
    This leads to unexpected values not only in cumulative columns of the report (generated with TODATE); other columns (calculated with the AGO function or read directly from the cube) also have incorrect values.
    The problem occurs when you filter on a column that is not in the select list. If you filter on just one level of dimension, results are fine. You can filter on multiple dimensions as long as you filter on just one level of each dimension.
    If you filter on two or more levels of one dimension, then the results are not correct. In some cases the results for the TODATE column are all zeros, in some cases it is a random value returned by Essbase (the same random value for all rows of that column), and in some cases the BI Server returns an error:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. Essbase Error: Network error [10054]: Cannot Send Data (HY000).
    Here is generated MDX code:
    With
    set [Grupe proizvoda2] as '{[Grupe proizvoda].[N4]}'
    set [Grupe proizvoda4] as 'Generate([Grupe proizvoda2], Descendants([Grupe proizvoda].currentmember, [Grupe proizvoda].Generations(4), leaves))'
    set [Segmentacija2] as '{[Segmentacija].[RETAIL]}'
    set [Segmentacija4] as 'Filter(Generate({[Segmentacija2]}, Descendants([Segmentacija].currentmember, [Segmentacija].Generations(4),SELF), ALL), ([Segmentacija].CurrentMember IS [Segmentacija].[AFFLUENT]))'
    set [Vrijeme3] as '{[Vrijeme].[MJESEC_4_2009]}'
    member [Segmentacija].[SegmentacijaCustomGroup]as 'Sum([Segmentacija4])', SOLVE_ORDER = AGGREGATION_SOLVEORDER
    member [Accounts].[MS1] as '(ParallelPeriod([Vrijeme].[Gen3,Vrijeme],2,[Vrijeme].currentmember), [Accounts].[Trosak kapitala])'
    member [Accounts].[MS2] as '(ParallelPeriod([Vrijeme].[Gen3,Vrijeme],1,[Vrijeme].currentmember), [Accounts].[Trosak kapitala])'
    member [Accounts].[MS3] as 'AGGREGATE({PeriodsToDate([Vrijeme].[Gen2,Vrijeme],[Vrijeme].currentmember)}, [Accounts].[Trosak kapitala])'
    select
    { [Accounts].[Trosak kapitala],
    [Accounts].[MS1],
    [Accounts].[MS2],
    [Accounts].[MS3]
    } on columns,
    NON EMPTY {crossjoin ({[Grupe proizvoda4]},{[Vrijeme3]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
    from [NISE.NISE]
    where ([Segmentacija].[SegmentacijaCustomGroup])
    If you remove the part with the TODATE function, the results are fine. If you leave the TODATE function in, OBIEE returns the error mentioned above. If you manually modify the SOLVE_ORDER variable and set its value to, for example, 100 instead of AGGREGATION_SOLVEORDER, the results are OK.
    In all cases when this variable was modified in the generated MDX and the query was executed manually on Essbase, the results were OK. This variable seems to be the likely cause of the problem.
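
    For clarity, the manually edited member definition that produced correct results looks like this (100 is only an example value, as noted above):

    member [Segmentacija].[SegmentacijaCustomGroup] as 'Sum([Segmentacija4])', SOLVE_ORDER = 100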

    Hi,
    Version is
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE    10.2.0.5.0      Production
    TNS for 64-bit Windows: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production
    Sorry, in my last post I forgot to mention that I already created a function-based index, but it is still not being used because there is a UNIQUE constraint on that column.
    Thanks

  • Can not print report when using HTML or DHTML

    Cannot print a report when using HTML or DHTML. When I open the report and click the print icon I get a small blank dialog box and then nothing. If I change to ActiveX it works and I can print.
    Any Ideas?
    Thanks
    Jeff

    Hi,
    I have not come across this situation before. Did you apply any fix packs? Try to check whether it's an issue that can be fixed by applying a fix pack (check the release notes of the fix pack and see if they mention anything about this).
    Thanks,
    SK.

  • Error:Type conflict when calling a function module RFC_ERROR_SYSTEM_Failure

    Hi Experts,
    When I run my application in the Portal, I am getting the following error:
    Type conflict when calling a function module., error key: RFC_ERROR_SYSTEM_FAILURE
    When I execute the BAPI directly, it runs fine.
    My BAPI structure:
    Import Parameters
    IM_MAT_Search --> ZPTIP_MAT --> Import Parameters
    Tables
    IT_INFO_REC --> ZMM_GET_ITEM --> Output Parameters
    When I import the model, I get the structure like this:
    BAPI_Name > ZMM_BAPI_Input> IM_MAT_Search(respective Parameters) , Output (respective Tables and their parameters)
                        > ZMM_Input1> Parameters
    This is how I am executing it in Web Dynpro Java:
    // Create the BAPI input element and bind it to the context node
    Zmm_Bapi_Ptip_Search_Input eleInput = new Zmm_Bapi_Ptip_Search_Input();
    wdContext.nodeZmm_Bapi_Ptip_Search_Input().bind(eleInput);
    // Create the import structure and set the search parameter
    Zptip_Asset eleInputAsset = new Zptip_Asset();
    eleInputAsset.setSearch("ACRS");
    wdContext.nodeZptip_Asset().bind(eleInputAsset);
    eleInput.setIm_Ast_Search(eleInputAsset);
    wdContext.nodeZmm_Bapi_Ptip_Search_Input().bind(eleInput);
    // Execute the RFC and refresh the output node
    wdContext.nodeZmm_Bapi_Ptip_Search_Input().currentZmm_Bapi_Ptip_Search_InputElement().modelObject().execute();
    wdContext.nodeOutput().invalidate();
    Please let me know how to resolve this.
    Thanks in advance.
    Regards,
    Palani

    Hi David,
    I checked the parameter of setIm_Ast_Search; it is of type Zptip_Asset.
    Hi Saleem,
    When I changed it as suggested, I still get the type conflict error:
    Type conflict when calling a function module., error key: RFC_ERROR_SYSTEM_FAILURE
    Please let me know what can be done to solve the problem.
    My BAPI Structure when imported as model
    SearchBAPI
    --> ZMM_BAPI_SEARCH_INPUT
    > IM_AST_SEARCH(zPTIP_ASSET)
    >Zptip_Asset
    >Search (Parameter)
    > OutPut(ZMM_BAPI_Search_Output)
    >IT_Asser_Rec(ZMM_Asset)
    >ZMM_Asset
    >TXT100 (output Parameter)
    --> ZMM_BAPI_SEARACH_OUTPUT
    --> ZPTIP_ASSET
    >Search (Parameter)
    Thanks & Regards,
    Palani
    Edited by: Palani Appan on Nov 11, 2009 5:31 PM

  • Error 1074396120 appears when using IMAQ Copy

    Error 1074396120 appears when using IMAQ Copy. When I use it in Vision it works, but when I try to execute it in LabVIEW it shows the errors "image not found" and "not a valid image". Please help me with this.

    Have a look
    Thanks and Regards
    Himanshu Goyal | LabVIEW Engineer- Power System Automation
    Values that steer us ahead: Passion | Innovation | Ambition | Diligence | Teamwork
    It Only gets BETTER!!!

  • Hi there, Why Do I Receive Error Code -1073807253 When Using VISA

    Error Code -1073807253

    Hi ahmed,
    you can attach pictures directly in the forum...
    If you used an error indicator you could read an explanation of the error. It's a "framing error", which sometimes happens when using serial connections...
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • Get ORA-01031: insufficient privileges error, but only when using dbstart.

    I am getting an ORA-01031: insufficient privileges error, but only when using dbstart. The listener starts but not the database. How come I can start it from the SQL prompt but not from the dbstart script as the oracle user?
    [oracle@mallard bin]$ ./dbstart
    Processing Database instance "gf44": log file /prod/oracle/10/startup.log
    [oracle@mallard bin]$
    Log file:
    Wed Aug 20 10:15:02 CDT 2008
    SQL*Plus: Release 10.2.0.1.0 - Production on Wed Aug 20 10:15:02 2008
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    SQL> ERROR:
    ORA-01031: insufficient privileges
    SQL> ORA-01031: insufficient privileges
    SQL>
    /prod/oracle/10/bin/dbstart: Database instance "gf44" warm started.
    oratab file:
    gf44:/prod/oracle/10:Y
    dbstart file section:
    # See if it is a V6 or V7 database
    VERSION=undef
    if [ -f $ORACLE_HOME/bin/sqldba ] ; then
        SQLDBA=svrmgrl
        VERSION=`$ORACLE_HOME/bin/sqldba command=exit | awk '
            /SQL\*DBA: (Release|Version)/ {split($3, V, ".") ;
            print V[1]}'`
        case $VERSION in
            "6") ;;
            *) VERSION="internal" ;;
        esac
    else
        if [ -f $ORACLE_HOME/bin/svrmgrl ] ; then
            SQLDBA=svrmgrl
            VERSION="internal"
        else
            SQLDBA="sqlplus /nolog"
        fi
    fi
    Permissions of file:
    [oracle@mallard bin]$ ls -la dbstart
    -rwxrwxr-x 1 oracle oinstall 10407 Aug 19 12:27 dbstart
    [oracle@mallard bin]$
    User permissions:
    [root@mallard 10]# id oracle
    uid=503(oracle) gid=503(oinstall) groups=503(oinstall),504(dba)
    [root@mallard 10]#
    I can start the listener manually using "./lsnrctl start" and start the database manually from the SQL prompt using "SQL> startup" (as sysdba) with no problems. This only happens when using the dbstart script. I am logged in as the oracle user and all environment variables are set.
    Thank you for any help you could provide.
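
    For reference, on 10gR2 dbstart essentially does the equivalent of the following for each instance listed in oratab, so it is the OS-authenticated connect that has to succeed when the script runs (a simplified sketch; the real script adds logging and version checks):

    $ sqlplus /nolog
    SQL> connect / as sysdba
    SQL> startup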

    I have the same problem, but I don't want to insert this string:
    Connect sys/{password} as sysdba
    I have deployed Oracle 10g on SunOS:
    $ uname -a
    SunOS DB02 5.10 Generic_141444-09 sun4v sparc SUNW,Sun-Blade-T6320
    I can connect with sys/password, but I can't login with
    $ sqlplus /nolog
    SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jan 7 15:19:50 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    SQL> connect / as sysdba
    ERROR:
    ORA-01031: insufficient privileges
    The startup and dbshut scripts don't work.
    Can someone help me?
    Thanks,
    Regards.
    Lain

  • What is the best way to display errors to users when using JSPs?

              Hello,
              Could someone suggest the best way to display errors to users when using JSPs?
              Many thanks in advance.
              Rino
              

              Thanks for the code snippet!
              Rino
              "Deepak Vohra" <[email protected]> wrote:
              >
              >
              >The 'errorPage' attribute of the 'page' directive forwards uncaught run-time
              >exceptions
              >to an error processing page. For example:
              >
              ><%@ page errorPage="error.jsp" %>
              >
              >redirects the browser to the JSP page error.jsp if an uncaught exception
              >is encountered.
              >
              >
              >Within error.jsp, indicate that it is an error-processing page, via the
              >directive:
              >
              >
              >
              ><%@ page isErrorPage="true" %>
              ><%@ page import="java.io.ByteArrayOutputStream, java.io.PrintStream" %>
              >
              >The Throwable object describing the exception may be accessed within
              >the error
              >page via the 'exception' implicit object.
              >
              >
              ><% if (exception != null) { %>
              ><p> An exception was thrown: <b> <%= exception %>
              >
              ><p> With the following stack trace:
              ><pre>
              >
              ><%
              > ByteArrayOutputStream ostr = new ByteArrayOutputStream();
              > exception.printStackTrace(new PrintStream(ostr));
              > out.print(ostr);
              >%>
              ></pre>
              ><% } %>
              >
              >
              >
              >"Rino Srivastava" <[email protected]> wrote:
              >>
              >>Hello,
              >>
              >>Could someone suggest me the best way to display errors to users when
              >>using JSPs?
              >>
              >>Many thanks in advance.
              >>
              >>Rino
              >
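
              As an alternative (or complement) to the page-level errorPage directive shown above, uncaught exceptions can also be mapped to an error page for the whole web application in the deployment descriptor; a minimal web.xml fragment (reusing the error.jsp name from the example above) would look something like this:

              <error-page>
                  <exception-type>java.lang.Throwable</exception-type>
                  <location>/error.jsp</location>
              </error-page>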
              

  • Error when using CONVERT_OTF function module

    Dear Friends,
    In many of our reports we use the function module CONVERT_OTF to send the output as a mail to the users.
    But after the latest patch update we have come across a new problem. The mail is triggered and we can also see the icon or
    the file in the mail, but when we open the file we get the message "THERE WAS AN ERROR OPENING THIS
    DOCUMENT. THE FILE IS DAMAGED AND COULD NOT BE REPAIRED."
    This program had been working fine for the last 2 years, but since applying the patch in August 2009 we have been facing
    this problem. None of our mails work.
    Thanks
    Vamshi
    Edited by: Rob Burbank on Sep 7, 2009 3:12 PM

    Dear Clemens,
    Thank you for the immediate response. I have checked the program in debug mode. When we pass the data to CONVERT_OTF, the output tables OTF and LINES of the function module show the data in some Unicode format.
    The output of the FM consists entirely of special characters, and this started after applying the latest patch.
    We are using Release 700 Level 19 and Service Package SAPKA70019.
    Thanks and Regards
    Vamshi Sreerangam
    Edited by: vamshi sreerangam on Sep 8, 2009 5:54 AM

  • Problem modifying the connection details in a Report when using Weblogic 12

    Hi
    I have a j2ee application that uses the Java Reporting Component (JRC). At runtime, the code programmatically changes the connection type and schema name of a crystal report before running it. The connection that was used when designing the report is replaced with new JNDI parameters pointing to a Weblogic/Oracle datasource.
    The application works perfectly when using WebLogic 11, but the same code and report fail when deployed to WebLogic 12.
    I used version 12.2.207.916 of the JRC, and updating to the most current version I could find (12.2.217) did not solve the problem.
    The code snippet below shows how the connection and schema name are replaced for each of the tables in the report (not all the code is shown here):
            PropertyBag propertyBag = new PropertyBag();
            propertyBag.put("Database DLL", "crdb_jdbc.dll");
            propertyBag.put("JNDI Datasource Name", jndiName);
            propertyBag.put("Initial Context", "");
            while (tableList.hasNext()) {
                ITable table = tableList.next();
                ITable tableNew = (ITable) table.clone(true);
                IConnectionInfo connectionInfo = table.getConnectionInfo();
                connectionInfo.setAttributes(propertyBag);
                connectionInfo.setKind(ConnectionInfoKind.SQL);
                tableNew.setQualifiedName(newQualifier + "." + table.getName());
                tableNew.setConnectionInfo(connectionInfo);
                dbController.setTableLocation(table, tableNew);
            }
            // ... remaining code not shown ...
    The setTableLocation() function throws the following exception ...
    2014-05-13 16:46:27,173 ERROR [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)']  JRCCommunicationAdapter         detected an exception: Unexpected database connector error
                    at com.crystaldecisions.reports.queryengine.Table.u7(SourceFile:2409)
                    at com.crystaldecisions.reports.dataengine.datafoundation.AddDatabaseTableCommand.new(SourceFile:529)
                    at com.crystaldecisions.reports.common.CommandManager.a(SourceFile:71)
                    at com.crystaldecisions.reports.common.Document.a(SourceFile:203)
                    at com.businessobjects.reports.sdk.requesthandler.f.a(SourceFile:175)
                    at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.byte(SourceFile:1079)
                    at com.businessobjects.reports.sdk.JRCCommunicationAdapter.do(SourceFile:1167)
                    at com.businessobjects.reports.sdk.JRCCommunicationAdapter.if(SourceFile:661)
                    at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(SourceFile:167)
                    at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.a(SourceFile:529)
                    at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.call(SourceFile:527)
                    at com.crystaldecisions.reports.common.ThreadGuard.syncExecute(SourceFile:102)
                    at com.businessobjects.reports.sdk.JRCCommunicationAdapter.for(SourceFile:525)
                    at com.businessobjects.reports.sdk.JRCCommunicationAdapter.int(SourceFile:424)
                    at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(SourceFile:352)
                    at com.businessobjects.sdk.erom.jrc.a.a(SourceFile:54)
                    at com.businessobjects.sdk.erom.jrc.a.execute(SourceFile:67)
                    at com.crystaldecisions.proxy.remoteagent.RemoteAgent$a.execute(SourceFile:716)
                    at com.crystaldecisions.proxy.remoteagent.CommunicationChannel.a(SourceFile:125)
                    at com.crystaldecisions.proxy.remoteagent.RemoteAgent.a(SourceFile:537)
                    at com.crystaldecisions.sdk.occa.report.application.ds.a(SourceFile:186)
                    at com.crystaldecisions.sdk.occa.report.application.an.a(SourceFile:108)
                    at com.crystaldecisions.sdk.occa.report.application.b0.if(SourceFile:148)
                    at com.crystaldecisions.sdk.occa.report.application.b0.b(SourceFile:95)
                    at com.crystaldecisions.sdk.occa.report.application.bb.int(SourceFile:96)
                    at com.crystaldecisions.proxy.remoteagent.UndoUnitBase.performDo(SourceFile:151)
                    at com.crystaldecisions.proxy.remoteagent.UndoUnitBase.a(SourceFile:106)
                    at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(SourceFile:2159)
                    at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(SourceFile:543)
                    at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(SourceFile:3898)
                    at com.crystaldecisions.sdk.occa.report.application.DatabaseController.setTableLocation(SourceFile:2906)
                    at com.systest.reporting.engine.crystal.CrystalReportEngine.replaceConnection(CrystalReportEngine.java:523)
                    at com.systest.reporting.engine.crystal.CrystalReportEngine.changeDataSource(CrystalReportEngine.java:449)
                    at com.systest.CrystalReportPane.setReportDataSourceDetails(CrystalReportPane.java:170)
                    at com.systest.CrystalReportPane.commandLoad(CrystalReportPane.java:136)
                    at com.systest.ReportRunner.CrystalReport.Load(CrystalReport.java:401)
                    at com.systest.ReportRunner.SaveReportToFile(ReportRunner.java:1385)
    Any idea what I can do to fix this?
    Thanks in advance!

    The last reference in any documentation regarding supported WebLogic versions is 10.3.x. It may very well be that things worked in WebLogic 11, but as versions go by the differences get bigger and eventually the app stops working.
    I'll ping the Program Manager for definitive info and future support. Once I have the info, I'll update this Discussion.
    - Ludek
    Senior Support Engineer AGS Product Support, Global Support Center Canada
    Follow us on Twitter

  • Exception report when using tomcat 5 with JDBC

    I followed this guide to set up JDBC with my Tomcat 5:
    http://jakarta.apache.org/tomcat/tomcat-5.0-doc/jndi-datasource-examples-howto.html#Database%20Connection%20Pool%20(DBCP)%20Configurations
    but I seem to get this error:
    exception
    javax.servlet.ServletException: Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null'"
         org.apache.jasper.runtime.PageContextImpl.doHandlePageException(PageContextImpl.java:846)
         org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:779)
         org.apache.jsp.SID.test_jsp._jspService(test_jsp.java:81)
         org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:94)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:324)
         org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:292)
         org.apache.jasper.servlet.JspServlet.service(JspServlet.java:236)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    root cause
    javax.servlet.jsp.JspException: Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null'"
         org.apache.taglibs.standard.tag.common.sql.QueryTagSupport.getConnection(QueryTagSupport.java:276)
         org.apache.taglibs.standard.tag.common.sql.QueryTagSupport.doStartTag(QueryTagSupport.java:159)
         org.apache.jsp.SID.test_jsp._jspx_meth_sql_query_0(test_jsp.java:100)
         org.apache.jsp.SID.test_jsp._jspService(test_jsp.java:58)
         org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:94)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:324)
         org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:292)
         org.apache.jasper.servlet.JspServlet.service(JspServlet.java:236)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    I can compile *.java files and run them when they need the JDBC driver to interact with the MySQL 5 database.
    This is my server.xml:
    <!-- Example Server Configuration File -->
    <!-- Note that component elements are nested corresponding to their
    parent-child relationships with each other -->
    <!-- A "Server" is a singleton element that represents the entire JVM,
    which may contain one or more "Service" instances. The Server
    listens for a shutdown command on the indicated port.
    Note: A "Server" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <Server port="8005" shutdown="SHUTDOWN" debug="0">
    <!-- Comment these entries out to disable JMX MBeans support -->
    <!-- You may also configure custom components (e.g. Valves/Realms) by
    including your own mbean-descriptor file(s), and setting the
    "descriptors" attribute to point to a ';' seperated list of paths
    (in the ClassLoader sense) of files to add to the default list.
    e.g. descriptors="/com/myfirm/mypackage/mbean-descriptor.xml"
    -->
    <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"
    debug="0"/>
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
    debug="0"/>
    <!-- Global JNDI resources -->
    <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by
    UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
    type="org.apache.catalina.UserDatabase"
    description="User database that can be updated and saved">
    </Resource>
    <ResourceParams name="UserDatabase">
    <parameter>
    <name>factory</name>
    <value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
    </parameter>
    <parameter>
    <name>pathname</name>
    <value>conf/tomcat-users.xml</value>
    </parameter>
    </ResourceParams>
    </GlobalNamingResources>
    <!-- A "Service" is a collection of one or more "Connectors" that share
    a single "Container" (and therefore the web applications visible
    within that Container). Normally, that Container is an "Engine",
    but this is not required.
    Note: A "Service" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <!-- Define the Tomcat Stand-Alone Service -->
    <Service name="Catalina">
    <!-- A "Connector" represents an endpoint by which requests are received
    and responses are returned. Each Connector passes requests on to the
    associated "Container" (normally an Engine) for processing.
    By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
    You can also enable an SSL HTTP/1.1 Connector on port 8443 by
    following the instructions below and uncommenting the second Connector
    entry. SSL support requires the following steps (see the SSL Config
    HOWTO in the Tomcat 5 documentation bundle for more detailed
    instructions):
    * If your JDK version 1.3 or prior, download and install JSSE 1.0.2 or
    later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
    * Execute:
    %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
    $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA (Unix)
    with a password value of "changeit" for both the certificate and
    the keystore itself.
    By default, DNS lookups are enabled when a web application calls
    request.getRemoteHost(). This can have an adverse impact on
    performance, so you can disable it by setting the
    "enableLookups" attribute to "false". When DNS lookups are disabled,
    request.getRemoteHost() will return the String version of the
    IP address of the remote client.
    -->
    <!-- Define a non-SSL Coyote HTTP/1.1 Connector on port 8080 -->
    <Connector port="8080"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" redirectPort="8443" acceptCount="100"
    debug="0" connectionTimeout="20000"
    disableUploadTimeout="true" />
    <!-- Note : To disable connection timeouts, set connectionTimeout value
    to 0 -->
         <!-- Note : To use gzip compression you could set the following properties :
                   compression="on"
                   compressionMinSize="2048"
                   noCompressionUserAgents="gozilla, traviata"
                   compressableMimeType="text/html,text/xml"
         -->
    <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
    <!--
    <Connector port="8443"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" disableUploadTimeout="true"
    acceptCount="100" debug="0" scheme="https" secure="true"
    clientAuth="false" sslProtocol="TLS" />
    -->
    <!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->
    <Connector port="8009"
    enableLookups="false" redirectPort="8443" debug="0"
    protocol="AJP/1.3" />
    <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
    <!-- See proxy documentation for more information about using this. -->
    <!--
    <Connector port="8082"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false"
    acceptCount="100" debug="0" connectionTimeout="20000"
    proxyPort="80" disableUploadTimeout="true" />
    -->
    <!-- An Engine represents the entry point (within Catalina) that processes
    every request. The Engine implementation for Tomcat stand alone
    analyzes the HTTP headers included with the request, and passes them
    on to the appropriate Host (virtual host). -->
    <!-- You should set jvmRoute to support load-balancing via JK/JK2 ie :
    <Engine name="Standalone" defaultHost="localhost" debug="0" jvmRoute="jvm1">
    -->
    <!-- Define the top level container in our container hierarchy -->
    <Engine name="Catalina" defaultHost="localhost" debug="0">
    <!-- The request dumper valve dumps useful debugging information about
    the request headers and cookies that were received, and the response
    headers and cookies that were sent, for all requests received by
    this instance of Tomcat. If you care only about requests to a
    particular virtual host, or a particular application, nest this
    element inside the corresponding <Host> or <Context> entry instead.
    For a similar mechanism that is portable to all Servlet 2.4
    containers, check out the "RequestDumperFilter" Filter in the
    example application (the source for this filter may be found in
    "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
    Request dumping is disabled by default. Uncomment the following
    element to enable it. -->
    <!--
    <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
    -->
    <!-- Global logger unless overridden at lower levels -->
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="catalina_log." suffix=".txt"
    timestamp="true"/>
    <!-- Because this Realm is here, an instance will be shared globally -->
    <!-- This Realm uses the UserDatabase configured in the global JNDI
    resources under the key "UserDatabase". Any edits
    that are performed against this UserDatabase are immediately
    available for use by the Realm. -->
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
    debug="0" resourceName="UserDatabase"/>
    <!-- Comment out the old realm but leave here for now in case we
    need to go back quickly -->
    <!--
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    -->
    <!-- Replace the above Realm with one of the following to get a Realm
    stored in a database and accessed via JDBC -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="org.gjt.mm.mysql.Driver"
    connectionURL="jdbc:mysql://localhost/authority"
    connectionName="test" connectionPassword="test"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="oracle.jdbc.driver.OracleDriver"
    connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
    connectionName="scott" connectionPassword="tiger"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="sun.jdbc.odbc.JdbcOdbcDriver"
    connectionURL="jdbc:odbc:CATALINA"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!-- Define the default virtual host
    Note: XML Schema validation will not work with Xerces 2.2.
    -->
    <Host name="localhost" debug="0" appBase="webapps"
    unpackWARs="true" autoDeploy="true"
    xmlValidation="false" xmlNamespaceAware="false">
    <!-- Defines a cluster for this node,
    By defining this element, means that every manager will be changed.
    So when running a cluster, only make sure that you have webapps in there
    that need to be clustered and remove the other ones.
    A cluster has the following parameters:
    className = the fully qualified name of the cluster class
    name = a descriptive name for your cluster, can be anything
    debug = the debug level, higher means more output
    mcastAddr = the multicast address, has to be the same for all the nodes
    mcastPort = the multicast port, has to be the same for all the nodes
    mcastBindAddr = bind the multicast socket to a specific address
    mcastTTL = the multicast TTL if you want to limit your broadcast
    mcastSoTimeout = the multicast readtimeout
    mcastFrequency = the number of milliseconds in between sending a "I'm alive" heartbeat
    mcastDropTime = the number a milliseconds before a node is considered "dead" if no heartbeat is received
    tcpThreadCount = the number of threads to handle incoming replication requests, optimal would be the same amount of threads as nodes
    tcpListenAddress = the listen address (bind address) for TCP cluster request on this host,
    in case of multiple ethernet cards.
    auto means that address becomes
    InetAddress.getLocalHost().getHostAddress()
    tcpListenPort = the tcp listen port
    tcpSelectorTimeout = the timeout (ms) for the Selector.select() method in case the OS
    has a wakup bug in java.nio. Set to 0 for no timeout
    printToScreen = true means that managers will also print to std.out
    expireSessionsOnShutdown = true means that
    useDirtyFlag = true means that we only replicate a session after setAttribute,removeAttribute has been called.
    false means to replicate the session after each request.
    false means that replication would work for the following piece of code:
    <%
    HashMap map = (HashMap)session.getAttribute("map");
    map.put("key","value");
    %>
    replicationMode = can be either 'pooled', 'synchronous' or 'asynchronous'.
    * Pooled means that the replication happens using several sockets in a synchronous way. Ie, the data gets replicated, then the request return. This is the same as the 'synchronous' setting except it uses a pool of sockets, hence it is multithreaded. This is the fastest and safest configuration. To use this, also increase the nr of tcp threads that you have dealing with replication.
    * Synchronous means that the thread that executes the request, is also the
    thread the replicates the data to the other nodes, and will not return until all
    nodes have received the information.
    * Asynchronous means that there is a specific 'sender' thread for each cluster node,
    so the request thread will queue the replication request into a "smart" queue,
    and then return to the client.
    The "smart" queue is a queue where when a session is added to the queue, and the same session
    already exists in the queue from a previous request, that session will be replaced
    in the queue instead of replicating two requests. This almost never happens, unless there is a
    large network delay.
    -->
    <!--
    When configuring for clustering, you also add in a valve to catch all the requests
    coming in, at the end of the request, the session may or may not be replicated.
    A session is replicated if and only if all the conditions are met:
    1. useDirtyFlag is true or setAttribute or removeAttribute has been called AND
    2. a session exists (has been created)
    3. the request is not trapped by the "filter" attribute
    The filter attribute is to filter out requests that could not modify the session,
    hence we don't replicate the session after the end of this request.
    The filter is negative, ie, anything you put in the filter, you mean to filter out,
    ie, no replication will be done on requests that match one of the filters.
    The filter attribute is delimited by ;, so you can't escape out ; even if you wanted to.
    filter=".*\.gif;.*\.js;" means that we will not replicate the session after requests with the URI
    ending with .gif and .js are intercepted.
    The deployer element can be used to deploy apps cluster wide.
    Currently the deployment only deploys/undeploys to working members in the cluster
    so no WARs are copied upons startup of a broken node.
    The deployer watches a directory (watchDir) for WAR files when watchEnabled="true"
    When a new war file is added the war gets deployed to the local instance,
    and then deployed to the other instances in the cluster.
    When a war file is deleted from the watchDir the war is undeployed locally
    and cluster wide
    -->
    <!--
    <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
    managerClassName="org.apache.catalina.cluster.session.DeltaManager"
    expireSessionsOnShutdown="false"
    useDirtyFlag="true">
    <Membership
    className="org.apache.catalina.cluster.mcast.McastService"
    mcastAddr="228.0.0.4"
    mcastPort="45564"
    mcastFrequency="500"
    mcastDropTime="3000"/>
    <Receiver
    className="org.apache.catalina.cluster.tcp.ReplicationListener"
    tcpListenAddress="auto"
    tcpListenPort="4001"
    tcpSelectorTimeout="100"
    tcpThreadCount="6"/>
    <Sender
    className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
    replicationMode="pooled"/>
    <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
    filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>
    <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
    tempDir="/tmp/war-temp/"
    deployDir="/tmp/war-deploy/"
    watchDir="/tmp/war-listen/"
    watchEnabled="false"/>
    </Cluster>
    -->
    <!-- Normally, users must authenticate themselves to each web app
    individually. Uncomment the following entry if you would like
    a user to be authenticated the first time they encounter a
    resource protected by a security constraint, and then have that
    user identity maintained across all web applications contained
    in this virtual host. -->
    <!--
    <Valve className="org.apache.catalina.authenticator.SingleSignOn"
    debug="0"/>
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.AccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <!-- Logger shared by all Contexts related to this virtual host. By
    default (when using FileLogger), log files are created in the "logs"
    directory relative to $CATALINA_HOME. If you wish, you can specify
    a different directory with the "directory" attribute. Specify either a
    relative (to $CATALINA_HOME) or absolute path to the desired
    directory.-->
    <Logger className="org.apache.catalina.logger.FileLogger"
    directory="logs" prefix="localhost_log." suffix=".txt"
    timestamp="true"/>
              <Context path="/testdb" docBase="APACHE_DIR/htdocs/testdb"
    debug="5" reloadable="true" crossContext="true">
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="localhost_DBTest_log." suffix=".txt"
    timestamp="true"/>
    <Resource name="jdbc/TestDB"
    auth="Container"
    type="javax.sql.DataSource"/>
    <ResourceParams name="jdbc/TestDB">
    <parameter>
    <name>factory</name>
    <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
    </parameter>
    <!-- Maximum number of dB connections in pool. Make sure you
    configure your mysqld max_connections large enough to handle
    all of your db connections. Set to 0 for no limit.
    -->
    <parameter>
    <name>maxActive</name>
    <value>10</value>
    </parameter>
    <!-- Maximum number of idle dB connections to retain in pool.
    Set to 0 for no limit.
    -->
    <parameter>
    <name>maxIdle</name>
    <value>5</value>
    </parameter>
    <!-- Maximum time to wait for a dB connection to become available
    in ms, in this example 10 seconds. An Exception is thrown if
    this timeout is exceeded. Set to -1 to wait indefinitely.
    -->
    <parameter>
    <name>maxWait</name>
    <value>10000</value>
    </parameter>
    <!-- MySQL dB username and password for dB connections -->
    <parameter>
    <name>username</name>
    <value>test</value>
    </parameter>
    <parameter>
    <name>password</name>
    <value>testpwd</value>
    </parameter>
    <!-- Class name for mm.mysql JDBC driver -->
    <parameter>
    <name>driverClassName</name>
    <value>com.mysql.jdbc.Driver</value>
    </parameter>
    <!-- The JDBC connection url for connecting to your MySQL dB.
    The autoReconnect=true argument to the url makes sure that the
    mm.mysql JDBC Driver will automatically reconnect if mysqld closed the
    connection. mysqld by default closes idle connections after 8 hours.
    -->
    <parameter>
    <name>url</name>
    <value>jdbc:mysql://localhost/testdb?autoReconnect=true</value>
    </parameter>
    </ResourceParams>
         </Context>
    </Host>
    </Engine>
    </Service>
    </Server>

    You haven't added a resource reference for your web application that gives the application a local name for the global resource "UserDatabase".
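
    For the jdbc/TestDB datasource specifically, the same idea applies: the web application needs a matching resource reference in its WEB-INF/web.xml so that the JNDI name is visible under java:comp/env. Following the howto linked above, a minimal fragment (description text assumed) would look like this:

    <resource-ref>
        <description>MySQL test datasource</description>
        <res-ref-name>jdbc/TestDB</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
    </resource-ref>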

  • How can I prevent a memory overflow error from occurring when using 3D contour plots?

    After displaying a single contour plot a number of times, LabVIEW crashes and reports a memory overflow error. I only load a single 2D array, never storing previous ones. I always index the data for contour plot 0, and don't explicitly store multiple 2D arrays in any buffer, such as appending data to a shift register. There appears to be some sort of memory leak in LabVIEW when using this feature. Is there a programmatic way to flush the stored data in a contour plot prior to displaying the next 2D array of data?

    Hello MicTomReiSr,
    Is this in the LabVIEW development environment?  Are you making changes to the code and then running the VI?  Does the plot contain default data? Is the indicator cleared when the program stops?  The Plot Clean Data method may be what you're looking for.
    Remember that the development environment maintains an undo history of the changes you've made, and copies both the block diagram and FP contents each time.  If you have a lot of data displayed on the front panel you end up copying it multiple times.  This isn't a problem unless you're actively editing the VI in question.
    If you do need to edit and run the VI at the same time (perfectly reasonable, although you might want to consider a 64-bit installation or more RAM if you commonly work with resource-intensive data types like 3D plots) try reducing the Undo history length (Tools>>Options), although be aware that you won't be able to back up as many steps.
    Regards,
    Tom L.

  • OpenCV SEGFAULT when using imshow functions

    Hello,
    recently, when I run my OpenCV application on Arch Linux, I get a segfault when using OpenCV's imshow() function for displaying the video output. This still worked a few weeks ago and I have not changed my code since then.
    The error is related to GTK+ which causes the segfault after OpenCV uses the gtk_init() function.
    This is the stacktrace:
    #0  0x00000000006328a0 in signal ()
    #1  0x00007ffff1dff264 in ?? () from /usr/lib/libgtk-x11-2.0.so.0
    #2  0x00007ffff1524427 in g_option_context_parse ()
       from /usr/lib/libglib-2.0.so.0
    #3  0x00007ffff1dff80e in gtk_parse_args ()
       from /usr/lib/libgtk-x11-2.0.so.0
    #4  0x00007ffff1dff869 in gtk_init_check ()
       from /usr/lib/libgtk-x11-2.0.so.0
    #5  0x00007ffff1dff899 in gtk_init () from /usr/lib/libgtk-x11-2.0.so.0
    #6  0x00007ffff6a1fa38 in cvInitSystem ()
       from /usr/lib/libopencv_highgui.so.2.4
    #7  0x00007ffff6a20163 in cvNamedWindow ()
       from /usr/lib/libopencv_highgui.so.2.4
    #8  0x00007ffff6a20cfd in cvShowImage ()
       from /usr/lib/libopencv_highgui.so.2.4
    #9  0x00007ffff6a1c7f7 in cv::imshow(std::string const&, cv::_InputArray const&) () from /usr/lib/libopencv_highgui.so.2.4
    #10 0x00000000004187e0 in StereoMatrix::displayImages (this=0x7fffffffd6ff)
        at /home/user/mtdev/src/StereoMatrix.cpp:51
    #11 0x000000000041861a in StereoMatrix::generateStereoMat (
        this=0x7fffffffd6ff, input=..., resizeMat=false)
    I'm running the recent version of OpenCV from the repositories:
    extra/opencv 2.4.6.1-3
    Apart from that my system is also up-to-date.
    Can anyone reproduce this error or has an idea what causes it?
    Thanks

    Hi, thanks for the response.
    Still beavering away at this, but I have made sizeable progress since this post.
    First off, I am looking for solutions within CVI and not LabVIEW, but I often find that if there is a solution in LabVIEW then there is often one in CVI, so my question becomes: does LabVIEW allow for capture of a 10-bit image? That is my biggest stumbling block.
    My solution so far involves writing a C wrapper DLL for the existing C++ SDK that is supplied with these cards in Visual Studio, then using it within CVI.
    This works but has been a lot of work, and as disappointed as I would be to find that there is a simpler solution, I would be happy to simplify the code involved in my overall project - and also to put details of the best solution on the forums for others that may be trying to do the same thing!
    Another disadvantage of the current method is that it is a little slow - I am constantly looking at ways of speeding up acquisition of the images. I have no solid numbers, but I'd estimate I am working in the region of around 10 frames a second when working with 10-bit images.
    For 8-bit images the best solution I have found is to make use of the IPP library provided by Intel. This has great functions for taking the raw data from the card and presenting it in various ways at high speed. I have written one application with this that pulls the information in real time, but only at 8 bit, as unfortunately there is no 10-bit option.
