Device buffer overflow error and Log component missing

I posted some background in my earlier "Error device buffer overflow" thread.
I am currently running a Windows XP machine with a 3.4 GHz processor and 3 GB of RAM. I tried using just one 9233 module with the cDAQ-9172 and it worked. When I added the second 9233 module, it started giving me the error. Before acquisition is started, the memory usage is 868 MB and the CPU usage is 0-2%. When I start acquisition, the memory usage changes very little (to about 880 MB), but the CPU goes to about 100% and the error is displayed. On a new machine the code reports the device buffer overflow error even with a single module on the cDAQ-9172, but when I switch back to the original machine it works with a single module and fails with multiple modules.
As the error message suggests, I tried switching off the "Update signals while running" and "Prepare log data for viewing" options, but it still did not help. If I switch both of these off, I won't be able to see what kind of impact data I am collecting. Please do correct me if I am wrong.
I need to collect impact data on a printer, and there are many locations on it that I need to collect data from. Please suggest a solution for this problem.
Also, if I try using the same project on another machine, it gives me an error indicating that some log component files are missing from SignalExpress and need to be installed. This happens when I transfer the project from one machine to another.
Thank you.

Stu,
You will still be able to print your data in a later step when setting the two options recommended by the error code.  What is likely happening when you try to move the project is that the project has log files still associated with it from previous attempts.  You should save a copy of the project and in that copy delete the log files, then try to move the project over.  SE projects are directly associated with the log files they create. This should allow you to move the project to a new computer.
When you go to run your project, you should shut down as many background processes as possible, such as IM clients.  If you can disconnect from your network, I would suggest doing that and turning off any antivirus/anti-spyware software you have to free up resources.
A couple of further questions:
Are you connecting to a USB 1.0 or USB 2.0 port? You should be able to determine this by opening the Device Manager on your computer and expanding the "Universal Serial Bus" section. If any of the drivers have "Enhanced" in their name, you have a USB 2.0 port.
Does this behavior repeat with any combination of the three modules? For instance, does it work if you just have module one, module two, or module three installed? Do you still get the error if modules two and three are used instead of one and two? (This will help determine whether we are encountering a problem with the modules or with SE.)
Finally, if you create a project that contains only a single step that reads the data from all three modules, do you still get the error? (This will help isolate the problem.)
Seth B.
Staff Test Engineer | National Instruments
Certified LabVIEW Developer
Certified TestStand Developer
“Engineers like to solve problems. If there are no problems handily available, they will create their own problems.”- Scott Adams

Similar Messages

  • Microsoft C++ Buffer Overflow Error

    I have created two movies from iPhoto on my Mac. When I move them to an XP machine, sometime after 30 minutes of playing I get the Microsoft C++ Buffer Overflow Error message, and QuickTime for Windows aborts. I have no problems playing them to completion on several Macs.
    I upgraded to the latest QT version and still see the problem.
    Any help appreciated.

    You're having the same issue as I am; see my post here: http://discussions.apple.com/thread.jspa?threadID=672272&tstart=30
    Unfortunately, though, I have not gotten a response to my original post.

  • Doing data acquisition and buffered period measurement using counters simultaneously gives a buffer overflow error

    I am doing data acquisition using an NI PXI-4472 and buffered period measurement using an NI PXI-6602 simultaneously; my program gives a buffer overflow error.

    murali_vml,
    There are two common buffer overflow and overwrite errors.
    Overflow error -10845 occurs when the NI-DAQ driver cannot read data from the DAQ device's FIFO buffer fast enough to keep up with the acquired data as it flows into the buffer (i.e., the FIFO buffer overflows before all the original data can be read from it). This is usually due to limitations of your computer system, most commonly the result of slow processor speeds (< 200 MHz) in conjunction with PCMCIA DAQ boards, which have small FIFO buffers (e.g., the DAQCard-500). Sometimes using a DAQCard with a larger FIFO can solve the problem, but a better solution is to lower the acquisition rate or move to a faster system. Another cause of the -10845 error is an interrupt-driven acquisition: the PCMCIA bus does not support Direct Memory Access (DMA), so if the system is tied up processing another interrupt (such as performing a screen refresh or responding to a mouse movement) when it is time to move data from the board, that data may get overwritten.
    Overwrite error -10846 occurs when the data in the software buffer that you created for an analog input operation gets overwritten by new data before you can retrieve the existing data from the buffer. This problem can be solved by adjusting the parameters of your data acquisition, such as lowering the scan rate, increasing the buffer size, and/or increasing the number of scans read from the buffer on each buffer read. Additionally, performing less processing in the loop can help avoid the -10846 error.
    See the NI-DAQ Function Reference Manual for a listing of all NI-DAQ error codes.
    Have a great day.

  • Flex Log Buffer Overflow Error

    Hello,
    We are running SunOne Server 6 SP4 on Solaris 2.8.
    We have a site that has numerous URL forwards that all work. We added another one today, and when you try to go to that one, we get the following error in the error log:
    flex log buffer overflow- greater than 4096 characters
    Any help on what this means and how to fix it?
    thanks!!

    We found the problem.
    We had a recursive URL call.
    Example of what not to do when setting up URL forwards:
    URL Forward of the /emp directory to /emp/some_file (a file inside the directory being forwarded), which creates the recursive call.

  • Character set Conversion Buffer Overflow Error

    Hi,
    I have got an issue while loading data from a flat file into a staging table: a "Character set Conversion Buffer Overflow" error. Suppose there are 10,000 records in the flat file; after running the control file, only 100+ records are loaded into the staging table and the remaining records are errored out. I don't think there is an issue with the control file, because when I load data from a different flat file containing the same number of records as the previous one, it loads all the records. What could be the reason for, and solution to, this issue?
    Can anyone please suggest how to resolve this?
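    As a first check (a minimal diagnostic sketch, assuming you can query the target database), it is worth comparing the database character set with the character set the flat file actually uses; a multi-byte mismatch is a common cause of conversion-buffer errors during SQL*Loader runs and can often be addressed by declaring the file's character set explicitly with the CHARACTERSET clause in the control file:
    -- Diagnostic query (sketch): show the character sets the database converts into
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');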

    DBMS_OUTPUT is a poor choice for debugging. It has very limited use. And as you've discovered, merely debugging code can now result in new exceptions in the code.
    The proper approach would be to create your own debug procedure (or package). Have your code call this instead of DBMS_OUTPUT.
    In your debug procedure, you can decide what you want to do with that debug data for that specific program in the current environment and circumstances.
    The program that runs could be a DBMS_JOB, in which case DBMS_OUTPUT is useless. The program can be called several layers deep from other PL/SQL code, and you want to know just who is calling your code. Etc.
    Having your own debug procedure allows you to:
    - create an autonomous transaction and log the debug data to a log table
    - write it to a DBMS_PIPE for interactive debugging
    - write it to DBMS_OUTPUT
    - record the PL/SQL call stack to determine who is calling who
    - record the current session's environment (e.g. session_context)
    - record the current session's statistics, opens cursors, current SQL, etc. (courtesy of the V$ views)
    etc. etc.
    In other words, your debug procedure gives you the flexibility to decide on HOW to handle the debugging.
    And when your code goes into production, your debug procedure ships with it, containing a simple NULL command, which means that at any time the DBA can (when the need arises) add his/her debug methods into it in order to trace a production problem.
    Using DBMS_OUTPUT is a very poor, and often just wrong, choice.
    It is fine for writing a quick test. But when you are developing production code and using DBMS_OUTPUT, you must ask yourself whether you have made the right choice.
    And this is not just about wrapping DBMS_OUTPUT. But also wrapping other system calls like RAISE_APPLICATION_ERROR and so on.
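    A minimal sketch of such a wrapper (the table and procedure names here are made up), using an autonomous transaction so the log rows survive a rollback in the calling code:
    -- hypothetical debug wrapper; adapt names and columns to your environment
    CREATE TABLE debug_log (
      logged_at  TIMESTAMP     DEFAULT SYSTIMESTAMP,
      logged_by  VARCHAR2(128) DEFAULT USER,
      message    VARCHAR2(4000)
    );

    CREATE OR REPLACE PROCEDURE debug (p_message IN VARCHAR2) IS
      PRAGMA AUTONOMOUS_TRANSACTION;  -- commits here do not touch the caller's transaction
    BEGIN
      INSERT INTO debug_log (message) VALUES (SUBSTR(p_message, 1, 4000));
      COMMIT;
      DBMS_OUTPUT.PUT_LINE(p_message);  -- optionally still echo for interactive sessions
    END debug;
    /
    Application code then calls debug(...) instead of DBMS_OUTPUT.PUT_LINE directly, so the body can later be reduced to a simple NULL (or switched to DBMS_PIPE, call-stack capture, etc.) without touching any caller.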

  • MODPLSQL generates Buffer Overflow errors trying to login

    I am not entirely sure if this is the right place, but here it goes anyway:
    We are using Oracle Workflow Manager Standalone (2.6.4) as part of our Warehouse Builder setup on a 10.2.0.3.0 Enterprise database on Linux.
    The setup has just recently stopped working, whereas before it worked for a long time.
    The problem is that it is not possible to log in to Oracle Workflow Manager with any user.
    I have traced this problem to the mod_plsql.so library of the Oracle HTTP Server that is part of the OWF setup.
    What happens is that this module logs in to the database when a user tries to log in with his browser, and sends an ALTER SESSION statement.
    (This is also described in the docs.)
    This statement is malformed, however: it contains too many characters.
    Instead of :
    ALTER SESSION SET NLS_LANGUAGE='DUTCH' NLS_TERRITORY='THE NETHERLANDS' NLS_CURRENCY='E'
    the last bit, NLS_CURRENCY, is filled with random characters.
    Since the total is more than the allowed limit, the database returns (or mod_plsql decides on) an ORA-01017.
    I used the proxy method described in the January 24, 2006 "On a breakable Oracle" post to find out what the mod_plsql.so package sends to the database (just read DADs/mod_plsql where it says SQL*Plus).
    I have to do this because these requests are handled as a SYS user and as such are not logged.
    The mod_plsql library is supposed to give the DADS.CONF directives precedence over any environment values.
    However, in the case of the PlsqlNLSLanguage directive this does not work.
    The environment variable NLS_LANGUAGE, which is set to DUTCH, is given precedence.
    It uses that to construct the ALTER SESSION statement.
    If I change the environment variable to AMERICAN, mod_plsql.so uses this to pick the currency and it gets the $ sign for NLS_CURRENCY.
    Then the ALTER SESSION statement that is sent is correct and there is no buffer overflow anymore.
    And the database subsequently lets us in. However, changing NLS_LANGUAGE at the environment variable level is not desirable for us, since it causes other translation problems.
    Finally, the questions:
    Why does the mod_plsql.so package also send NLS_CURRENCY? This is mentioned in none of the (Oracle) documentation, but we can clearly see it happening.
    Where does the mod_plsql.so package get this NLS_CURRENCY from? We don't set it anywhere in the environment or the .conf files, yet it is retrieved from somewhere. In our case it is retrieving some garbage data and thus causing the login to fail. Even looking in the .so library I see no mechanism for NLS_CURRENCY.
    Why does the mod_plsql.so package favor the environment variable over the DADS.CONF PlsqlNLSLanguage directive? All the manuals say otherwise, yet in our case it is not being used. And when I load the library in an editor I see remarks that indeed support my statement.
    The most important question here is where I need to look to get the NLS_CURRENCY. It is somehow corrupt and I want to correct this, of course.
    Another important one is how we can force the mod_plsql.so package to use the PlsqlNLSLanguage directive, since we do not want to change the environment variable.
    I hope someone can help us out here.
    rgrds Mike

    Well, I must say I am sorry not to have received any answer whatsoever.
    This absence of Oracle people here is worrying me, and it is the second time in a row lately.
    It seems Oracle is abandoning its own products.
    Anyway, just to answer my own thread so that somebody else gets some benefit from it:
    After investigation I find that it works like this when things go right:
    Modplsql creates a connection with the database and sends numerous key value pairs to the server.
    Such as:
    AUTH_TERMINAL
    AUTH_PROGRAM_NM
    AUTH_MACHINE
    AUTH_PID
    AUTH_SID
    AUTH_SESSKEY
    AUTH_PASSWORD
    AUTH_ACL
    AUTH_ALTER_SESSION:
    NLS_LANGUAGE
    NLS_TERRITORY
    NLS_CURRENCY
    ... and more NLS_ stuff
    It then sends a PL/SQL ALTER SESSION statement, this time only with
    NLS_LANGUAGE and
    NLS_TERRITORY.
    It then sends several PL/SQL code bits, probably to test whether the database can access the owa_match packages.
    In this part it also sends PL/SQL to get the database NLS_LANGUAGE, NLS_TERRITORY, and NLS_CHARACTERSET.
    It also sends PL/SQL to test owa_util.get_version for the proper version.
    The last part is all of the web stuff: all of the CGI variables, including the POSTed data if any. Of course, when doing basic authentication there is no POST data.
    The authentication info is passed on in the first step with AUTH_PASSWORD.
    The environment value NLS_LANGUAGE is used and parsed in the first bit. The corresponding bits pop up in the AUTH_ALTER_SESSION key-value pair. Mod_plsql finds the other info (I don't know where, really), such as NLS_CURRENCY, and puts that there.
    The dads.conf PlsqlNLSLanguage setting is used and parsed in the second step. The second step is formed like: alter session set nls_language='DUTCH' nls_territory='THE NETHERLANDS'.
    So my assumption about mod_plsql not using the PlsqlNLSLanguage directive is wrong here, but due to the error I encountered, my debug info never got past step 1.
    If the environment value is not set, a default value of AMERICAN_AMERICA is used.
    What went wrong in my case?
    The environment variable was set to DUTCH. Mod_plsql uses this to look up NLS_CURRENCY, as explained, to output the info in step 1.
    However, NLS_CURRENCY returned garbage instead of just the euro sign. This is the real problem, by the way, and it is not solved yet in our case. If someone knows where mod_plsql gets this info, I would like to know!
    The other steps were never finished, and therefore it looked to the database as if the AUTH_ALTER_SESSION key-value pair was too long.
    It could not authenticate this request, with the effect that nobody could log in. Since these requests are handled as SYS users, no logging takes place. Only a trace with the error:
    *** SERVICE NAME:(SYS$USERS) 2013-07-10 11:01:29.414
    *** SESSION ID:(458.3138) 2013-07-10 11:01:29.414
    Buffer overflow for attribute AUTH_ALTER_SESSION - max length[850] actual length[1131]
    indicates there is something wrong here.
    Setting the dads.conf file to override this environment parameter doesn't solve this, of course, since this info is used somewhere else.
    Fixing it, for now at least, means clearing the environment variable, then starting the HTTP server,
    and using DUTCH in the dads.conf file.
    After starting the HTTP server we reset the environment variable.
    I am still looking for an answer on where mod_plsql gets the NLS_CURRENCY info, since that is where the corruption is!
    Hope somebody can use this info.
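    For anyone debugging something similar: one way to see which NLS values a mod_plsql session actually ends up with is a small diagnostic procedure exposed through the DAD and called from the browser. This is only a sketch; the procedure name is made up, and it assumes the DAD schema may create the procedure and query NLS_SESSION_PARAMETERS:
    -- hypothetical diagnostic procedure; call it through the same DAD from a browser
    CREATE OR REPLACE PROCEDURE show_nls AS
    BEGIN
      -- print the NLS settings of the session mod_plsql established for this request
      FOR r IN (SELECT parameter, value FROM nls_session_parameters) LOOP
        htp.p(r.parameter || ' = ' || NVL(r.value, '(null)') || '<br>');
      END LOOP;
    END show_nls;
    /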

  • RSO2 error and no component and infoset

    I am trying to use RSO2 to create a DataSource, but when I select the application component, there is no MM (Materials Management) component;
    only the PI_BASIS and BW components are there. How do I add the MM component?
    Error 2: under "extraction from SAP query", I can't find the InfoSet I defined on another R/3 SAP server. On our BI server, when I choose the standard query area, I can't find the tables EKPO and EKKO; that's why I went to the other R/3 server and created my InfoSet query Z1 to link the EKKO and EKPO tables. So how can I use InfoSet query Z1 on the BI server, or what other settings do I need to be able to find the tables
    EKKO and EKPO on the BW server?

    I'm creating the DataSource in BW; why do I need to go to RSA6 to create an MM component manually? I thought this would be created automatically.
    The issue is: when I go to RSA9 and click "yes" for transferring the component hierarchy, it doesn't ask me to select the MM or SD component.
    After that, a node "ROOT" with a child "Accounting" is created, which is not what I want.
    Also, in RSA1, how would I know whether the BI Content add-on is installed? When I click BI Content, it lists analysis processes; is that the add-on?
    What else should I do to get the MM component?
    My second error is still not resolved: how can EKKO and EKPO appear in the SQ02 table selection of the BI server?

  • OIM after BP12 patch system error and logging not working

    Hi,
    I am using WebLogic 10.3.2 and need to install OIM. I installed 9.1.0.1 and then the patch set to 9.1.0.2. Then I installed bundle patch BP07 and everything was OK.
    Finally I installed BP12: now I am unable to log in to the OIM admin panel (system error) and I cannot connect with the Design Console (NullPointerException). The worst part is that OIM does not write a log file anymore, so I can't determine what's wrong. What should I do?
    Thanks in advance.

    No errors, everything fine. The webapps (xellerate and nexaweb) are started normally.
    As additional information, I just tried to revert to BP07, and it still doesn't work.

  • Network file permission error and Log???  Need help...

    Hi everyone!
    Based on what I learned here:
    http://discussions.apple.com/thread.jspa?messageID=1760312&#1760312
    I have a log in my script like this:
    set log_path to (startup disk as string) & "Applications:My Folder:My Log"
    set the_date to (current date) as string
    set logfilecontent to (open for access file log_path with write permission)
    try
    write the_date & return & "Here's my Log:" & return & "item1" & return to logfilecontent starting at eof
    on error
    close access logfilecontent
    end try
    close access logfilecontent
    It has always worked, but this is part of an app. I just realized that this works under my admin login but not under a regular login (I mean logging into the computer). And if I change the owner/group of the app, it sometimes changes whether I can run it from an admin or a standard login. I often get a "Network file permission" error and it gets stuck. I know it's something to do with permissions, but maybe I still don't fully understand? I need it to work from an admin or standard login without that error. In that same link above, a user mentions he had the same problem, so there's something I'm just not getting. And I know it's the log that is at fault, because I took it out and there was no problem. Hope you guys can help out.
    Thanks in advance,
    Reg

    If the log already exists on the hard disk, you may not be able to write to it from a non-admin account, and you may want to avoid writing to the Applications folder. You can provide an empty log in the expected place which allows all accounts to write to it, but the permissions may be lost when the application is installed.
    Alternative options include creating different logs for each account:
    set log_path to (startup disk as string) & "Applications:My Folder:" & (do shell script "whoami") & "'s Log"
    or writing the log to the account's home folder:
    set log_path to (path to home folder as string) & "Library:Preferences:My Folder:My Log"
    (17283)

  • E-Commerce Catalog error and log configurator problem?

    Hello,
    I am trying to browse a B2B shop (http://server:port/b2b/b2b/init.do) in CRM 5.0 and am getting the following error message:
    The catalog that you have selected is currently unavailable; try again later
    I am using a CRM 5.0 system (IDES client) and am basically trying to set up an E-Commerce development environment.
    The following are the steps I have done so far for the E-Commerce configuration:
    1. Configured ISADMIN (http://server:port/isauseradm/admin/xcm/init.do).
    1.1 Changed following params for Start->General Application Settings->Customer->isauseradm->isauseradmconfig:
    >> SSLEnabled: false
    >> appinfo: true
    >> show start.jsp: true
    >> AddUserToPartner: true
    >> AcceptExistingUser: true
    1.2 Created a new JCo connection under Start->Components->Customer->jco and entered my back end CRM System details and saved it as CRM_800_JCO. I also did Run Test which was successful.
    1.3 Created a new JCo connection under Start->Application Configurations->Customer and entered following params and saved it with name CRM_800_ISAUSERADMSTD:
    >> Base configuration: isauseradmStandard
    >> default configuration: X
    >> active configuration: X
    >> jcodata: CRM_800_JCO
    >> backendtype: crmdefault
    >> usertype: CRM_Standalone
    >> uidata: default
    2. Configured B2B (http://server:port/b2b/admin/xcm/init.do).
    2.1 Changed following params for Start->General Application Settings->Customer->b2b->b2bconfig:
    >> SSLEnabled: false
    >> appinfo: true
    >> show start.jsp: true
    2.2 Created a new JCo connection under Start->Components->Customer->jco and entered my backend CRM System details and saved it as CRM_800_JCO. I also did Run Test which was successful.
    2.3 Created a new JCo connection under Start->Application Configurations->Customer and entered following params and saved it with name CRM_800_B2BCRMSTD:
    >> Base configuration: b2bcrmstandard
    >> default configuration: X
    >> active configuration: X
    >> jcodata: CRM_800_JCO
    >> usertype: CRM_Standalone
    3. I was really not sure whether the ShopAdmin configuration is required, but I did it. Configured SHOPADMIN (http://server:port/shopadmin/admin/xcm/init.do).
    3.1 Changed following params for Start->General Application Settings->Customer->shopadmin->shopadminconfig:
    >> SSLEnabled: false
    >> appinfo: true
    >> show start.jsp: true
    3.2 Created a new JCo connection under Start->Components->Customer->jco and entered my backend CRM System details and saved it as CRM_800_JCO. I also did Run Test which was successful.
    3.3 Created a new JCo connection under Start->Application Configurations->Customer and entered following params and saved it with name CRM_800_CRMSHOP:
    >> Base configuration: crmshop
    >> default configuration: X
    >> active configuration: X
    >> jcodata: CRM_800_JCO
    4. Restarted CRM J2EE.
    5. Setup TREX 7.0.
    5.1 From TREX Administration, created a new connection (Type A, i.e., using System Number and Application Server Host) for CRM System and also RFC Dest. (sm59) in CRM System.
    5.2 Restarted TREX and connected this connection.
    5.3 Following are the visible column values for this connection in TREX Administration:
    >> Connection Status: <connected>
    >> Configuration Status: Green
    >> SAP System: CRM
    >> RFC Destination: TREX_DEFAULT
    >> Gateway: local
    >> RfcServer Instances: 1 (no automatic changes)
    >> TREXRfcServer Processes: 1
    >> Workprocesses: 6 (4 DIA, 2BGD)
    5.4 Did a RFC Connection test in CRM System using SM59 which was successful as well.
    5.5 Using Transaction SRMO (Retrieval : Search Server Relation Monitor) in CRM System changed the RFC Destination for Search server ID DRFUZZY (for both I and S type RFC Server destination action) to TREX_DEFAULT. Did a connection test for this and it was successful.
    6. Initiated Replication using transaction COMM_PCAT_IMS_INIT with following params:
    >> Product Catalog: PCSHOP
    >> Variant: VAR_EN
    >> Search Server Relation:  DRFUZZY
    >> Publishing Computer ID:
    >> Allows Parallel Processing:
    >> Publish Documents via HTTP:
    >> Transfer Document Content: X
    >> Package Size of Indexing: 5,000
    >> Processing Existing Indexes: Overwrite Index Only When OK
    >> Behavior when Error Messages Occur: Cancel Variant Replication
    7. Logged into the ISADMIN user configuration (http://server:port/isauseradm/useradmin/init.do) and created a new user, using the option New User and New Contact Person, with an existing company of type Sold-To Party that belongs to the sales area linked with the PCSHOP product catalog (from IDES data).
    Now, when I log into the B2B page (http://server:port/b2b/b2b/init.do) with the newly created user and click on the shop PC4BIZ_EN, I get the following error message:
    The catalog that you have selected is currently unavailable; try again later
    I also created a new product catalog and did the initial replication, but I am still getting the same error message.
    Can anyone please tell me what I am missing or what mistake I have made?
    I even restarted the CRM system, but the result is still the same.
    Also, how do I check the log files for B2B? I checked the E-Commerce Admin Console (http://server:port/b2b/admin/index.jsp) and clicked on the logging link, but I get the following message:
    Logging is now configured centrally in the J2EE Engine Visual Administrator (server service: Log Configurator)
    How exactly do I configure this, and what is the right path for B2B logging in the J2EE Visual Administrator? And where will the log files for B2B be stored on the server?
    I would really appreciate (and of course award points for) your help on this.
    thanks and regards,
    Vasu

    Thanks for the Note reference. I will go through it now and try to check the log files.
    Regarding the error message, I don't think it could be because "the catalog variant is not set right in the Shopadmin application", as all the values in Shopadmin seem to be correct. Anyhow, the following are the current values selected for the custom product catalog I created:
    Shop Id: ZTEST
    General Information
    --> Usage
    > Business Scenario: Internet Sales B2B
    --> Authorizations
    > Authorization Group: <blank>
    --> User Administration
    > Partner Function Contact Person: 00000015
    > Country Group: <blank>
    --> Billing Documents
    > Display of Billing Documents: No Billing Documents
    --> Store Locator
    > Display Store Locator: <blank>
    Catalog
    --> Product Catalog
    > Catalog Search: <blank>
    > Catalog: ZTEST
    > Catalog Variant: VAR_EN
    > Catalog View: <blank>
    > Hide Internal Catalog: <blank>
    > Controlling Price Determination in the Catalog: via IPC
    > Profile group for pricing related attributes for exchange products: <blank>
    Transactions
    --> General
    > Allow transaction management for other business partners in hierarchy: <blank>
    > Large Documents: Display All Items
    > Document type can be chosen late: <blank>
    > Batch Processing Permitted: <blank>
    > Display product determination information: X
    --> Order
    > Choose Order Types: Order Type
    > Order Type: ISBB
    --> Order Template
    > Order Templates Allowed: X
    > Order Type: ISBB
    --> Quotations
    > Creating a Quotation: No Quotation
    --> Contracts
    > Contract Determination: <blank>
    --> Contract Negotiations
    > Allow Contract Negotiations: <blank>
    Marketing
    --> Global Product Recommendation
    > Display Global Product Recommendation: <blank>
    --> Personalized Product Recommendation
    > Display Personalized Product Recommendation: <blank>
    --> Product-Related Proposals
    > Display Product-Related Proposals: <blank>
    --> Campaigns
    > Allow manual entry of campaigns: <blank>
    Auction
    --> Auction
    > Auctions allowed: <blank>
    Regarding the reason "The Catalog was not replicated properly and is hence unavailable on TREX", is there any way to verify this? When i ran Initial Replication (transaction COMM_PCAT_IMS_INIT with the values i said in my first post) everything was green.
    Also, how do i clear the Catalog Cache? Is it the same as clearing the Catalog Cache Statistics in E-Commerce Administration Console (http://server:port/b2b/admin/index.jsp)?
    Thanks and Regards,

  • Application Deployment error and logging...

    On OC4J 10.1.3.0.0, I am trying to deploy a web application and receive the following error:
    [SEVERE]: Error instantiating application at file:/C:/dev/oc4j-10.1.3.0.0/j2ee/home/applications/ipcmdb.ear: Unable to get ApplicationConfig for ipcmdb : Unable to find/read file META-INF/application.xml in C:\dev\oc4j-10.1.3.0.0\j2ee\home\applications\ipcmdb (META-INF/application.xml).
    My ear file contains an ejb module. I know for a fact, that if I comment out the reference to the ejb module in application.xml, that my application will deploy just fine. So, it appears that I have a configuration issue with my referenced ejb module inside my ear file and OC4J dies when trying to extract the ear file, only displaying the ever so helpful error message listed above.
    Does anyone know of logs or output messages from OC4J when attempting to extract modules inside a web application?
    Thank you,
    Jason

    Avi,
    Although the information in http://kb.atlassian.com/content/atlassian/howto/orionproperties.jsp might be applicable to OC4J 10.1.3 after changing orion.jar to oc4j.jar, I would rather not associate Orion with OC4J 10.1.3, the newest production release of OC4J, in any technical tips anymore. That kind of association would be at best misleading.
    What we should be referring to here is the OC4J "Configuration and Administration Guide" for 10.1.3 in the Oracle Application Server Documentation Library for 10g Release 3. In chapter 4, "OC4J Runtime Configuration", search for "-listProperties".
    You can see a long list of system properties that are used by OC4J by running "java -jar oc4j.jar -listProperties". If you search for ApplicationServerDebug, you will see: "ApplicationServerDebug - Gates whether to dump extra diagnostic information for the application server. This flag is used widely". So this flag is considered documented.

  • OEM - errors and logs off automatically

    I am using OEM 10.2.0.3.0 with Adobe SVG 3.0, Windows Vista and Java JDK 6 update 17. The problem I am having is that OEM logs off with this error --> unrecognized DOCTYPE declaration Image might not display correctly.
    Does anyone know how I can resolve this issue? I get logged off abruptly about three times per day.
    My co-workers are using the same setup, except they have Java JDK 6 update 7, and they don't have the issue.
    Edited by: user10449670 on Mar 18, 2010 8:50 AM

    I have changed my entire environment to match my co-workers': IE version 8.0.6001.18882, Adobe SVG 3.0, Flash Player 10,0.45,2, JDK 6 update 7, OEM Grid 10.2.0.3.0. The only things I changed relative to my co-workers are that I updated my Flash Player version and went from JDK 6 update 17 to update 7. I am not getting the unrecognized DOCTYPE error now, but it does log me off of Grid when I exit MS SharePoint in my browser. It seems to have settled down the issue with being logged off automatically multiple times during the day. In the future, with the latest update of SQL Developer, I will need to update my JDK to version 11 or greater. Any ideas on what else I need to check?

  • Oracle Failsafe error and logs

    Since I don't see any Oracle Fail Safe forum, I am posting here. Please let me know if this is wrong or if there is a better section.
    Sometimes when I try to take my instance offline I receive this error:
    FS-10890: Oracle Services for MSCS failed during the offline operation
    FS-10013: Failed to take the cluster resource ORAWHITE offline
    FS-10728: Resource ORAWHITE timed out trying to go offline
    This is very strange, since looking at the alert log I see that the shutdown takes less than one minute, while the timeout is 3 minutes.
    Does Fail Safe have any logs that can be investigated?
    Hope you can help me.
    Cheers
    Adriano

    user10388158 wrote:
    Hi,
    I need a little help.
    For some reason, my production listener crashes, but when looking into the logs of my listener, no error is shown.
    Even if I check the timestamp, there is no error in the log.
    Forgive me, but I need to ask a simple question:
    what proof exists that the outage was not an operating system crash?
    If the system crashed, then the Oracle listener would no longer be available to log any connection requests.
    Handle:     user10388158
    Status Level:     Newbie
    Registered:     Mar 10, 2010
    Total Posts:     25
    Total Questions:     9 (8 unresolved)
    why do you waste time here when your questions RARELY get answered?

  • Newbie Topic - compiling errors and jvm.dll missing

    Okay, we're starting at square one here. I'm trying to compile ClickMe.java from the tutorial for newbies like myself. I know how to compile from the command line, etc., but I get 2 errors. It seems to be saying "cannot resolve symbol" at line 6 and at line 28.
    - - - - code below - - - - - -
    import java.applet.Applet;
    import java.awt.*;
    import java.awt.event.*;

    public class ClickMe extends Applet implements MouseListener {
        private Spot spot = null;            // Spot is the separate class (Spot.java) from the tutorial
        private static final int RADIUS = 7;

        public void init() {
            addMouseListener(this);
        }

        public void paint(Graphics g) {
            // draw a black border and a white background
            g.setColor(Color.white);
            g.fillRect(0, 0, getSize().width - 1, getSize().height - 1);
            g.setColor(Color.black);
            g.drawRect(0, 0, getSize().width - 1, getSize().height - 1);
            // draw the spot
            g.setColor(Color.red);
            if (spot != null) {
                g.fillOval(spot.x - RADIUS, spot.y - RADIUS,
                           RADIUS * 2, RADIUS * 2);
            }
        }

        public void mousePressed(MouseEvent event) {
            if (spot == null) {
                spot = new Spot(RADIUS);
            }
            spot.x = event.getX();
            spot.y = event.getY();
            repaint();
        }

        public void mouseClicked(MouseEvent event) {}
        public void mouseReleased(MouseEvent event) {}
        public void mouseEntered(MouseEvent event) {}
        public void mouseExited(MouseEvent event) {}
    }
    - - - - - - end ClickMe.java code - - - - -
    I thought maybe it was because Spot was capitalized, but I changed that to lowercase and it didn't help. This code is, of course, taken right off of this page:
    http://java.sun.com/docs/books/tutorial/java/concepts/practical.html
    The other thing that's happening is that when I try to run the NetBeans IDE 3.6, I get an error "Cannot load jvm.dll." I know there are jvm.dll files on this machine, but maybe they're in the wrong directory. I haven't messed with anything, just downloaded the whole J2SE SDK 1.4.1, and I'm already snafu'd. But then, I'm a newbie, so go ahead, make fun. But, if you can, offer advice.
    thx in advance!

    I seem to be compiling right, because I was able to compile the classic HelloWorld.java OK, and I can compile the spot.java file OK. I see what looks to be a valid spot.class file after I compile. If you're really bored, I could describe exactly how I'm launching cmd.exe and how I point successfully to javac.exe, but, newbie that I am, can we just go on faith that I'm doing that step right? I also tried re-compiling the Spot.java file first, trashing the original ClickMe.java file so that everything was sequential. I'm wondering if this has something to do with the IDE and that jvm.dll being out of whack.
    As for the NetBeans IDE, I'm going to see if I can reinstall that separately. The whole SDK maybe didn't unpack correctly, who knows.
    But thanks for responding......

  • Spool file error - Buffer Overflow

    I have a SQL script that runs from a .sql file. The script fetches almost 9 million rows as the result of the query. The main problem is related to spool file generation: it reports a buffer overflow. But when I run that query from the SQL prompt, it starts running within 2 minutes and completes the task within 15 minutes.
    I've written:
    set serveroutput on size ######
    But it is still not working. The whole application freezes whenever I try to run it. Please let me know if you have any suggestions.
    Satyaki.

    I guess what you are more concerned with is the output produced by your query (the spool file).
    You do not need to see what is displayed on the screen; all you need is to open up the spool file to
    check the data in there.
    To suppress the output to the screen and still generate the spool file, use the SET TERMOUT option.
    e.g.
    -- code for a producing a sample spool file
    spool r:\sample_spool.txt;
    select * from emp;
    spool off;
    at the SQL*Plus command line:
    SQL> @r:\sample_spool.sql;
         EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
          7566 JONES      MANAGER         7839 02-APR-81       2975       1000         20
          7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
          7839 KING       PRESIDENT            17-NOV-81       5000                    10
          7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
          7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
          7369 SMITH      CLERK           7902 17-DEC-80        800                    20
          7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
          7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
          7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
          7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20
          7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
          7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
          7900 JAMES      CLERK           7698 03-DEC-81        950                    30
          7934 MILLER     CLERK           7782 23-JAN-82       1300                    10
    14 rows selected.
    SQL> -- the above example displays the output on the screen
    SQL> -- now we want to turn it off by using the SET TERMOUT option
    SQL> set termout off;
    SQL> @r:\sample_spool.sql;
    SQL>
    After executing the script sample_spool.sql, it does not display the output on the screen, which is what
    we want in order to avoid the buffer overflow error, and it still produces the spool file:
         EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO                                  
          7566 JONES      MANAGER         7839 02-APR-81       2975       1000         20                                  
          7902 FORD       ANALYST         7566 03-DEC-81       3000                    20                                  
          7839 KING       PRESIDENT            17-NOV-81       5000                    10                                  
          7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30                                  
          7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10                                  
          7369 SMITH      CLERK           7902 17-DEC-80        800                    20                                  
          7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30                                  
          7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30                                  
          7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30                                  
          7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20                                  
          7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30                                  
          7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20                                  
          7900 JAMES      CLERK           7698 03-DEC-81        950                    30                                  
          7934 MILLER     CLERK           7782 23-JAN-82       1300                    10                                  
    14 rows selected.
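    One further note on the original question: SET SERVEROUTPUT only sizes the buffer used by DBMS_OUTPUT and has no effect on query results being spooled, and SET TERMOUT OFF is only honoured when the statements are run from a script (not typed interactively). For a large extract such as 9 million rows, a typical script header looks something like the sketch below (the values are only examples to adapt):
    -- example SQL*Plus header for a large spooled extract (illustrative values)
    set termout off      -- do not echo rows to the screen
    set feedback off     -- suppress the "n rows selected." message
    set pagesize 0       -- no column headers or page breaks in the spool file
    set linesize 2000    -- wide enough for the longest row
    set trimspool on     -- strip trailing blanks from each spooled line
    set arraysize 500    -- fetch more rows per round trip
    spool r:\sample_spool.txt
    select * from emp;
    spool off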
