db_verify: Suspiciously high nelem error from Berkeley DB
I have an application which uses Berkeley DB (version 3.2.9). My application runs on Solaris. Sometimes the application throws the following error message:
"db_verify: Suspiciously high nelem of 4294967287 on page 0
DB_VERIFY_BAD: Database verification failed."
There is no problem with my database and all records are intact. I came to know that this is a known problem in Berkeley DB and that a patch is available for it.
Can anyone please let me know what patch is available for this problem and where I can get the details of this patch?
Regards
Lalit.
Hi Bogdan,
Thanks for your reply.
I came to know about this problem from a Linux discussion forum. Here is a link that talks about a similar problem in Berkeley DB:
https://www.redhat.com/archives/rpm-list/2002-June/msg00118.html
It talks about a Berkeley DB patch #4491, but I am unable to find any information about this patch.
As I mentioned, my DB is not corrupt. If I ignore this error, the application works fine and all the records in the DB are intact.
Regards
Lalit.
Hi Lalit,
> I came to know that this is a known problem in Berkeley DB and there is some patch available for this.
Where did you come to know that from?
I think this corruption can happen if you don't close the library properly.
What you can do is:
1. Upgrade.
2. Salvage the database and re-load it when corruption occurs, using the db_dump utility with the -r or -R options.
3. Transactionally protect your application and run recovery in the case of application or system failure.
Regards,
Bogdan Coman
Similar Messages
-
Error while running ejbc. Fatal error from EJB Compiler ---- Error while pr
Hi!
I was deploying a test application for a session bean with Sun ONE Studio 5 and I started getting this message while deploying.
I had tested the bean previously and I had no problems.
I found this in the sun app server 7 release notes, but I don't understand what I'm supposed to do...
"Deployment of CMP beans fails.
The following error is thrown because there are no <query-params> entries in the container-managed persistence (CMP) bean in sun-ejb-jar.xml file:
Error while running ejbc. Fatal error from EJB Compiler ---- Error while processing CMP beans.
Solution
Even if it isn't necessary for the CMP beans, add the query-params tag for finders in the sun-ejb-jar.xml file with the empty parameters."
Here is my sun-ejb-jar.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE sun-ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Sun ONE Application Server 7.0 EJB 2.0//EN" "http://www.sun.com/software/sunone/appserver/dtds/sun-ejb-jar_2_0-0.dtd">
<sun-ejb-jar>
<enterprise-beans>
<name>GestorDoBanco_EJBModule</name>
<ejb>
<ejb-name>Cliente</ejb-name>
<jndi-name>ejb/Cliente</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/Cliente.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>ClienteAssociadoAConta</ejb-name>
<jndi-name>ejb/ClienteAssociadoAConta</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/ClienteAssociadoAConta.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>Conta</ejb-name>
<jndi-name>ejb/Conta</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/Conta.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>GestorDoBanco</ejb-name>
<jndi-name>ejb/GestorDoBanco</jndi-name>
<pass-by-reference>false</pass-by-reference>
</ejb>
<ejb>
<ejb-name>MensagemM003</ejb-name>
<jndi-name>ejb/MensagemM003</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/MensagemM003.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>MensagemM003Rejeitada</ejb-name>
<jndi-name>ejb/MensagemM003Rejeitada</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/MensagemM003Rejeitada.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>MensagemM012</ejb-name>
<jndi-name>ejb/MensagemM012</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/MensagemM012.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>MensagemM012Rejeitada</ejb-name>
<jndi-name>ejb/MensagemM012Rejeitada</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/MensagemM012Rejeitada.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>MensagemM103</ejb-name>
<jndi-name>ejb/MensagemM103</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/MensagemM103.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>MensagemM112</ejb-name>
<jndi-name>ejb/MensagemM112</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/MensagemM112.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>Registo</ejb-name>
<jndi-name>ejb/Registo</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/Registo.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>TransaccaoConfirmada</ejb-name>
<jndi-name>ejb/TransaccaoConfirmada</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/TransaccaoConfirmada.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>TransaccaoFinalizada</ejb-name>
<jndi-name>ejb/TransaccaoFinalizada</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/TransaccaoFinalizada.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<ejb>
<ejb-name>TransaccaoIniciada</ejb-name>
<jndi-name>ejb/TransaccaoIniciada</jndi-name>
<pass-by-reference>false</pass-by-reference>
<cmp>
<mapping-properties>pcImpl0/moduleComp1/Data/TransaccaoIniciada.mapping</mapping-properties>
</cmp>
<refresh-period-in-seconds>600</refresh-period-in-seconds>
</ejb>
<pm-descriptors>
<pm-descriptor>
<pm-identifier>SunONE</pm-identifier>
<pm-version>1.0</pm-version>
<pm-class-generator>com.iplanet.ias.persistence.internal.ejb.ejbc.JDOCodeGenerator</pm-class-generator>
<pm-mapping-factory>com.iplanet.ias.cmp.NullFactory</pm-mapping-factory>
</pm-descriptor>
<pm-inuse>
<pm-identifier>SunONE</pm-identifier>
<pm-version>1.0</pm-version>
</pm-inuse>
</pm-descriptors>
<cmp-resource>
<jndi-name>mysqlpmanager</jndi-name>
<default-resource-principal>
<name>bes</name>
<password>besbes</password>
</default-resource-principal>
</cmp-resource>
</enterprise-beans>
</sun-ejb-jar>
Thanks in advance for any help.
Nuno
http://docs.sun.com/source/817-2175-10/decmp.html
Please go to the above docs and look thru the examples given in it.
Example 2
This query returns all products in a specified price range. It defines two query parameters which are the lower and upper bound for the price: double low, double high. The filter compares the query parameters with the price field:
"low < price && price < high"
The finder element of the sun-ejb-jar.xml file would look like this:
<finder>
<method-name>findInRange</method-name>
<query-params>double low, double high</query-params>
<query-filter>low < price && price < high</query-filter>
</finder>
I hope this helps. In your case you just have to make the parameters empty. -
Error from the session log between Informatica and SAP BI
HI friends,
I am working on extraction from BI using Informatica 8.6.1.
I started the process chain from BI, and I got an error in Informatica's session log. Please help me figure out what's going on during execution.
Severity Timestamp Node Thread Message Code Message
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6228 Writing session output to log file [D:\Informatica\PowerCenter8.6.1\server\infa_shared\SessLogs\s_taorh.log].
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6014 Initializing session [s_taorh] at [Fri Dec 17 11:01:31 2010].
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6683 Repository Name: [RepService_dcinfa01]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6684 Server Name: [IntService_dcinfa01]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6686 Folder: [xzTraining]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6685 Workflow: [wf_taorh] Run Instance Name: [] Run Id: [43]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6101 Mapping name: m_taorh.
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS.US]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6703 Session [s_taorh] is run by 32-bit Integration Service [node01_dcinfa01], version [8.6.1], build [1218].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24058 Running Partition Group [1].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24000 Parallel Pipeline Engine initializing.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24001 Parallel Pipeline Engine running.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24003 Initializing session run.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING CMN_1569 Server Mode: [UNICODE]
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING CMN_1570 Server Code page: [MS Windows Simplified Chinese, superset of GB 2312-80, EUC encoding]
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6151 The session sort order is [Binary].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6156 Using low precision processing.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6180 Deadlock retry logic will not be implemented.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING SDKS_38029 Loaded plug-in 300320: [PowerExchange for SAP BW - OHS reader plugin 8.6.1 build 183].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING SDKS_38024 Plug-in 300320 initialization complete.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING PCCL_97003 [WARNING] Real-time session is not enabled for source [AMGDSQ_IS_TAORH]. Real-time Flush Latency value must be 1 or higher for a session to run in real time.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING SDKS_38016 Reader SDK plug-in intialization complete.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6307 DTM error log disabled.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TE_7022 TShmWriter: Initialized
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6007 DTM initialized successfully for session [s_taorh]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR PETL_24033 All DTM Connection Info: [<NONE>].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24004 PETL_24004 Starting pre-session tasks. : (Fri Dec 17 11:01:31 2010)
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24027 PETL_24027 Pre-session task completed successfully. : (Fri Dec 17 11:01:31 2010)
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR PETL_24006 Starting data movement.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6660 Total Buffer Pool size is 1219648 bytes and Block size is 65536 bytes.
INFO 2010-12-17 11:01:31 node01_dcinfa01 READER_1_1_1 OHS_99013 [INFO] Partition 0: Connecting to SAP system with DESTINATION = sapbw, USER = taorh, CLIENT = 800, LANGUAGE = en
INFO 2010-12-17 11:01:32 node01_dcinfa01 READER_1_1_1 OHS_99016 [INFO] Partition 0: BW extraction for Request ID [163] has started.
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8167 Start loading table [VENDOR] at: Fri Dec 17 11:01:32 2010
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8168 End loading table [VENDOR] at: Fri Dec 17 11:01:32 2010
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8141
Commit on end-of-data Fri Dec 17 11:01:32 2010
===================================================
WRT_8036 Target: VENDOR (Instance Name: [VENDOR])
WRT_8044 No data loaded for this target
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8143
Commit at end of Load Order Group Fri Dec 17 11:01:32 2010
===================================================
WRT_8036 Target: VENDOR (Instance Name: [VENDOR])
WRT_8044 No data loaded for this target
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8035 Load complete time: Fri Dec 17 11:01:32 2010
LOAD SUMMARY
============
WRT_8036 Target: VENDOR (Instance Name: [VENDOR])
WRT_8044 No data loaded for this target
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8043 ****END LOAD SESSION****
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8006 Writer run completed.
INFO 2010-12-17 11:01:33 node01_dcinfa01 MANAGER PETL_24031
RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [AMGDSQ_IS_TAORH] has completed. The total run time was insufficient for any meaningful statistics.
Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [AMGDSQ_IS_TAORH] has completed. The total run time was insufficient for any meaningful statistics.
Thread [WRITER_1_*_1] created for [the write stage] of partition point [VENDOR] has completed. The total run time was insufficient for any meaningful statistics.
INFO 2010-12-17 11:01:33 node01_dcinfa01 MANAGER PETL_24005 PETL_24005 Starting post-session tasks. : (Fri Dec 17 11:01:33 2010)
INFO 2010-12-17 11:01:33 node01_dcinfa01 MANAGER PETL_24029 PETL_24029 Post-session task completed successfully. : (Fri Dec 17 11:01:33 2010)
INFO 2010-12-17 11:01:33 node01_dcinfa01 MAPPING SDKS_38025 Plug-in 300320 deinitialized and unloaded with status [-1].
INFO 2010-12-17 11:01:33 node01_dcinfa01 MAPPING SDKS_38018 Reader SDK plug-ins deinitialized with status [-1].
INFO 2010-12-17 11:01:33 node01_dcinfa01 MAPPING TM_6018 The session completed with [0] row transformation errors.
INFO 2010-12-17 11:01:33 node01_dcinfa01 MANAGER PETL_24002 Parallel Pipeline Engine finished.
INFO 2010-12-17 11:01:33 node01_dcinfa01 DIRECTOR PETL_24013 Session run completed with failure.
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6022
SESSION LOAD SUMMARY
================================================
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6252 Source Load Summary.
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR CMN_1537 Table: [AMGDSQ_IS_TAORH] (Instance Name: [AMGDSQ_IS_TAORH]) with group id[1] with view name [Group1]
Rows Output [0], Rows Affected [0], Rows Applied [0], Rows Rejected[0]
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6253 Target Load Summary.
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR CMN_1740 Table: [VENDOR] (Instance Name: [VENDOR])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6023
===================================================
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6020 Session [s_taorh] completed at [Fri Dec 17 11:01:33 2010]. -
We are recording live streams on a Flash Media Interactive Server 3.5.3 r824. In general, everything works fine, so there are no application issues. But sometimes (it is not reproducible yet) a stream stops recording without any notice or event in the application. All we can find is a message like this in our core.log:
2010-03-05 03:30:00 4747 (e)2611178 Error from libmp4.so: No Space left in the stsd box. -
2010-03-05 03:30:00 4747 (e)2611423 Failed to record [...]16891_14351_RGtBCODxPR4cM8QfML9CuxqhHqutMwWX.f4v (Unknown Error). -
Can anyone give me a hint where I could start searching for the cause of this error?
These streams are in general sent by Adobe Media Live Encoder.
Thanks in advance
Suha
You're running out of sample description space in your recording of an F4V - this is presumably because you're splicing together different H.264 encodings or other types of media. No matter; you can configure this value up from its default of 10. Check out Server.xml in your configs and you'll find this section:
<Recording>
<!-- Maximum ELST entries in a recording. ELST entries are used when there -->
<!-- are gaps in a kind of content. Gaps occur during an append to the file -->
<!-- or when content like video ends while other content proceeds. If more -->
<!-- gaps or appends occur than configured here, recording would terminate -->
<!-- Making this value too high takes up unnecessary space in each recorded file-->
<!-- Default value is 100 -->
<MaxELSTEntries>100</MaxELSTEntries>
<!-- Each change in codec for a content type, like two different video codecs -->
<!-- takes a sample description. All space for sample descriptions is made on -->
<!-- file creation. If codec type changes more than descriptions available -->
<!-- recording will terminate. Adding too many descriptions takes unnecessary -->
<!-- space for every file record. Default is 10 for each type -->
<MaxDataSampleDescriptions>10</MaxDataSampleDescriptions>
<MaxAudioSampleDescriptions>10</MaxAudioSampleDescriptions>
<MaxVideoSampleDescriptions>10</MaxVideoSampleDescriptions>
</Recording>
You'll want to increase the appropriate SampleDescription maximum. I'm not sure which one it is yet (audio/video/data), but in theory you can increase any or all as needed. These boxes are sized when you start your recording, so all recordings will bloat very slightly to cover this case that most users don't run into, but feel free to set the SampleDescription limits higher and you should stop seeing this. -
I could not make a high-resolution PDF from Adobe InDesign CS6
I could not make a high-resolution PDF from InDesign.
Error code 0x00080148 appears and InDesign closes automatically.
Please solve my hiccup.
thanks
Lakshminarayanan G -
Db_verify: PANIC: fatal region error detected; run recovery
We have an application that is using bdb. On one of the instances it frequently hits a panic condition. db_verify reports the following:
# db_verify -o file.db
db_verify: PANIC: fatal region error detected; run recovery
# db_verify -V
Sleepycat Software: Berkeley DB 4.3.29: (June 16, 2006)
Quitting the application and removing the __db.001 file will allow a clean restart. How do I debug where this problem might be coming from?
thanks,
kevin
Hi Kevin,
user11964780 wrote:
# db_verify -o file.db
db_verify: PANIC: fatal region error detected; run recovery
This is a generic error message that means the region files are corrupted. This is most often a problem in the application.
user11964780 wrote:
How do I debug where this problem might be coming from?
See the Reference Guide, Chapter 24: Debugging Applications - http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/debug.html
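Since the panic message says to run recovery, the usual sequence (a sketch; the environment home path is a placeholder) is to stop every process using the environment and run the db_recover utility, rather than deleting region files like __db.001 by hand:

```shell
# Stop all processes that have the environment open first, then run
# normal recovery: this recreates the region files (__db.*) and brings
# the databases to a consistent state using the transaction logs.
db_recover -h /path/to/db_home

# If normal recovery fails, catastrophic recovery (-c) replays all
# available log files, including archived ones restored into the
# environment directory.
db_recover -c -h /path/to/db_home
```

Running recovery on every application startup (or letting the application open the environment with the recovery flag) avoids stale region files after a crash.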
Bogdan Coman -
I just noticed today that any Graph or PDF report I try to view I get the
following error - Error from reports from ZAM - File does not begin with
'%PDF-'. I tried to just save the file and then open it, but get the same
error. I can open other pdf files I have downloaded from other sites ok.
Anyone have any idea? I haven't done anything to that server that I am
aware of in quite a while.
Thanks
Bill
I took a closer look at the files it downloaded and opened them with Notepad; here is what it says:
XSL Transform or subsequent processing failed
The document has no pages.
"Bill" <[email protected]> wrote in message
news:2bBtk.2164$[email protected]..
>I just noticed today that any Graph or PDF report I try to view I get the
>following error - Error from reports from ZAM - File does not begin with
>'%PDF-'. I tried to just save the file and then open it, but get the same
>error. I can open other pdf files I have downloaded from other sites ok.
>Anyone have any idea? I haven't done anything to that server that I am
>aware of in quite a while.
>
> Thanks
>
> Bill
> -
As soon as I login with iChat, I get an error from MobileMe!
Hey, I've had this issue for a while, and my work around for it was just not using iChat. However, I want to use iChat now so I need a proper resolution. Now I have mobile me with my [email protected] address, and I have my iChat account with [email protected] (created before my MM subscription).
When I log into iChat for the first time, within minutes I get an error from MobileMe saying "You've entered an incorrect password for MobileMe, please try again"... I didn't do anything though.
If I go to MobileMe in System Preferences, it says: "Your password has changed. Enter your new password.
If you have not changed (or reset) your password contact MobileMe support."
If I enter my password and get that fixed, the next time I start up iChat it says: "Your login ID or password is incorrect"... so I'm confused.
The passwords for my @me and @mac addresses are different; is it possible they are sharing the same keychain entry? I do have the check box ticked to remember the password.
Any ideas?
Hi,
I would not repair at this time.
In the list of passwords you should find the one for iChat, or possibly several for iChat if you have different Account/Screen Names, listed by their account names.
Double-click on the one that is causing your issue.
In the new window that pops up, put a tick in the password box. You will probably have to confirm with your admin password to Allow Once.
Does this show the right password?
If not change the Password.
If it does then click the Access Tab and see if iChat is Allowed (Or Allow All is being used)
Either change to Allow All or add iChat to the List.
7:45 PM Saturday; April 3, 2010
Please, if posting Logs, do not post any Log info after the line "Binary Images for iChat" -
How to get actual error from Crystal Report
We are using Crystal report in web service.
We faced some problem due to crystal report unexpected error.
Refer the below error message.
Xception E NSF NSFZ1100 20100608 145511565 GPRAB0 : GPRZ10 GUEC0001 [1] AbstractService Showing a modal dialog box or form when the application is not running in UserInteractive mode is not a valid operation. Specify the ServiceNotification or DefaultDesktopOnly style to display a notification from a service application.
Xception E NSF NSFZ1100 20100608 145511972 GPRAB0 : GPRZ10 GUEC0001 [1] AbstractService at System.Windows.Forms.MessageBox.ShowCore(IWin32Window owner, String text, String caption, MessageBoxButtons buttons, MessageBoxIcon icon, MessageBoxDefaultButton defaultButton, MessageBoxOptions options, Boolean showHelp)
at System.Windows.Forms.MessageBox.Show(String text, String caption, MessageBoxButtons buttons, MessageBoxIcon icon)
at CrystalDecisions.Windows.Forms.CrystalReportViewer.HandleExceptionEvent(Object eventSource, Exception e, Boolean suppressMessage)
at CrystalDecisions.Windows.Forms.CrystalReportViewer.HandleExceptionEvent(Object eventSource, Exception e)
at CrystalDecisions.Windows.Forms.ReportDocumentBase.GetLastPageNumber()
at CrystalDecisions.Windows.Forms.ReportDocumentBase.GetLastPage()
at CrystalDecisions.Windows.Forms.DocumentControl.ShowLastPage()
at CrystalDecisions.Windows.Forms.PageView.ShowLastPage()
at Biz.Nissan.Cats.CORE.REPORT.LibCrystalReport.TotalPageCount(ReportDocument Rpt)
at Biz.Nissan.Cats.CORE.REPORT.LibCrystalReport.Print(BaseReport RptDefinition)
at Biz.Nissan.Cats.CORE.REPORT.MCTLIST260Print.Print(IFData ifData)
at Biz.Nissan.W3.CATS.BC.Service.DistributeService.ExecuteMpp()
How do we get the actual error from Crystal Report?
Thanks in advance
Same as
crystal report unexpected error in Web service (IIS)
Closing this thread.
Ludek -
Got event ID 4015 with source DNS-Server-Service. Please suggest how to fix this issue.
The DNS server has encountered a critical error from the Active Directory. Check that the Active Directory is functioning properly. The extended error debug information (which may be empty) is "". The event data contains the error.
Raj
Hi,
First run "ipconfig /flushdns" and then "ipconfig /registerdns", then restart the DNS service and check the situation. You can also check the DNS logs under Computer Management -> Event Viewer -> Custom Views -> Server Roles -> DNS. -
Smartform Printing : Error in spool C call: Error from TemSe
Hi! everybody,
I am stating my problem as follows: I have to print a bar-code sticker of size 10 x 7 cm. I have worked on bar-codes before. This time the output from a Smartform is to be given to a ZEBRA TLP 2844 printer. I initially encountered problems in printing; the data does not fit on the page.
After a lot of searching I found that ZEBRA is an SAP partner and that a special device type needs to be created for output to a ZEBRA printer. I did this twice. Each time my steps were as described below.
To create the device type I did the following :
1. I finally found the driver on the ZEBRA website at http://www.zebra.com/id/products/global/en/utilities/sap_device_types.UtilityFile.File.tmp/Zebra_SAP_Device_Types.zip From this I uploaded the driver for the 203 DPI Zebra printer with IBM code page 850 (file name "YZB200.PRI") via transaction SA38.
2. Then I created a new device ZEB10 in SPAD after assigning my format to the device.
Thereafter, I tried to print my sticker. During this procedure, on selecting the new device type, the fonts automatically changed to ARIAL in the print preview. When I give the print command (Spool request: Print immed = X, Delete after output = X & New spool request = X) it gives the error message "Error in spool C call: Error from TemSe".
Since the output had not been issued, I opened the spool request to view its TemSe characteristics. Here I found
Spool Attributes
Output Device ZEB10
Format ZTT Format
Doc. Category SMART
Recipient
Department
Deleted On 19.01.2011
Authorization
Output Attributes
No. of Copies 1
Priority 5
SAP Cover Page Do not print SAP cover page
OS Cover Sheet Print as set at printer
Requested 0
Processed 0 With Problem 0
With Error (Not Printed) 0
Storage Mode Print
TemSe Attributes
Object name SPOOL0000013836
Data type ????????????
Character set 0 -
> Character set of dev type = 1162
Number of parts 0
Record format
Size in bytes 0
Storage location
On seeing SP01
Spool no. Type Date Time Status Pages Title
13836 Smartforms(OTF) 11.01.2011 07:32 + 0 SMART LP01 USERID
I hope this data helps you help me. Please ask for more data if you wish. Also, I have searched extensively for this error on the net and have already come across the link http://help.sap.com/saphelp_45b/helpdata/en/d9/4a8f9c51ea11d189570000e829fbbd/frameset.htm but to no avail. On SDN, I have not found a similar thread, which is why I decided to post this problem here, hoping to find a solution. Kindly help.
Regards,
Manas
Hi Manas,
I am facing the same issue for one of my clients.
Can you please share the solution with me if you have come out with it.
Regards,
Nirmal.K -
How to remove error from propagation and verify replication is ok?
We have a one-way schema-level Streams setup, and the target DB is a 3-node RAC (named PDAMLPR1). I ran a large insert at the source DB (35 million rows). After committing on the source, I made a failure test on the target DB by shutting down the entire database. Streams seems to have stopped, as the heartbeat table on the target (a row is inserted each minute) still reflects last night. We get this error in dba_propagation:
ORA-02068: following severe error from PDAMLPR1
ORA-01033: ORACLE initialization or shutdown in progress
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 1087
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 7639
ORA-06512: at "SYS.DBMS_AQADM", line 631
ORA-06512: at line 1
08-FEB-10
while capture, propagation, and apply are all in enabled status. I restarted capture and propagation at the source DB, but still see the error message above. My questions are:
1. How to delete the error from dba_propagation?
2. How to verify the streams is still running fine?
In a normal test, during such a large insert, the heartbeat table added only a row in an hour. Very slow.
thanks for advice.
Well, if I can give you my point of view: I think that 35 million LCRs is totally unreasonable. Did you really post a huge insert of 35 million rows and then commit that single, utterly huge transaction? Don't be surprised it's going to work very, very hard for a while!
With a default setup, Oracle recommends committing every 1000 LCRs (row changes).
There are ways to tune Streams for large transactions, but I have not done so personally. Look on Metalink; you will find information about that (mostly document IDs 335516.1, 365648.1 and 730036.1).
One more thing: you mentioned a failure test. Your target database is RAC. Did you read about queue ownership? Queue-to-queue propagation? You might have an issue related to that.
How did you set up your environment? Did you give enough streams_pool_size? You can watch V$STREAMS_POOL_ADVICE to check what Oracle thinks is good for your workload.
If you want to skip the transaction, you can remove the table rule or use the IGNORETRANSACTION apply parameter.
Hope it helps
Regards, -
Need Help in expdp for resolving ORA-39127: unexpected error from call
Hi All,
My Environment is -------> Oracle 11g Database Release 1 On Windows 2003 Server SP2
Requirement is ------------> Data Pump Jobs to be completed without any error message.
I am trying to take a Data Pump export of a schema.
Command Used --> expdp schemas=scott directory=data_pump_dir dumpfile=scorr.dmp version=11.1.0.6.0
The export log shows the details; it completed with 2 error messages:
Export: Release 11.1.0.6.0 - Production on Saturday, 23 April, 2011 13:31:10
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the OLAP option
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** schemas=scott directory=data_pump_dir dumpfile=scorr.dmp version=11.1.0.6.0
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.schema_info_exp('SCOTT',0,1,'11.01.00.06.00',newblock)
ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found.
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 205
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 280
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 5980
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.schema_info_exp('SCOTT',1,1,'11.01.00.06.00',newblock)
ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found.
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 205
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 280
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 5980
. . exported "SCOTT"."DEPT" 5.945 KB 4 rows
. . exported "SCOTT"."EMP" 8.585 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.875 KB 5 rows
. . exported "SCOTT"."ACCTYPE_GL_MAS" 0 KB 0 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
D:\APP\ADMINISTRATOR\ADMIN\SIPDB\DPDUMP\SCORR.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at 13:40:08
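When a long Data Pump log ends with "completed with N error(s)", it can help to pull out just the top-level ORA- errors for a support thread. The short script below is only a convenience sketch for scanning a saved log file, not Oracle tooling; it treats ORA-06512 lines as call-stack frames rather than separate errors, which matches how the log above counts "2 error(s)".

```python
import re

def count_ora_errors(log_text):
    """Collect top-level ORA-xxxxx error codes from a Data Pump log.

    ORA-06512 lines are "at line ..." call-stack frames, not separate
    errors, so they are skipped when tallying.
    """
    errors = []
    for line in log_text.splitlines():
        m = re.match(r"(ORA-\d{5}):", line.strip())
        if m and m.group(1) != "ORA-06512":
            errors.append(m.group(1))
    return errors

log = """\
ORA-39127: unexpected error from call to export_string
ORA-37111: Unable to load the OLAP API sharable library
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 205
"""
print(count_ora_errors(log))  # ['ORA-39127', 'ORA-37111']
```

Running this over the full log above would report ORA-39127 and ORA-37111 twice each, matching the "2 error(s)" summary (the same pair of errors fired at two points in the export).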
Please help me to resolve this issue.
Thank you,
Shan
Hi Shan,
I am getting an error message very similar to yours while creating an OLAP Analytic Workspace with AWM:
"ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found."
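ORA-37111 indicates that the Oracle client could not load the OLAP API shared library from the ORACLE_HOME. As a first diagnostic, one can check whether any OLAP library files are present on disk. The file names and locations vary by platform and release (.so on Unix, .dll on Windows), so the glob patterns and the example ORACLE_HOME path below are assumptions for illustration only:

```python
import glob
import os

def find_olap_libs(oracle_home):
    """Look for OLAP API shared library files under an ORACLE_HOME.

    The exact file names differ by platform and release, so this just
    globs a few plausible patterns; an empty result suggests the OLAP
    option files are missing or the home path is wrong.
    """
    patterns = ["lib/*olapapi*", "bin/*olapapi*", "olap/**/*"]
    hits = []
    for pat in patterns:
        hits.extend(glob.glob(os.path.join(oracle_home, pat), recursive=True))
    return sorted(set(hits))

# Usage (this ORACLE_HOME value is a made-up example):
# print(find_olap_libs(r"D:\app\Administrator\product\11.1.0\db_1"))
```

If nothing is found, the OLAP option is likely not installed (or not installed correctly) in that home, which is consistent with the "specified module could not be found" text in the error.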
(I am creating a workspace for the first time, actually following a tutorial to get some knowledge about OLAP.)
I see you managed to solve your problem.
I wonder how I can get this MOS DOC 852794.1 - is it possible to get it without going to Metalink?
Thanks in advance for any help.
Regards,
SC -
Error 500--Internal Server Error From RFC 2068 Hypertext Transfer Protocol
We are encountering the following error while navigating the dashboards/reports through OBIEE: "Error 500--Internal Server Error From RFC 2068 Hypertext Transfer Protocol"
While navigating from one dashboard page to another this error is encountered or we get kicked out of the session and need to log back in.
Has anyone experienced the same before?
Thanks Srini.
The issue we are facing is not specific to post-installation. The environment functions fine, but when we navigate dashboard pages, this error pops up. It is intermittent and applicable to certain dashboards and reports. Not sure if you have ever encountered the dashboard/report design causing this?
Ganesh -
Transport error from dev to quality
HI,
I am getting the following error while transporting a request from the Dev system to Quality. Please suggest a solution.
Start of the after-import method RS_RSFO_AFTER_IMPORT for object type(s) RSFO (Activation Mode
Start of the after-import method RS_ISTD_AFTER_IMPORT for object type(s) ISTD (Activation Mode
Start of the after-import method RS_ISCS_AFTER_IMPORT for object type(s) ISCS (Activation Mode
Start of the after-import method RS_ISMP_AFTER_IMPORT for object type(s) ISMP (Activation Mode
Start of the after-import method RS_ISTS_AFTER_IMPORT for object type(s) ISTS (Activation Mode
Start of the after-import method RS_ISTS_AFTER_IMPORT for object type(s) ISTS (Delete Mode)
Start of the after-import method RS_ISMP_AFTER_IMPORT for object type(s) ISMP (Delete Mode)
Start of the after-import method RS_ISCS_AFTER_IMPORT for object type(s) ISCS (Delete Mode)
Start of the after-import method RS_ISTD_AFTER_IMPORT for object type(s) ISTD (Delete Mode)
Start of the after-import method RS_RSFO_AFTER_IMPORT for object type(s) RSFO (Delete Mode)
Errors occurred during post-handling RS_AFTER_IMPORT for ISCS L
The errors affect the following components:
BW-WHM (Warehouse Management)
Post-import methods of change/transport request completed
Start of subsequent processing ... 20100409231713
End of subsequent processing... 20100409231716
Thanks,
kranti
Start of the after-import method RS_ODSO_AFTER_IMPORT for object type(s) ODSO (Activation
InfoObject ZCHKNUM is not available in version A
InfoObject ZCHKNUM is not available in version A
InfoObject ZCHKNUM is not available in version A
InfoObject ZCHKNUM is not available in version A
InfoObject ZCHKDATE is not available in version A
Inconsistencies found while checking DataStore object ZTCM_D02
Start of the after-import method RS_ODSO_AFTER_IMPORT for object type(s) ODSO (Delete Mode
Errors occurred during post-handling RS_AFTER_IMPORT for ODSO L
The errors affect the following components:
BW-WHM (Warehouse Management)
Post-import methods of change/transport request D21K918324 completed
Start of subsequent processing ... 20091112090953
End of subsequent processing... 20091112090954
Can you please post the complete error message you are getting?
In the above, the DSO was transported to Quality, but the InfoObjects were not active in Quality,
so the transport failed.
Maybe you are looking for
-
RE: DBSessions and Single-threading
Thanks Linh. Always good to hear from you. thanks ka Kamran Amin Forte Technical Leader, Core Systems (203)-459-7362 or 8-204-7362 - Trumbull [email protected] From: Linh Pham[SMTP:[email protected]] Sent: Friday, November 13, 1998 2:51 PM To: Ajith Kalla
-
Captivate 5.5 videos won't open in IE
When I publish a video with .swf, .swf_skin, and htm files, it works in other browsers, but won't open in Internet Explorer. Does anyone have any suggestions? Most of our clients use IE.
-
CProjects and Sales Order link
Hi, We are currently using cProjects for the purpose of Resource allocation. As a part of process, projects are primarily created in SAP and they automatically gets created in cProjects. However, the management has now decided to use it for tracking
-
Some question, please, help
Hi all (sorry for my bad English). I am trying to install Sol8 on a computer where I have Win2000 installed. I freed some disk space (using PartitionMagic) and tried to install Sol8 to this free space. (I have an IBM Deskstar 30Gb hard drive = 57000 cylinders.) Instalatio
-
How can I block a PDF from opening locally?
Hello, I am looking for a way to prevent a PDF from opening if it is not being read on the internet. Basically, make it so that if a person downloads the PDF to their own machine, the PDF does not open, or an alert message is displayed and viewing is blocked.