Compiling Flex2 in Linux - Segmentation Fault
Dear all,
I am a newbie to Flex, and my system is running Ubuntu
7.04. Java 6 and Java 5 are already installed.
I have downloaded and unzipped the Flex 2 SDK under
/usr/local/flex2.
But when I try to build the samples by running
./build-samples.sh, it only shows the following error and Flex
won't compile:
processing ./hybridstore/build.sh
Loading configuration file
/usr/local/flex2/frameworks/flex-config.xml
Segmentation fault (core dumped)
Can anybody please help me out of this problem?
Thanks in advance.
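A first thing worth checking (a sketch, not a confirmed fix): the Flex 2 wrapper scripts run whichever `java` binary is first on the PATH, and on Ubuntu that can be GCJ rather than the Sun JDK, which is a common cause of crashes like this. The JDK path below is an example and may differ on your system.

```shell
# Show which java the build scripts will pick up (GCJ here is a red flag):
command -v java && java -version 2>&1 | head -n 1

# Point the build at a known-good Sun JDK (path is an example):
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export PATH="$JAVA_HOME/bin:$PATH"
```

After exporting these, re-run ./build-samples.sh in the same shell.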
Similar Messages
-
Adobe Reader 9.3 for Linux: Segmentation Fault in PPKLite.api
Adobe Reader 9.3 cannot open signed pdf files on our system (64-bit Fedora 10 Linux, with 32-bit compatibility libraries installed). It crashes with a segmentation fault any time one tries to open the following file:
http://www.utoronto.ca/ic/software/forms/matlab_concurrent_renewal2010.pdf
Running with ACRODEBUG=1 and ACRO_CRASHLOG=1 results in a zero-length crash log file and the only debug messages shown are
Loading PlugIn /opt/Adobe/Reader9/Reader/intellinux/plug_ins/Annots.api ... [dlopen success for Annots.api, handle = 0xc4548f8]
Loading PlugIn /opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api ... [dlopen success for AcroForm.api, handle = 0xd0c3a50]
Loading PlugIn /opt/Adobe/Reader9/Reader/intellinux/plug_ins/DigSig.api ... [dlopen success for DigSig.api, handle = 0xd0dcc68]
Loading PlugIn /opt/Adobe/Reader9/Reader/intellinux/plug_ins/EScript.api ... [dlopen success for EScript.api, handle = 0xd126ac8]
Loading PlugIn /opt/Adobe/Reader9/Reader/intellinux/plug_ins/PPKLite.api ... [dlopen success for PPKLite.api, handle = 0xd7b1ff0]
If I run the gdb debugger on /opt/Adobe/Reader9/Reader/intellinux/bin/acroread, it shows the segmentation fault occurring inside /opt/Adobe/Reader9/Reader/intellinux/plug_ins/PPKLite.api (but since the file is stripped, all it shows is the binary offset of the segfault). I do have a coredump, though.
If I remove the PPKLite.api file (or rename it to PPKLite.api.hide -- note that chmod -x does not work, nor does renaming it to something else that still ends in .api, unlike some workarounds I've seen on the web for similar problems in earlier versions of Adobe Reader), then Adobe Reader can open the file without crashing. However, without PPKLite it cannot validate the file's signature, so it displays the message "This document enabled extended features in Adobe Reader. The document has been changed since it was created and use of extended features is no longer available. Please contact the author for the original version of this document" -- which I assume is because the signature validation failed due to the absence of PPKLite, so it thinks the document has been altered even though it hasn't been.
So this removing or renaming of PPKLite.api works around the problem of the crash, at the expense of getting this misleading warning message.
However, obviously there is a problem inside PPKLite.api that needs to be fixed.
I'm seeing what must be the same issue: acroread foo.pdf exits almost instantly without any message, but with status 1.
This is: AdobeReader_enu-9.3.1-1.i486 on a Fedora 12 box, AMD Phenom II 945.
If I set ACRODEBUG=1 and ACRO_CRASHLOG=1 and ulimit -c unlimited, and rename PPKLite.api, *then* I get a nonzero crashlog. Which is:
/usr/bin/acroread [0x84ff125] [@0x8048000]
(__kernel_sigreturn+0x0) [0xf7786400] [@0xf7786000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf44dc99b] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf41e1cf5] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf41e436d] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf41e57a3] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf41f81b2] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf41d4863] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf3c44111] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf3c47268] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf3f7446a] [@0xf3b6d000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api [0xf3f74e6d] [@0xf3b6d000]
/usr/bin/acroread [0x892da33] [@0x8048000]
/usr/bin/acroread [0x873dee9] [@0x8048000]
/usr/bin/acroread [0x8741530] [@0x8048000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/DigSig.api [0xf361fd53] [@0xf35f5000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/DigSig.api [0xf3628217] [@0xf35f5000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/DigSig.api [0xf36e3f01] [@0xf35f5000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/DigSig.api [0xf36e3f9e] [@0xf35f5000]
/usr/bin/acroread [0x892f7e7] [@0x8048000]
/usr/bin/acroread [0x8744a50] [@0x8048000]
/usr/bin/acroread [0x825bdec] [@0x8048000]
/usr/bin/acroread [0x826bb11] [@0x8048000]
/usr/bin/acroread [0x826bd32] [@0x8048000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/Annots.api [0xf4ca1b4a] [@0xf4ba9000]
/opt/Adobe/Reader9/Reader/intellinux/plug_ins/Annots.api [0xf4ca1bec] [@0xf4ba9000]
/usr/bin/acroread [0x826c1c4] [@0x8048000]
/usr/bin/acroread [0x826d0bb] [@0x8048000]
/usr/bin/acroread [0x850193f] [@0x8048000]
/usr/bin/acroread [0x85024d2] [@0x8048000]
/usr/bin/acroread(main+0x87) [0x856686d] [@0x8048000]
/lib/libc.so.6(__libc_start_main+0xe6) [0x5ddbb6] [@0x5c7000]
Without renaming the .api, all I got was five 'dlopen success' messages, and "Segmentation fault (core dumped)", but a zero length log.
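For what it's worth, the `[address] [@base]` pairs in a crash log like the one above can be turned into module-relative offsets, which is what addr2line or gdb would need. A sketch, using the first AcroForm.api frame from the log (addr2line is left commented out because the plug-in ships stripped):

```shell
ADDR=0xf44dc99b   # absolute faulting address from the crash log
BASE=0xf3b6d000   # module load base, the value printed after the @
OFFSET=$(printf '0x%x' $((ADDR - BASE)))
echo "AcroForm.api+$OFFSET"
# With a symbol-bearing build, this offset could be resolved, e.g.:
# addr2line -f -e /opt/Adobe/Reader9/Reader/intellinux/plug_ins/AcroForm.api "$OFFSET"
```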
Don -
Cfencode.linux segmentation fault
Does anybody have cfencode.linux working? I keep getting a
segmentation fault when I actually try to cfencode a file (cfencode
somefile.cfm). If I just run cfencode by itself, it seems to work in that
it prints out the usage message.
Mike
I am receiving this error too. I was originally unable to run
the program at all because it was unable to find libporting.so, but
I added the path to this file to my LD_LIBRARY_PATH global
variable. After that I was able to run the raw program to generate
a list of expected parameters, but once I try to feed a template to
it I got the segmentation fault error.
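The libporting.so fix described above looks like the usual shared-library path workaround; a sketch, with a hypothetical install directory:

```shell
# Hypothetical location of cfencode.linux and libporting.so:
CFDIR=/opt/coldfusion/bin
export LD_LIBRARY_PATH="$CFDIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# ldd would then show whether the loader resolves libporting.so:
# ldd "$CFDIR/cfencode.linux" | grep libporting
```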
Did you ever find a solution? -
Amarok2 compilation fails on 4%, segmentation fault
Hi!
I am trying to compile amarok2 using PKGBUILD from AUR. It results in error:
Linking CXX executable generator
/bin/sh: line 1: 13531 Segmentation fault /home/gpan/Desktop/amarok/src/amarok-2.0/src/scriptengine/generator/generator/generator --output-directory=/home/gpan/Desktop/amarok/src/amarok-2.0/src/scriptengine/generator/generator --include-paths=/usr/include
make[2]: *** [src/scriptengine/generator/generator/generator] Error 139
make[1]: *** [src/scriptengine/generator/generator/CMakeFiles/generator.dir/all] Error 2
make: *** [all] Error 2
==> ERROR: Build Failed.
Aborting...
What might be wrong?
Yes, it works! I've successfully compiled Amarok 2.0.1.1. Thank you pressh.
Nevertheless, it's still strange. I tried it twice as root yesterday and had no luck. Maybe it was something in the qt or cmake packages that I updated today before building again. -
Oracle 8i, SUSE Linux, Segmentation fault
When I hit the ./runInstaller it just bombs out with
segmentation fault at line 34. It looks like the runInstaller
script is running runInstaller (again?).
Is SUSE using glibc 2.1? Oracle requires glibc 2.1, and a seg
fault might be an indication your machine does not have this
capability.
Either install glibc 2.1 on your machine, or move to a distro
that is glibc 2.1 native (RedHat 6.0).
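Checking the installed glibc level is quick; a sketch (getconf is the portable route, though a SuSE release of that era may only have the libc symlink to inspect):

```shell
GLIBC=$(getconf GNU_LIBC_VERSION 2>/dev/null)
echo "${GLIBC:-unknown}"                  # e.g. "glibc 2.1"
# Fallback: the libc filename itself usually encodes the version:
ls /lib/libc.so.* /lib*/libc-*.so 2>/dev/null || true
```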
Best of Luck,
--Bryan
nick lockyer (guest) wrote:
: When I hit the ./runInstaller it just bombs out with
: segmentation fault at line 34. It looks like the runInstaller
: script is running runInstaller (again?).
-
Oracle 9i linux segmentation fault
Gentoo linux, installed as suggested at http://www.puschitz.com/OracleOnLinux.shtml#12....
I get lots of these:
Errors in file /db/Ora9i/rdbms/log/oradb_ora_1396.trc:
ORA-07445: exception encountered: core dump [skgmidrealm()+338] [SIGSEGV] [Addre
ss not mapped to object] [0x49D3D001] [] []
Mon Mar 17 16:59:54 2003
Errors in file /db/Ora9i/rdbms/log/oradb_ora_1442.trc:
ORA-07445: exception encountered: core dump [skgmidrealm()+338] [SIGSEGV] [Addre
ss not mapped to object] [0x49D3D001] [] []
Mon Mar 17 17:05:32 2003
Errors in file /db/Ora9i/rdbms/log/oradb_ora_1489.trc:
ORA-07445: exception encountered: core dump [skgmidrealm()+338] [SIGSEGV] [Addre
ss not mapped to object] [0x49D3D001] [] []
etc......
Take a look:
oracle@glow ORADB $ sqlplus /nolog
SQL*Plus: Release 9.2.0.1.0 - Production on Mon Mar 17 17:13:13 2003
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SQL> connect sys as sysdba
Enter password:
Connected to an idle instance.
SQL> @create/createdb1
SQL> spool createdb1.log
SQL> startup nomount pfile=/oracle/ORADB/pfile/initORADB.ora
ORA-03113: end-of-file on communication channel
SQL> CREATE DATABASE ORADB
2 controlfile reuse
3 logfile group 1 ('/oracle/ORADB/redo1/log1a.log',
4 '/oracle/ORADB/redo1/log1b.log') SIZE 2M,
5 group 2 ('/oracle/ORADB/redo2/log2a.log',
6 '/oracle/ORADB/redo2/log2b.log') SIZE 2M
7 datafile '/oracle/ORADB/data/system01.dbf' SIZE 300M
8 undo tablespace UNDOTBS
9 datafile '/oracle/ORADB/data/undotbs01.dbf' size 300M reuse
10 maxdatafiles 256
11 maxlogfiles 128
12 maxlogmembers 5
13 maxinstances 1
14 noarchivelog ;
CREATE DATABASE ORADB
ERROR at line 1:
ORA-03114: not connected to ORACLE
SQL> spool off
.... now in the alert log is this:
Mon Mar 17 17:13:28 2003
SCN scheme 2
Using log_archive_dest parameter default value
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 9.2.0.1.0.
System parameters with non-default values:
processes = 150
timed_statistics = TRUE
shared_pool_size = 33554432
java_pool_size = 4194304
control_files = /oracle/ORADB/ctl1/control01.ctl, /oracle/ORADB/ctl2/control02.ctl
db_block_size = 4096
db_cache_size = 46137344
compatible = 9.0.0
fast_start_mttr_target = 300
undo_management = AUTO
undo_tablespace = UNDOTBS
remote_login_passwordfile= EXCLUSIVE
db_domain = antisymmetric.com
instance_name = ORADB
dispatchers = (PROTOCOL=TCP)(SER=MODOSE), (PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer), (PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)
background_dump_dest = /oracle/ORADB/bdump
user_dump_dest = /oracle/ORADB/udump
core_dump_dest = /oracle/ORADB/cdump
sort_area_size = 512000
db_name = ORADB
open_cursors = 300
and the trace file is this:
/db/Ora9i/rdbms/log/oradb_ora_1489.trc
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
ORACLE_HOME = /db/Ora9i
System name: Linux
Node name: glow.antisymmetric.com
Release: 2.4.20
Version: #1 Sat Dec 28 09:43:33 EST 2002
Machine: i686
Instance name: ORADB
Redo thread mounted by this instance: 0 <none>
Oracle process number: 0
1489
Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0x49d3d001, PC: [0x97430c2, skgmidrealm()+338]
Registers:
%eax: 0x49d3cebd %ebx: 0x0ab94998 %ecx: 0x00000016
%edx: 0x00000000 %edi: 0xbfffe608 %esi: 0xbfffe4b8
%esp: 0xbfffe490 %ebp: 0xbfffe608 %eip: 0x097430c2
%efl: 0x00010286
skgmidrealm()+323 (0x97430b3) push %eax
skgmidrealm()+324 (0x97430b4) call 0x9747050
skgmidrealm()+329 (0x97430b9) mov %eax,0xfffffea8(%ebp)
skgmidrealm()+335 (0x97430bf) add $16,%esp
skgmidrealm()+338 (0x97430c2) cmp $0xbaceba11,0x144(%eax)skgmidrealm()+348 (0x97430cc) jne 0x97430d7
skgmidrealm()+350 (0x97430ce) cmp $3,0x120(%eax)
skgmidrealm()+357 (0x97430d5) je 0x97430e5
skgmidrealm()+359 (0x97430d7) mov 0x20(%ebp),%ecx
*** 2003-03-17 17:05:32.531
ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [skgmidrealm()+338] [SIGSEGV] [Address not mapped to object] [0x49D3D001] [] []
Current SQL information unavailable - no SGA.
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedmp()+267 call 00000000 1 ? 0 ? BFFFDB4C ? 847685E ?
0 ? 0 ?
ssexhd()+817 call ksedmp()+0 3 ? 0 ? BFFFE094 ? BFFFDB8C ?
__pthread_sighandle call ssexhd()+0 B ? BFFFE120 ? BFFFE1A0 ? 0 ?
r_rt()+100 AB94998 ? BFFFE4B8 ?
skgmidrealm()+338 signal __pthread_sighandle B ? BFFFE120 ? BFFFE1A0 ?
r_rt()+0
skgmlocate()+310 call skgmidrealm()+0 BFFFECE0 ? AB11640 ?
BFFFEBD8 ? 62C61D24 ?
62C61D24 ? 7A8003 ?
BFFFE678 ? BFFFE67C ?
BFFFE680 ? BFFFE684 ?
4341524F ? 9742911 ?
skgmattach()+366 call skgmlocate()+0 BFFFECE0 ? AB11640 ?
BFFFEBD8 ? 62C61D24 ? 0 ? 0 ?
ksmlsge()+201 call skgmattach()+0 BFFFECE0 ? AB11640 ?
BFFFEBD8 ? AB116BC ?
ksmlsg()+16 call ksmlsge()+0 0 ? 0 ? BFFFEE6C ? 8CC68A9 ?
BFFFEE6C ? 1 ?
opirip()+193 call ksmlsg()+0 0 ? BFFFF808 ? BFFFF8C4 ?
A883F79 ? BFFFF430 ?
404F7ABF ?
opidrv()+676 call opirip()+0 32 ? 0 ? 0 ? 82038F6 ?
BFFFF808 ? 4000A504 ?
sou2o()+36 call opidrv()+0 32 ? 0 ? 0 ? 8201952 ? 0 ?
main()+419 call sou2o()+0 BFFFF808 ? 32 ? 0 ? 0 ?
405D456C ? 40013840 ?
__libc_start_main() call main()+0 1 ? BFFFF8C4 ? BFFFF8CC ?
+164 8201590 ? 0 ? 4000A720 ?
A3077F6 ?
--------------------- Binary Stack Dump ---------------------
<<< ETC>>> snip snip
What's going on?!
Wow... I just upgraded glibc to 2.3.2, and everything works!
:D -
I am working on porting an application that runs on Sun Solaris SPARC (OS ver 5.8) with Berkeley DB ver 4.2.50 to SUSE Linux [ Linux version 2.6.5-7.244-smp (gcc version 3.3.3 (SuSE Linux)) ]. I have compiled the application on Linux and I am getting a segmentation fault while running it. The error occurs while objects are loaded from the database into the Berkeley cache. The application fails in the db->put function while loading the objects. The segmentation fault does not occur consistently on the same object: sometimes it is thrown while loading the first object, sometimes after loading a couple of objects. But the stack trace shows the same function whenever it crashes.
Given below is the stack trace of the application when it throws SIGSEGV.
[INFO] [-1756589376] 14:23:24.406 BerkeleyCache : restoring database table metadata for table [BSCOffice]...
[INFO] [-1756589376] 14:23:24.406 BerkeleyCache : restoring database index metadata for table [BSCOffice]...
[INFO] [-1756589376] 14:23:24.408 Obj [BSCOffice] Col [BSCOfficeId] : Typ[TEXT] MaxLen[10] Null[Y]
[INFO] [-1756589376] 14:23:24.408 Obj [BSCOffice] Col [City] : Typ[TEXT] MaxLen[100] Null[Y]
[INFO] [-1756589376] 14:23:24.408 BerkeleyCache : creating table [BSCOffice.tbl]
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 182927004352 (LWP 14638)]
0x0000002a9643203a in __db_check_txn () from /opt/home/pesprm/local/db-4.6.19/db-4.6.19/build_unix/.libs/libdb-4.6.so
(gdb) where
#0 0x0000002a9643203a in __db_check_txn () from /opt/home/pesprm/local/db-4.6.19/db-4.6.19/build_unix/.libs/libdb-4.6.so
#1 0x0000002a9643475b in __db_put_pp () from /opt/home/pesprm/local/db-4.6.19/db-4.6.19/build_unix/.libs/libdb-4.6.so
#2 0x000000000043586b in esp::BerkeleyCache::insert_i (this=0x7fbfffe380, transactionId=866640768, pObj=0x7fbfffd3e0) at BerkeleyCache.cpp:838
#3 0x00000000004119c5 in RefServer::loadObject (this=0x7fbfffdbe0, objInfo=<value optimized out>, strPrimaryObjName=@0x7fbfffd6f0,
procParams=<value optimized out>) at BerkeleyCache.h:569
#4 0x0000000000419166 in RefServer::loadObjects (this=0x7fbfffdbe0) at RefServer.cpp:579
#5 0x0000000000419748 in main (argc=<value optimized out>, argv=<value optimized out>) at RefServer.cpp:296
The code that calls the db->put() function is given below:
<pre>
bool BerkeleyCache::insert_i(size_t transactionId, const CachePersistable* pObj)
{
    bdbcache::Table* pTable = findTable_i(pObj->getPersistInfo().getObjectName());
    if (pTable == NULL)
        return false;
    DB_TXN* txnp = (DB_TXN*)transactionId;
    if (txnp == NULL)
        return false;
    bdbcache::Index* pPrimaryKey = pTable->getPrimaryKey();
    if (pPrimaryKey == NULL)
        return false;
    if (pTable->getDataOffset() == -1)
        pTable->setDataOffset( (int)(pObj->getDataBufferStartPos() - pObj->getDataStartPos()) );
    DB* pdb = pTable->getDB();
    int rc = 0;
    Synchronize sync(pPrimaryKey->getKeyBufferCriticalSection());
    pPrimaryKey->getKeyValues(pObj, pPrimaryKey->getKeyBuffer());
    DBT key, data;
    memset(&key, 0, sizeof(DBT));
    key.flags = DB_DBT_USERMEM;
    key.data = (void*)pPrimaryKey->getKeyBuffer().getBuffer();
    key.ulen = key.size = pPrimaryKey->getKeyBuffer().getBufferLength();
    memset(&data, 0, sizeof(DBT));
    data.flags = DB_DBT_USERMEM;
    if (pTable->isPrimaryDb())
    {
        data.data = (void*)pObj->getDataBufferStartPos();
        data.ulen = data.size = pObj->getDataBufferSize();
        rc = pdb->put(pdb, (DB_TXN*)transactionId, &key, &data, DB_NOOVERWRITE);
    }
    else
    {
        const PersistInfo::Property* prop = pTable->getJoinProperty();
        Variant var = pObj->getValue( prop );
        int n = 0;
        switch (prop->_eType)
        {
        case PersistInfo::CHAR_ARRAY:
        case PersistInfo::STRING:
        case PersistInfo::VAR_STRING:
        {
            const char* pch = (const char*)var;
            data.data = (void*)pch;
            data.ulen = data.size = (int)strlen(pch) + 1;
            break;
        }
        case PersistInfo::INT:
            n = (int)var;
            data.data = (void*)&n;
            data.ulen = data.size = sizeof(int);
            break;
        }
        rc = pdb->put(pdb, (DB_TXN*)transactionId, &key, &data, 0);
    }
    if (rc != DB_SUCCESS)
    {
        //_app.logError("BerkeleyCache : DB->put() failed [%s]", db_strerror(rc));
        _app.logInfo("ERROR:BerkeleyCache : DB->put() failed [%s]", db_strerror(rc));
        return false;
    }
    return true;
}
</pre>
At the end of this function, the return value of db->put is checked for DB_SUCCESS. The application never executed this line when it throws SIGSEGV.
I have tried using the BDB versions 4.5.20/4.6.18/4.6.19 with the application and it throws segmentation fault error with all versions of BDB. Following are the cache related configurable parameters that we use in our application.
cache size = 16 MB
page size = 16 KB
max locks = 3000000
dirty read = N
We are using the Sybase Adaptive DB server running in Solaris/Linux servers. The same application that runs in Solaris connecting to Sybase DB (in solaris) is working perfectly fine. And the application that i am working in linux connects to the Sybase DB running in linux server.
Please let me know what could be the issue causing our application to crash with a segmentation fault.
Thanks
Senthil
I am working on porting an application that runs on
sun solaris sparc (OS ver 5.8) with Berkeley DB ver
4.2.50 into suse linux [ Linux version
2.6.5-7.244-smp (gcc version 3.3.3 (SuSE Linux)) ].
I have compiled the application in Linux and I am
getting a segmentation fault error while running the
application. The error occurs while the objects are
loaded from the database into the Berkeley cache.
Which indicates, in almost every typical case for a well-used library like Berkeley DB, an application issue.
The application fails in the db->put function while loading
the objects. This segmentation fault error is not
consistently occurring on the same object every time.
Also indicative of typical misuse of the heap, pointer errors, or otherwise undefined behavior on the application's part.
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 182927004352 (LWP 14638)]
0x0000002a9643203a in __db_check_txn () from
/opt/home/pesprm/local/db-4.6.19/db-4.6.19/build_unix/
.libs/libdb-4.6.so
(gdb) where
#0 0x0000002a9643203a in __db_check_txn () from
/opt/home/pesprm/local/db-4.6.19/db-4.6.19/build_unix/
.libs/libdb-4.6.so
#1 0x0000002a9643475b in __db_put_pp () from
/opt/home/pesprm/local/db-4.6.19/db-4.6.19/build_unix/
.libs/libdb-4.6.so
#2 0x000000000043586b in
esp::BerkeleyCache::insert_i (this=0x7fbfffe380,
transactionId=866640768, pObj=0x7fbfffd3e0) at
BerkeleyCache.cpp:838
#3 0x00000000004119c5 in RefServer::loadObject
(this=0x7fbfffdbe0, objInfo=<value optimized out>,
strPrimaryObjName=@0x7fbfffd6f0,
procParams=<value optimized out>) at
BerkeleyCache.h:569
4 0x0000000000419166 in RefServer::loadObjects
(this=0x7fbfffdbe0) at RefServer.cpp:579
#5 0x0000000000419748 in main (argc=<value optimized
out>, argv=<value optimized out>) at
RefServer.cpp:296
Did you build the libraries stripped or otherwise remove debugging information?
run "file /opt/home/pesprm/local/db-4.6.19/db-4.6.19/build_unix/.libs/libdb-4.6.so"
Also, if I'm not mistaken this is Linux you're porting to, right? Might want to drop the "/opt" Solaris-ism as you continue to port more to Linux.
The code that calls the db->put() function is given
below:
if (pTable->getDataOffset() == -1)
Does getDataOffset() return an unsigned or signed value?
pTable->setDataOffset( (int)(pObj->getDataBufferStartPos() - pObj->getDataStartPos()) );
You cast to int here; what's the background reasoning for it?
DB* pdb = pTable->getDB();
int rc = 0;
You don't check that pdb is valid.
Synchronize sync(pPrimaryKey->getKeyBufferCriticalSection());
pPrimaryKey->getKeyValues(pObj, pPrimaryKey->getKeyBuffer());
DBT key, data;
memset(&key, 0, sizeof(DBT));
key.flags = DB_DBT_USERMEM;
key.data = (void*)pPrimaryKey->getKeyBuffer().getBuffer();
Do you verify this is even valid? Have you also checked the code within a debugger and verified the logic is correct?
key.ulen = key.size = pPrimaryKey->getKeyBuffer().getBufferLength();
Same thing.
memset(&data, 0, sizeof(DBT));
data.flags = DB_DBT_USERMEM;
if (pTable->isPrimaryDb())
data.data = (void*)pObj->getDataBufferStartPos();
data.ulen = data.size = pObj->getDataBufferSize();
How do we know these are even valid? You're coming from the perspective that your code makes no mistakes, when I have a hunch it does.
rc = pdb->put(pdb, (DB_TXN*)transactionId, &key, &data, DB_NOOVERWRITE);
Right, and this will most definitely result in a segmentation violation if you don't pass it valid data.
You're on Linux, I highly suggest you download and build Valgrind, run your application w/ valgrind --tool=memcheck, and get back to us. Also, don't strip debugging information from the berkeley db libraries during this phase of troubleshooting - especially when you're still porting something over.
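The valgrind suggestion above, spelled out as a sketch (RefServer is the binary named in the backtrace; the guard just makes this safe to run on a machine where valgrind isn't installed yet):

```shell
VG=$(command -v valgrind || true)
if [ -n "$VG" ]; then
  # memcheck flags: full leak detail and origin tracking for invalid reads/writes
  "$VG" --tool=memcheck --leak-check=full --track-origins=yes \
        ./RefServer 2> valgrind.log || echo "RefServer exited nonzero; see valgrind.log"
else
  echo "valgrind not installed" >&2
fi
```

The "Invalid write"/"Invalid read" records it emits usually point at the heap misuse long before the eventual SIGSEGV.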
I have tried using the BDB versions
4.5.20/4.6.18/4.6.19 with the application and it
throws segmentation fault error with all versions of
BDB.
Because the issue isn't within BDB.
We are using the Sybase Adaptive DB server running in
Solaris/Linux servers. The same application that
runs in Solaris connecting to Sybase DB (in solaris)
is working perfectly fine.
More correctly, it was working perfectly fine under Solaris. That doesn't mean it's not engaging in undefined behavior; it just means it "worked." -
Hello:
I successfully cross-compiled Kaffe for arm-linux, but when I test it with the HelloWorldApp program, the system reports a segmentation fault.
What's the reason? I really need your help.
Thank you.
Best Regards.
You may try to compile Kaffe with the interpreter only - not the JIT.
-
Pro c program in red hat linux gives segmentation fault
Hi,
I have a Pro*C multithreaded program running on Red Hat Enterprise Linux which gives segmentation faults in the Pro*C libraries.
I would appreciate it if someone could help me resolve this issue.
Below is the valgrind log of the execution of one thread:
total valgrind summary
Thread 2:
==31610== Invalid write of size 1
==31610== at 0x529DDF9: snlfgch (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52948E7: nlparhs (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52935FD: nlpaparse (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5293378: nlpardfile (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5292CED: nlpains (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x529661A: nlpacheck_n_load (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52962FE: nlpagap (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x531B103: nnfttran (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x531ADDC: nnftrne (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52121F5: nnfgrne (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52A8AB4: nlolgobj (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5210121: nnfun2a (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== Address 0x67d0917 is 151 bytes inside a block of size 8,216 free'd
==31610== at 0x4A06084: free (vg_replace_malloc.c:366)
==31610== by 0x5292D88: nlpains (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x529661A: nlpacheck_n_load (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52962FE: nlpagap (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x531B103: nnfttran (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x531ADDC: nnftrne (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52121F5: nnfgrne (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52A8AB4: nlolgobj (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5210121: nnfun2a (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x520FEAC: nnfsn2a (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x51F9B2D: niqname (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5110603: kwfnran (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610==
==31610== Invalid read of size 8
==31610== at 0x52948F4: nlparhs (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52935FD: nlpaparse (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5293378: nlpardfile (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5292CED: nlpains (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x529661A: nlpacheck_n_load (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52962FE: nlpagap (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x531B103: nnfttran (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x531ADDC: nnftrne (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52121F5: nnfgrne (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52A8AB4: nlolgobj (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5210121: nnfun2a (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x520FEAC: nnfsn2a (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== Address 0x0 is not stack'd, malloc'd or (recently) free'd
==31610==
==31610==
==31610== Process terminating with default action of signal 11 (SIGSEGV)
==31610== Access not within mapped region at address 0x0
==31610== at 0x52948F4: nlparhs (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52935FD: nlpaparse (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5293378: nlpardfile (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5292CED: nlpains (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x529661A: nlpacheck_n_load (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52962FE: nlpagap (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x531B103: nnfttran (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x531ADDC: nnftrne (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52121F5: nnfgrne (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x52A8AB4: nlolgobj (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x5210121: nnfun2a (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== by 0x520FEAC: nnfsn2a (in /opt/oracle/product10g/lib/libclntsh.so.10.1)
==31610== If you believe this happened as a result of a stack
==31610== overflow in your program's main thread (unlikely but
==31610== possible), you can try to increase the size of the
==31610== main thread stack using the --main-stacksize= flag.
==31610== The main thread stack size used in this run was 10485760.
==31610==
==31610== HEAP SUMMARY:
==31610== in use at exit: 3,207,418 bytes in 8,561 blocks
==31610== total heap usage: 13,879 allocs, 5,318 frees, 3,620,545 bytes allocated
your help would be appreciated
regards
Sal
One of the challenges in supporting Linux is the frequent changing of what ought to be stable public interfaces. We have found a number of changes in Red Hat 6 that affect the behavior of the compiler and applications built with the compiler.
We plan to support RH 6 (and Oracle Linux 6) in a future release of Studio. -
SQL Server Driver for Linux causes Segmentation Fault
Hello,
I'm using the SQL Server Driver 11.0.1790 on Linux with mod_perl and Apache. While running fine with all my CLI Perl apps I occasionally get Segmentation Faults when using it from within mod_perl applications. Sometimes every other connect to the database
segfaults. I've created a core dump and did a stack backtrace:
Loaded symbols for /usr/lib/../lib64/libxml2.so.2
Core was generated by `/usr/sbin/httpd2 -X'.
Program terminated with signal 11, Segmentation fault.
#0 0x0000000000000000 in ?? ()
(gdb) bt
#0 0x0000000000000000 in ?? ()
#1 0x00007f8a8aeda803 in __connect_part_two (connection=0x7f8a99c885e0) at SQLConnect.c:1891
#2 0x00007f8a8aedffd6 in SQLDriverConnect (hdbc=0x7f8a99c885e0, hwnd=0x0, conn_str_in=0x7fff1e7369ee "",
len_conn_str_in=<value optimized out>,
conn_str_out=0x7fff1e736a80 "DSN=XXXXX;UID=XXX;PWD=XXXXXXXX;WSID=XXXXXXXX;DATABASE=XXXXX;MARS_Connection=Yes;",
conn_str_out_max=512, ptr_conn_str_out=0x7fff1e736a7e, driver_completion=0) at SQLDriverConnect.c:1530
#3 0x00007f8a8b1458ee in odbc_db_login6 (dbh=0x7f8a99938ca0, imp_dbh=0x7f8a99c8acd0,
dbname=0x7fff1e736c80 "DSN=XXXXX;MARS_Connection=Yes;UID=XXX;PWD=XXXXXXXX", uid=0x7f8a9996e8b0 "XXX",
pwd=0x7f8a9996e8d0 "XXXXXXXX", attr=0x7f8a99938c40) at dbdimp.c:942
#4 0x00007f8a8b141822 in XS_DBD__ODBC__db__login (my_perl=<value optimized out>, cv=<value optimized out>) at ./ODBC.xsi:98
#5 0x00007f8a9125b091 in Perl_pp_entersub (my_perl=0x7f8a96274f50) at pp_hot.c:3046
#6 0x00007f8a912595f6 in Perl_runops_standard (my_perl=0x7f8a96274f50) at run.c:41
#7 0x00007f8a911eb755 in Perl_call_sv (my_perl=0x7f8a96274f50, sv=0x7f8a99938bc8, flags=2) at perl.c:2632
#8 0x00007f8a8b791d02 in XS_DBI_dispatch (my_perl=0x7f8a96274f50, cv=0x7f8a98cbbe60) at DBI.xs:3473
#9 0x00007f8a9125b091 in Perl_pp_entersub (my_perl=0x7f8a96274f50) at pp_hot.c:3046
#10 0x00007f8a912595f6 in Perl_runops_standard (my_perl=0x7f8a96274f50) at run.c:41
#11 0x00007f8a911ebab0 in Perl_call_sv (my_perl=0x7f8a96274f50, sv=0x7f8a97db4f68, flags=10) at perl.c:2647
#12 0x00007f8a9154ba31 in modperl_callback (my_perl=0x7f8a96274f50, handler=0x7f8a96230c90, p=0x7f8a96334838, r=0x7f8a963348b0,
s=0x7f8a962303b0, args=0x7f8a998376a8) at modperl_callback.c:101
#13 0x00007f8a9154c79c in modperl_callback_run_handlers (idx=6, type=4, r=0x7f8a963348b0, c=<value optimized out>, s=0x7f8a962303b0,
pconf=<value optimized out>, plog=0x0, ptemp=0x0, run_mode=MP_HOOK_RUN_FIRST) at modperl_callback.c:262
#14 0x00007f8a9154cb6f in modperl_callback_per_dir (idx=-1714610816, r=<value optimized out>, run_mode=<value optimized out>)
at modperl_callback.c:369
#15 0x00007f8a91546b93 in modperl_response_handler_run (r=0x7f8a963348b0) at mod_perl.c:1000
#16 modperl_response_handler (r=0x7f8a963348b0) at mod_perl.c:1039
#17 0x00007f8a95f01e08 in ap_run_handler ()
#18 0x00007f8a95f0226c in ap_invoke_handler ()
#19 0x00007f8a95f0ff00 in ap_process_request ()
#20 0x00007f8a95f0ce98 in ?? ()
#21 0x00007f8a95f08b28 in ap_run_process_connection ()
#22 0x00007f8a95f14e5a in ?? ()
#23 0x00007f8a95f15126 in ?? ()
#24 0x00007f8a95f15903 in ap_mpm_run ()
#25 0x00007f8a95eec9be in main ()
Calling something at NULL obviously doesn't look right…
Can anybody help fix this issue?
Best regards,
Stephan

Hi Stephan,
This is a bug in the unixODBC Driver Manager 2.3.0. Essentially, you can only have one HDBC per HENV. mod_perl must be creating the connections on the same HENV.
The bug appears to be fixed in version 2.3.1, but we have not yet certified that our driver works with 2.3.1. See the 2.3.1 release notes at
http://www.unixodbc.org/ where the last item says:
"Driver version was not being held when a second connection was made to the driver"
The Driver Manager would "forget" that we are an ODBC V3 driver and try interacting with us as an ODBC V2 driver for the second connection (which fails).
The workaround is to create a new HENV for each connection but I'm not sure if this is possible in mod_perl. -
GNU compiled app crashes with signal 11: Segmentation Fault
Hi Community,
I know this is not a GNU oriented forum, but maybe this is a common issue.
We have a C++ multi-threaded application running properly on Solaris 9 SPARC. Due to some issues, mostly related to the NICs, we needed to run the application on Solaris 10 x86. It compiles fine (always with GNU), but at run time the application crashes at different instructions with a signal 11.
It always crashes in a malloc call within the libstdc++.so.6
It was compiled with the options -mt and -lthread. Is this a known problem? Do you recommend any direction to start looking for a solution?
Please find attached the gdb output after the crash.
Thanks in advance, Pablo
quiterio{root}# gdb cord /usr/nguser/core
GNU gdb 6.6
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-pc-solaris2.10"...
Reading symbols from /lib/libsocket.so.1...done.
Loaded symbols for /lib/libsocket.so.1
Reading symbols from /export/home/mysql/mysql-5.0.51/lib/mysql/libmysqlclient.so.15...done.
Loaded symbols for /opt/mysql/mysql/lib/mysql/libmysqlclient.so.15
Reading symbols from /lib/libnsl.so.1...done.
Loaded symbols for /lib/libnsl.so.1
Reading symbols from /lib/librt.so.1...done.
Loaded symbols for /lib/librt.so.1
Reading symbols from /lib/libthread.so.1...
warning: Lowest section in /lib/libthread.so.1 is .dynamic at 00000074
done.
Loaded symbols for /lib/libthread.so.1
Reading symbols from /usr/local/lib/libmysqlpp.so.2...done.
Loaded symbols for /usr/local/lib/libmysqlpp.so.2
Reading symbols from /usr/lib/libz.so.1...done.
Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /usr/local/lib/libstdc++.so.6...done.
Loaded symbols for /usr/local/lib/libstdc++.so.6
Reading symbols from /lib/libm.so.2...done.
Loaded symbols for /lib/libm.so.2
Reading symbols from /usr/local/lib/libgcc_s.so.1...done.
Loaded symbols for /usr/local/lib/libgcc_s.so.1
Reading symbols from /lib/libc.so.1...done.
Loaded symbols for /lib/libc.so.1
Reading symbols from /lib/libresolv.so.2...done.
Loaded symbols for /lib/libresolv.so.2
Reading symbols from /lib/libaio.so.1...done.
Loaded symbols for /lib/libaio.so.1
Reading symbols from /lib/libmd.so.1...done.
Loaded symbols for /lib/libmd.so.1
Reading symbols from /export/home/mysql/mysql-5.0.51/lib/mysql/libmysqlclient_r.so.15...done.
Loaded symbols for /opt/mysql/mysql/lib/mysql/libmysqlclient_r.so.15
Reading symbols from /lib/libpthread.so.1...
warning: Lowest section in /lib/libpthread.so.1 is .dynamic at 00000074
done.
Loaded symbols for /lib/libpthread.so.1
Reading symbols from /lib/ld.so.1...done.
Loaded symbols for /lib/ld.so.1
Core was generated by `/export/home/egasco/cord/cord'.
Program terminated with signal 11, Segmentation fault.
#0 0xfebd4ad2 in t_splay () from /lib/libc.so.1
(gdb) bt
#0 0xfebd4ad2 in t_splay () from /lib/libc.so.1
#1 0xfebd49b0 in t_delete () from /lib/libc.so.1
#2 0xfebd46ea in realfree () from /lib/libc.so.1
#3 0xfebd42ee in _malloc_unlocked () from /lib/libc.so.1
#4 0xfebd4138 in malloc () from /lib/libc.so.1
#5 0x080bdf64 in PMData::addData (this=0x818ae70, header=
{_M_t = {_M_impl = {<std::allocator<std::_Rb_tree_node<std::pair<const std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >> = {<__gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<const std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >> = {<No data fields>}, <No data fields>}, _M_key_compare = {<std::binary_function<std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::basic_string<char, std::char_traits<char>, std::allocator<char> >,bool>> = {<No data fields>}, <No data fields>}, _M_header = {_M_color = std::_S_red, _M_parent = 0x81ade48, _M_left = 0x81c2310, _M_right = 0x81ae900}, _M_node_count = 8}}},
data=0x81c1a20 "v=0\r\no=50 2890844526 2890842807 IN IP4 192.168.1.245\r\ns=SDP seminar\r\nc=IN IP4 192.168.1.245\r\nt= 0 0\r\nm=audio 9092 RTP/AVP 8 18\r\n", dest=ONEP_SIP) at PMData.cpp:23
#6 0x080b3a52 in UserCall::process_initialization (this=0x81c2630, packet=0x81ad128) at UserCall.cpp:505
#7 0x080b056f in UserCall::process (this=0x81c2630, packet=0x81ad128) at UserCall.cpp:132
#8 0x080791d7 in ONEPPacketManager::processPacket () at ActionNotAllowException.h:15
#9 0x080aa159 in onep_processing_thread (arg=0x8127e88) at CordApplication.cpp:1317
#10 0xfec34672 in _thr_setup () from /lib/libc.so.1
#11 0xfec34960 in L3_doit () from /lib/libc.so.1
#12 0xfe7e2400 in ?? ()
#13 0x00000000 in ?? ()

A crash in malloc in multi-threaded code could be a bug in the system malloc or a failure to use a thread-safe malloc. The default Solaris malloc in /usr/lib/libc.so.* is thread-safe.
A crash in malloc can also be caused by a heap corruption. Probably the most common causes of heap corruption are
- writing beyond the bounds of a buffer or variable (off by one, for example)
- deleting the same object more than once
- using an invalid pointer:
--- uninitialized
--- pointing to a deleted object
--- pointing to an out-of-scope object
- failure to guard a critical region
- failing to declare shared objects as volatile -
LINUX wls 6.0 sp2 & jdk 131 - Unable to start the Server - Segmentation fault
Facing problems in starting a weblogic 6.0 server with service pack 2 for Linux.
OS: RedHat7.0
BEA WLS version : 6.0
Service pack : SP2
JDK version : jdk1.3.1
When starting the startWebLogic.sh, it fails to start up completely and ends in
a
"startWebLogic.sh line 142 13834 Segmentation fault".
( After taking in the valid password for the system).
Putting the verbose flag in the start-up script to see if it gave any clue, the last class loaded was weblogic.apache.xerces.utils.StringHasher.
Does anyone have ideas as to what the cause may be?
Is there a patch that is needed/known problem.
Any help would be appreciated.
Thanks
Shyam

Please read the Linux install notes for JDK 1.3.1.
http://java.sun.com/j2se/1.3/install-linux-sdk.html
I'll paste the relevant parts:
Known Problems
RedHat Linux 6.2 is the officially supported Linux platform for J2SDK 1.3.1.
Limited testing has been done on other Linux operating systems, and the
following are known problems on the non-supported platforms.
a.. If you use Red Hat Linux 7, we recommend version 7.1 rather than 7.0.
Limited testing has revealed problems when using J2SDK with Red Hat Linux
7.0, some of which are described below.
b.. The newer glibc-2.2.x libraries cannot correctly handle initial thread
stack sizes larger than 6 MB. This can cause a segmentation fault on some
Linux platforms that use the newer libraries. Such platforms include Red Hat
7.0, Mandrake 8.0, SuSe 7.2, and Debian 2.2. The problem will not occur on
Linux platforms that are using glibc-2.1.x such as Red Hat 6.1 and 6.2. It
will also not affect Red Hat 7.1 because it uses a different thread stack
layout. This problem is being tracked as bug 4466587.
Workaround - Use "ulimit -s 2048" in bash shell or "limit stacksize 2048"
in tcsh to limit the initial thread stack to 2 MB.
c.. When System.exit(int) is invoked on Red Hat 7.0, the program never
exits with a non-zero value. This problem is apparently due to a bug in the
exit function in libc.so library. To avoid this problem, use the supported
Red Hat release, version 6.2, or, if you want to use Red Hat 7, use version
7.1 rather than 7.0.
d.. On RedHat Linux 7.0, if you want to use the Classic VM rather than one
of the Java HotSpot VMs in J2SDK 1.3.1, you must download and install
glibc-2.2-9.i386.rpm file available at
http://www.redhat.com/support/errata/RHBA-2000-079.html. Do not install the
i686 files available on that same web page, as those will prevent proper
functioning of the Java HotSpot VMs in J2SDK 1.3.1.
e.. If you use RedHat Linux 7 Server, you must manually install
compat-libstdc++-6.21-2.9.0.0.i386.rpm to prevent "error while loading
shared libraries" when using the Java HotSpot VMs. This file is located in
the /RedHat/RPMS directory on the RedHat Linux 7 CD-ROM. You may also obtain
a copy of this file from http://rpmfind.net. To install the file, use this
command:
rpm --install compat-libstdc++-6.21-2.9.0.0.i386.rpm
It is not necessary to manually install
compat-libstdc++-6.21-2.9.0.0.i386.rpm if you are using RedHat Linux 7
Workstation.
f.. When using RedHat Linux versions other than 6.1, the font.properties
file may fail to display some Symbol/Dingbats characters properly on some
AWT components. To correct this, use this revised font.properties file to
replace the one at <JAVA_HOME>/jre/lib/.
g.. Caldera OpenLinux uses version 2.1.2-3 of glibc. Because that version
is not greater than or equal to 2.1.2-11, the Java 2 SDK's rpm installer
will fail during its dependency check. We recommend that you obtain an
updated version of the glibc library available from Caldera at the following
locations:
ftp.caldera.com:/pub/updates/eDesktop/2.4/current/RPMS
or
ftp.caldera.com:/pub/updates/eServer/2.3/current/RPMS
Regards,
Eric
"root" <[email protected]> wrote in message
news:[email protected]...
I am having the same problem; changing the bash stack size even to 16k didn't help.
>
RedHat 7.0, WebLogic 6.1, JDK 1.3.1; have bumped ss and mx as well as the bash
ulimit.
Any ideas? I see other posts on the same topic. Verbose load ends with the same
weblogic.apache.xerces.utils.StringHasher.class
as others have reported.
sundaram wrote:
Facing problems in starting a weblogic 6.0 server with service pack 2
for Linux.
>>
OS: RedHat7.0
BEA WLS version : 6.0
Service pack : SP2
JDK version : jdk1.3.1
When starting the startWebLogic.sh, it fails to start up completely and ends in
a
"startWebLogic.sh line 142 13834 Segmentation fault".
( After taking in the valid password for the system).
Putting the verbose flag in the start-up script to see if it gave any clue, the last class loaded was the weblogic.apache.xerces.utils.StringHasher.
>>
Does anyone have ideas as to what the cause maybe.
Is there a patch that is needed/known problem.
Any help would be appreciated.
Thanks
Shyam -
Dear Friends,
I am Installing a Oracle E-Biz R12.1.1 (64 bit) on Red Hat Enterprise Linux Server release 5.5 (Tikanga) 64-bit.
While installing R12.1.1, the DB Tier Configuration Upload failed. Also, when we checked from the server, sqlplus, lsnrctl, tnsping... everything gives a Segmentation Fault error.
All the RPM's are Installed.
Please let me know if there is any fix.
Regards,

Hi;
Please check log file for more details about error
How to locate the log files and troubleshoot RapidWiz for R12 [ID 452120.1]
Regards,
Helios -
Segmentation fault on rwconverter on linux redHat 4
Hi,
I get a segmentation fault when I run rwconverter (converting reports from Windows to Linux).
Any help is appreciated.
Thanks

Hi, this is the link on Metalink; I set noexec=off in grub.conf and then everything worked.
https://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=387148.1
Cause
Known issue with Red Hat 3 and 4 against the JDK 1.4.2 version installed with iAS/iDS 10.1.2.0.2.
The problem is caused by an underlying issue with the Linux operating system and how it handles java routines.
Require noexec=off.
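For reference, a sketch of where the flag goes in /boot/grub/grub.conf (the kernel version, disk, and root device below are placeholders; add noexec=off to your own existing kernel line rather than copying this one):

```
# illustrative grub.conf entry -- versions and devices are placeholders
title Red Hat Enterprise Linux AS (2.6.9-xx.EL)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-xx.EL ro root=/dev/VolGroup00/LogVol00 noexec=off
        initrd /initrd-2.6.9-xx.EL.img
```

A reboot is required for the kernel parameter to take effect.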
regards -
Limitation on number of roles in oracle menu or segmentation fault in menu
Hi All,
Is there any limitation on the number of roles we can have in an Oracle menu? We have around 300 roles created in our menu, and after that, if we try to add a new role, we are not able to compile the menu file. On Windows, Form Builder closes abruptly, and on Linux it gives a segmentation fault error. Has anyone faced this problem? If there is a solution, please share it.
Please let me know.
Thanks in advance!
Ashish

I just wonder how many forms you have that you needed to create 300 roles. We normally create one role for each set of users. That means you have 300 different sets of users for your application!