Perl performance on Solaris 10
Hi all,
I executed this script (http://www.metacard.com/perlbench.html) on a Solaris 10 and a Linux machine. The Solaris box took around 15 seconds to complete the test, whereas the Linux box took only 8 seconds.
Is there any way to increase the performance of Perl on Solaris 10?
Regards,
uttam
Yes, I doubt this has much to do with the OS unless they are exactly the same machines with equally good device support.
Ultra 5, 333 MHz UltraSPARC, 2 GB RAM, Solaris 10 = 45 seconds
Blade 1000, 2x 750 MHz UltraSPARC, 2 GB RAM, Solaris 10 = 22 seconds
AMD 3200, 1 GB RAM, Fedora = 18 seconds
Ultra 20 M2, 1.8 GHz Opteron, 2 GB RAM, Solaris 10 = 9 seconds
X2200 M2, 2x 2.8 GHz Opterons, 12 GB RAM, Solaris 10 = 6 seconds
I like how the ancient Blade 1000 is not too far behind the much more modern Fedora box. Such a lovely machine.
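When comparing boxes like this, it helps to take several timings rather than one, since the first run pays for cold disk caches. A minimal sketch of a timing harness (the benchmark filename in the example is a placeholder for wherever the script was saved):

```shell
# time_runs: run a command several times and print wall-clock seconds
# per run, so a cold-cache first run shows up as an outlier.
time_runs() {
    cmd=$1
    runs=$2
    i=1
    while [ "$i" -le "$runs" ]; do
        start=`date +%s`
        $cmd > /dev/null 2>&1
        end=`date +%s`
        echo "run $i: `expr $end - $start`s"
        i=`expr $i + 1`
    done
}

# e.g. time_runs "perl perlbench.pl" 3
```

Comparing the minimum of three runs per box is a fairer number than a single timing.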
Similar Messages
-
Using perl DBI on Solaris 10 gives core dump
I am using perl 5.8.4 which comes along with Solaris 10.
I have installed DBI-1.58.
Even a test command is not working.
perl -MDBI -e 'print "$DBI::VERSION\n";'
Bus Error (core dumped)
truss perl -MDBI -e 'print "$DBI::VERSION\n";'
shows
stat("/usr/local/lib/libc.so.1", 0xFFBFE5D0) Err#2 ENOENT
mprotect(0xFEED0000, 104171, PROT_READ|PROT_WRITE|PROT_EXEC) = 0
mprotect(0xFEED0000, 104171, PROT_READ|PROT_EXEC) = 0
munmap(0xFF370000, 8192) = 0
brk(0x000A2470) = 0
brk(0x000A4470) = 0
brk(0x000A4470) = 0
brk(0x000A6470) = 0
brk(0x000A6470) = 0
brk(0x000A8470) = 0
brk(0x000A8470) = 0
brk(0x000AA470) = 0
brk(0x000AA470) = 0
brk(0x000AC470) = 0
Incurred fault #5, FLTACCESS %pc = 0xFEED44CC
siginfo: SIGBUS BUS_ADRALN addr=0x00000001
Received signal #10, SIGBUS [default]
siginfo: SIGBUS BUS_ADRALN addr=0x00000001
$perl -v
The output shows that it is compiled with 64-bit integers (archname sun4-solaris-64int). Is this causing the problem?
Summary of my perl5 (revision 5 version 8 subversion 4) configuration:
Platform:
osname=solaris, osvers=2.10, archname=sun4-solaris-64int
uname='sunos localhost 5.10 sun4u sparc SUNW,Ultra-2'
config_args=''
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef use5005threads=undef useithreads=undef usemultiplicity=undef
useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
use64bitint=define use64bitall=undef uselongdouble=undef
usemymalloc=n, bincompat5005=undef
Compiler:
cc='cc', ccflags ='-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -xarch=v8 -D_TS_ERRNO',
optimize='-xO3 -xspace -xildoff',
cppflags=''
ccversion='Sun WorkShop', gccversion='', gccosandvers=''
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=87654321
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
ivtype='long long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
alignbytes=8, prototype=define
Linker and Libraries:
ld='cc', ldflags =''
libpth=/lib /usr/lib /usr/ccs/lib
libs=-lsocket -lnsl -ldl -lm -lc
perllibs=-lsocket -lnsl -ldl -lm -lc
libc=/lib/libc.so, so=so, useshrplib=true, libperl=libperl.so
gnulibc_version=''
Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-R /usr/perl5/5.8.4/lib/sun4-solaris-64int/CORE'
cccdlflags='-KPIC', lddlflags='-G'
Characteristics of this binary (from libperl):
Compile-time options: USE_64_BIT_INT USE_LARGE_FILES
Locally applied patches:
22667 The optree builder was looping when constructing the ops ...
22715 Upgrade to FileCache 1.04
22733 Missing copyright in the README.
22746 fix a coredump caused by rv2gv not fully converting a PV ...
22755 Fix 29149 - another UTF8 cache bug hit by substr.
22774 [perl #28938] split could leave an array without ...
22775 [perl #29127] scalar delete of empty slice returned garbage
22776 [perl #28986] perl -e "open m" crashes Perl
22777 add test for change #22776 ("open m" crashes Perl)
22778 add test for change #22746 ([perl #29102] Crash on assign ...
22781 [perl #29340] Bizarre copy of ARRAY make sure a pad op's ...
22796 [perl #29346] Double warning for int(undef) and abs(undef) ...
22818 BOM-marked and (BOMless) UTF-16 scripts not working
22823 [perl #29581] glob() misses a lot of matches
22827 Smoke [5.9.2] 22818 FAIL(F) MSWin32 WinXP/.Net SP1 (x86/1 cpu)
22830 [perl #29637] Thread creation time is hypersensitive
22831 improve hashing algorithm for ptr tables in perl_clone: ...
22839 [perl #29790] Optimization busted: '@a = "b", sort @a' ...
22850 [PATCH] 'perl -v' fails if local_patches contains code snippets
22852 TEST needs to ignore SCM files
22886 Pod::Find should ignore SCM files and dirs
22888 Remove redundant %SIG assignments from FileCache
23006 [perl #30509] use encoding and "eq" cause memory leak
23074 Segfault using HTML::Entities
23106 Numeric comparison operators mustn't compare addresses of ...
23320 [perl #30066] Memory leak in nested shared data structures ...
23321 [perl #31459] Bug in read()
Built under solaris
Compiled at Jul 26 2005 05:26:55
@INC:
/usr/perl5/5.8.4/lib/sun4-solaris-64int
/usr/perl5/5.8.4/lib
/usr/perl5/site_perl/5.8.4/sun4-solaris-64int
/usr/perl5/site_perl/5.8.4
/usr/perl5/site_perl
/usr/perl5/vendor_perl/5.8.4/sun4-solaris-64int
/usr/perl5/vendor_perl/5.8.4
/usr/perl5/vendor_perl
Other details:
file /usr/bin/perl
/usr/bin/perl: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, stripped
I tried installing the DBI module using perlgcc as well:
perlgcc Makefile.PL
make
make test
make install
$uname -a
SunOS twirl 5.10 Generic_118822-25 sun4u sparc SUNW,Ultra-80
Please help me out.
Try this guy's list; he maintains an archive of Solaris packages.
http://www.ibiblio.org/pub/packages/solaris/sparc/html/creating.solaris.packages.html
Of course, getting it directly from Sun would be better. -
NFS performance with Solaris 10
Hello,
We have been playing with one of the X4200s running S10U2 (or snv_50, for that matter) and are getting terrible NFS performance numbers. Initially we suspected it was just the ZFS filesystem on the back end (which it was, though zil_disable made it a lot better), but even after exploring a little I am getting terrible numbers for NFS backed by UFS. Using afio to unpack an archive onto the disk gives:
Local:
afio: 432m+131k+843 bytes read in 263 seconds. The operation was successful.
Remote:
afio: 432m+131k+843 bytes read in 1670 seconds. The operation was
successful.
I have raised the ncsize to 1000000, and upped the server threads to 1024.
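For reference, those two knobs live in different places on Solaris 10; a sketch of the settings described above (the values are the ones tried here, not recommendations):

```shell
# Fragment of /etc/system -- DNLC (name cache) size, effective after reboot:
set ncsize=1000000

# Fragment of /etc/default/nfs -- number of NFS server daemon threads:
NFSD_SERVERS=1024
```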
The same thing on a Linux box (ext3) turns in local times of 100 seconds and remote times of 180 seconds. The differences between the local and remote numbers are just crazy. The difference with ZFS is way worse:
Local zfs:
afio: 432m+131k+843 bytes read in 137 seconds. The operation was successful.
NFS -> ZFS:
afio: 432m+131k+843 bytes read in 2428 seconds. The operation was
successful.
I have started looking into dtrace for tracking the problem, but don't have much to report yet.
Any suggestions appreciated.
Ask this on the Solaris forum, not the Java Networking forum.
Edit: typo -
OpenGL/Elite3D Performance in Solaris 10?
Hi Folks,
I've searched but can't seem to find anything on this.
I have an Ultra-2 with 2x300MHz, 640MB of RAM, and an Elite3d-M6 framebuffer. Life is good, and Solaris 10 is great.
But - I use a brain modeling application that converts MRI images of the skull to 3d maps of the brain. Under Solaris 9, performance was awesome - fully accelerated, very smooth rotation and scrolling of the 3d models.
Using the same software under Solaris 10, graphics performance is very poor. I've also noticed that the OpenGL plugin for Xmms (which ran great under Solaris 9) performs terribly in Solaris 10. The problem is the same in both CDE and JDS.
To me, this says 'OpenGL' problem. But I've done a full Solaris 10 install (and several re-installs, for other reasons), and performance is equally poor with all versions. OpenGL really seems to be installed and working properly, but performance is really bad. Does anyone have any ideas about this one?
Thanks,
tim
I found out about Update 4 very shortly after posting this, and I have now upgraded. Unfortunately, the problem still persists.
-
Performance on Solaris 10 - Operating system paging
Has anyone experienced performance issues after an upgrade from Solaris 9 to Solaris 10 on SAP systems with limited memory?
We have many systems that are on servers with 4 Gig of memory and ran well on Solaris 9. After an upgrade to Solaris 10 we are experiencing very high OS system paging rates. The response times of the SAP systems are very poor when this occurs. It seems to take very little load to cause this.
I realize that more memory, or decreases in the Oracle or SAP memory parameters, would solve this, but I am wondering if there is anything on the Solaris OS side that could resolve it?
Thanks,
Dan
DISM can be used, but in the global zone only (according to the Sun document "Best Practices for Running Databases in Solaris Containers", the proc_lock_memory privilege, which is required to run the ora_dism_ process, is not available in a non-global zone).
The doc I have is from 2005, so I don't know whether the Sun recommendations have been updated since then.
In order to activate DISM (if you are in a global zone), sga_max_size should be set larger than the sum of the SGA components (db_cache_size, shared_pool_size, ...).
Also look for the Sun Blueprint "Dynamic Reconfiguration and Oracle 9i Dynamic Resizable SGA" on http://www.sun.com/blueprints
If you use ISM because you are in a non-global zone, you can use the Oracle parameter lock_sga to ensure the SGA is locked into RAM, and useism_for_pga = true to ensure the PGA is loaded into RAM.
Make sure you have enough RAM to hold the filesystem cache (OS memory), Oracle memory, and application memory.
Make sure your PGA and SGA are correctly sized, since you won't be able to change the ISM allocation dynamically (see v$shared_pool_advice, v$db_cache_advice, v$pga_target_advice, ...).
Take the usual precautions:
- have a successful backup first
- do the change on a test machine
- and/or ask your vendor before proceeding
Other Doc to read ...
Note 697483 - "Oracle Dynamic SGA on Solaris" (recommends reading Sun doc no. 230653)
Note 724713 - parameter settings for Solaris 10, here is an extract :
Only one parameter from SAP note 395438 should remain in the file
/etc/system
set rlim_fd_cur=8192
As described in SunSolve document 215536, the "Large Page Out Of the Box" (LPOOB) feature of the Solaris 10 memory management, first implemented in Solaris 10 1/06 (Update 1), can lead to performance problems when the system runs out of large chunks of free contiguous memory. This is because the Solaris 10 kernel by default will attempt to relocate memory pages to free up space for creating larger blocks of contiguous memory. Known symptoms are high %system CPU time in vmstat, high number of cross calls in mpstat, and Oracle calling mmap(2) to /dev/zero for getting more memory.
Memory page relocation for satisfying large page allocation requests can be disabled by setting the following Solaris kernel parameter in /etc/system
set pg_contig_disable=1
This will not switch off the LPOOB feature. Large memory pages will still be used when enough contiguous free memory is available, so the benefits of the feature remain.
Note 870652 - Installation of SAP in a Solaris Zone
Note 1246022 - Support for SAP applications in Solaris Zones
Edited by: Emmanuel TCHENG on Oct 13, 2009 12:02 PM -
Poor Elite3D OpenGL performance in Solaris 10
I'm running an old Ultra10 with an Elite3D-m3. Since I installed Solaris 10, I've had a very slow desktop and awful performance in 3D applications. This tells me that there's something wrong between the video card and OpenGL, like it's in software graphics mode. How can I tell if I'm actually running in software OpenGL or if it's the proper performance for the card? Do I need special drivers to run it since the card is so old?
I found out about Update 4 very shortly after posting this, and I have now upgraded. Unfortunately, the problem still persists.
-
Poor I/O Performance on Solaris - v1.4.1_01
Does anyone have any comments on the following?
It's an I/O analysis done to determine which Java
methods might be used to replace an existing C++
platform-specific file re-compression subsystem.
The system has to handle up to 200,000 files per
day (every day).
Java IO test results for converting ZERO_ONE compressed
files to standard compressed files.
Java 1.4.1, 12-04-2002
The input dataset contains 623,230,991 bytes in 1391 files.
The input files are in ZERO_ONE compression format.
For all tests:
1) an input data file was opened in buffered mode.
2) the data was read from the input and expanded
(byte by byte).
3) the expanded data was written to a compressing
output stream as it was created.
4) repeat 1 thru 3 for each file.
64K buffers were used for all input and output streams.
Note: Items marked with "**" hang at random on Solaris
(2.7 & 2.8) when processing a large number of small files. They always hang on BufferedInputStream.read().
There may be a deadlock situation with the 'process
reaper' because we're calling 'exec()' and 'waitFor()'
so quickly. The elapsed times for those items are
estimates based on the volume of data processed up to
the point where the process hung. This 'bug' has been
reported to Sun.
-- elapsed time --
NT Solaris 2.7 Method
n/a 18 min Current C++ code:
fopen(r) -> system(compress)
19 min 19 min ** BufferedInputStream -> exec(compress)
29 min 21 min 1) BufferedInputStream -> file
2) exec(compress file)
24 min 42 min ** BufferedInputStream -> exec(gzip)
77 min 136 min BufferedInputStream -> GZIPOutputStream
77 min -- BufferedInputStream -> ZipOutputStream
The performance of GZIPOutputStream and ZipOutputStream
makes them useless for any Production system. The 2x
performance degradation on Solaris (vs NT) for these two
streams is surprising. Does this imply that the 'libz'
on Solaris is flawed? Notice that whenever 'libz' is
involved in the process stream (exec(gzip),
GZIPOutputStream, ZipOutputStream) the elapsed time
climbs dramatically on Solaris.
Re-submitted performance matrix with formatting retained.
Note: for the "-> system()" and "-> exec()" methods, we write to the
STDIN of the spawned process.
-- elapsed time --
NT Solaris 2.7 Method
n/a 18 min Current Solaris C++ code:
fopen(r) -> system("compress -c >aFile")
19 min 19 min ** BufferedInputStream -> exec("compress -c >aFile")
29 min 21 min 1) BufferedInputStream -> "aFile"
2) exec("compress aFile")
24 min 42 min ** BufferedInputStream -> exec("gzip -c aFile")
77 min 136 min BufferedInputStream -> GZIPOutputStream("aFile")
77 min -- BufferedInputStream -> ZipOutputStream("aFile") -
I get this error when I run QuickTime Server for Solaris on one of my boxes. However, the same thing works on another server with the same OS and patch levels.
HTTP/1.0 500 Perl execution failed Server: DSS 3.0 Admin Server/1.0 Date: Tue, 22 Apr 2003 15:04:31 GMT Content-Type: text/html Connection: close
Error - Perl execution failed
syntax error at (eval 17) line 2, at EOF
Does that happen both for booting from CDROM (i.e. the
"Software 1 of 2" CD) and from the DCA boot floppy?
Maybe a more recent Solaris 8 DCA floppy helps?
http://www.sun.com/bigadmin/hcl/drivers/dca_diskettes/
Yes, it shows the same error even when I try from the DCA floppy.
I used DCA 0401 and DCA 1001. With DCA 0401, during device identification the screen turns green and the PC hangs; with DCA 1001 the PC starts generating a continuous beep sound.
- abhijit nath -
Hi All
Can anybody point out a link where I can download Perl for Solaris?
I just want to compile and install Perl.
Thanks in advance
kanki
If you want to compile it yourself you should get it from:
http://www.perl.com/download.csp#sourcecode
.7/M. -
My ISP is running on Solaris. I have placed a Perl (.pl) script which will fire off a mail. He says sendmail is configured and that I can create any folder, place my script there, and the system will recognize it automatically. As we know, web servers normally have a 'cgi-bin' directory where the .pl or .cgi files are placed for execution. My ISP tells me he has configured the server in such a way that I can create a 'cgi' directory anywhere and run it from the browser. I am unable to do it. Tell me how I can do it.
Unfortunately this stupid perl script isn't that
easy. It uses Rational cqperl, then some libraries
and modules I don't understand. I can't even find
the actual code that uses FLEXlm.
That's why I was asking what is different between java
1.2 and java 1.4. Is there some way that Runtime
acts differently?
Not in any fundamental way that I am aware of.
The fact that the script is running at all though tells me that the problem is on the PERL side of things. I suspect that some of those modules and libraries are not being loaded correctly or the same way on both systems.
I fear the answer is that you need to figure out what components the script needs to run and make sure that it has them under all circumstances. This is like a classpath problem except it is with your PERL script so you need to solve it as a PERL problem.
Again, if it wasn't running at all that would be one concern, but it is running, just not finding all its bits... a problem of that nature strongly indicates that the issue is happening once the process has already been spawned (correctly) and so is out of the realm of the Java part of your program. -
Compiling Apache with PERL module on Solaris 8
Hi there,
After Richard's reply, I found the Apache sources on installation CD 2/2, and after a while I successfully compiled Apache WITH ITS STANDARD modules using the following commands:
$ ./configure --prefix=/usr/apache \
--enable-module=most \
--enable-shared=max \
--with-layout=Solaris
$ make
$ make install
But the above "configure" does not compile the companion mod_perl. So, I tried:
... $ ./configure --prefix=/usr/apache \
--enable-module=most \
--enable-shared=/usr/src/apache/mod_perl/src/modules/perl/mod_perl.c \
--add-module=max \
--enable-shared=max \
--with-layout=Solaris
but the above command just copies "mod_perl.c" to the directory
"/usr/src/apache/src/modules/extra"
missing the corresponding *.h
Then trying
$ make
The compilation fails because all the
files in the directory
/usr/perl5/5.00503/sun4-solaris/CORE
are not found.
Any Hint?
Thanks in advance
César
Have you tried downloading the source from apache.org and compiling it?
I've installed Apache on Sol8 many times, but always from apache.org, not from the CD.
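For what it's worth, mod_perl 1.x is not normally added through Apache's configure at all; its own Makefile.PL copies the module sources (headers included) into the Apache tree and drives the build. A sketch, assuming the sources are unpacked under /usr/src/apache as in the paths above:

```shell
# Build mod_perl 1.x statically into Apache 1.3 via the APACI route;
# APACHE_SRC points at the Apache source tree relative to mod_perl's dir.
cd /usr/src/apache/mod_perl
perl Makefile.PL APACHE_SRC=../src DO_HTTPD=1 USE_APACI=1 EVERYTHING=1
make
make test
make install
```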
alphademon.com -
WebLogic 6.0 performance on Solaris (Excessive polling)
We have a problem with SunOS 5.8, multi (2) CPU, 1GB memory, WebLogic 6.0 sp1 and
java 1.3.0.
While running some benchmarks we realized that the tests run much faster (25%-40%
faster) on a NT Pentium III 800 MHz desktop with 256MB RAM than on our Sun Ultra-250
with the above configuration.
So, we decided to do some profiling by turning on hprof (-Xrunhprof:cpu=samples,depth=12)
and realized that over 50% of the time is spent in polling the sockets (weblogic.socket.PosixSocketMuxer.poll).
On NT, only 0.23% time is spent on the same activity. I do not know if on NT WebLogic
uses a different IPC model (select vs. interrupt) but the time taken on Solaris
doing polls is outrageous. I am wondering if anybody else out there has experienced
the same problem? Any insights you have to share on this will be appreciated.
TIA,
-Anil Singhal
The Xprof output is misleading - this does not mean that 50% of
the CPU is spent on polling sockets. It simply means that 50% of the thread samples
were in a polling method (probably waiting, not running).
Yes, I believe that an 800MHz PIII would blow away an ES-250, which runs at less
than 300MHz. So you have 2 CPUs -> 600MHz. Then
(800-600)/800 = 25%, which is what you get.
Mike
"Michael Girdley" <----> wrote:
Are you using the performance pack? If not, you should.
Michael Girdley
BEA Systems
Learning WebLogic? http://learnweblogic.com
-
WebLogic 6.0 performance on Solaris vs NT
Has anybody tried comparing the performance of WebLogic 6.0 sp1 on Solaris (SunOS
5.8) with NT 4.0? Please post your experiences.
TIA,
-Anil
The Xprof output is misleading - this does not mean that 50% of the CPU is spent on polling sockets. It simply means that 50% of the thread samples were in a polling method (probably waiting, not running).
Yes, I believe that an 800MHz PIII would blow away an ES-250, which runs at less than 300MHz. So you have 2 CPUs -> 600MHz. Then (800-600)/800 = 25%, which is what you get.
Mike
-
We have planned to run a MySQL database on a Sun SPARC box with Solaris 9, but compared to a small Linux box the performance seems to be bad. Our test is with only one user and no additional load on the machines. The Linux box is about 5 times faster (response time) than the Sun box. My questions are:
- will it get better on Solaris 10?
- can I tune MySQL or Solaris for better performance?
-
ORACLE 10g PERFORMANCE ON SOLARIS 10
Hi all,
On a Sun Fire V890 we have installed Oracle 10g Release 2 on Solaris 10.
prstat -a command shows :
NPROC USERNAME SIZE RSS MEMORY TIME CPU
105 root 9268M 6324M 20% 1:21:57 0.4%
59 oracle 24G 22G 71% 0:04:33 0.1%
2 nobody4 84M 69M 0.2% 0:11:32 0.0%
2 esbuser 13M 9000K 0.0% 0:00:46 0.0%
1 smmsp 7560K 1944K 0.0% 0:00:00 0.0%
4 daemon 12M 7976K 0.0% 0:00:00 0.0%
and top utility shows :
last pid: 8639; load avg: 0.09, 0.09, 0.09; up 2+06:05:29 17:07:50
171 processes: 170 sleeping, 1 on cpu
CPU states: 98.7% idle, 0.7% user, 0.7% kernel, 0.0% iowait, 0.0% swap
Memory: 32G phys mem, 22G free mem, 31G swap, 31G free swap
Therefore from prstat we come to know that the memory used by Oracle is 71%,
whereas top says 31.25% is used.
Which one is true in this scenario?
Shall we go ahead and trust the top utility?
Thanks in advance.
In this case top is more accurate. prstat pretends that all the memory used by each Oracle process is used only by that process. But lots of the memory used by Oracle is shared between several processes. prstat counts that shared memory over and over for each process, resulting in the higher figure.
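A toy calculation shows how the double-counting inflates the number. The 20 GB shared segment and the process count of 59 are round illustrative figures, not measurements from this system:

```shell
# 59 Oracle processes each mapping the same 20 GB SGA: summing the
# per-process RSS counts the segment 59 times, while the real footprint
# is the segment once plus each process's small private memory.
procs=59
shared_gb=20
naive_gb=`expr $procs \* $shared_gb`
echo "naive per-process sum: ${naive_gb} GB"
echo "actual shared footprint: ${shared_gb} GB"
```

On Solaris, running pmap -x on one of the Oracle shadow processes makes the split between shared and private mappings explicit for a single process.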
http://forum.java.sun.com/thread.jspa?threadID=5114263&tstart=105
Regards,
[email protected]
http://www.HalcyonInc.com