Segmentation error in background
Hello,
We have a problem creating a target group after a patch upgrade.
We want to create a target group using the infoset CRM_MKTTG_BP_BIRTHDATE. The filter that we apply is +1d.
If we build the TG in background, we get the error "Business partner does not exist" for every partner in the TG. On the other hand, if we look at the error log, the number of partners we obtain is correct. The program that generates the errors is CRM_MKTTG_TG_BUILD.
But if we build the TG directly, not in background, we get no errors and everything looks good.
As I said, we started getting this error after a patch upgrade.
The attachment shows the result of what I describe: the first run was online, and the second was in background, where every partner returned the error "The business partner does not exist."
Thanks.
Hi Juan,
From the screenshot I see that you are using the segment builder applet. May I know which version of CRM you are using? And when you say patch upgrade, to which patch level have you upgraded?
What you are trying to do here appears to be standard, so I would suggest you open an SAP message for this issue.
Hemanth
Similar Messages
-
Cannot create data store shared-memory segment error
Hi,
Here is some background information:
[ttadmin@timesten-la-p1 ~]$ ttversion
TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
Instance admin: ttadmin
Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
Group owner: ttadmin
Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
PL/SQL enabled.
[ttadmin@timesten-la-p1 ~]$ uname -a
Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
68719476736
[ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
MemTotal: 148426936 kB
MemFree: 116542072 kB
Buffers: 465800 kB
Cached: 30228196 kB
SwapCached: 0 kB
Active: 5739276 kB
Inactive: 25119448 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148426936 kB
LowFree: 116542072 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 164740 kB
Mapped: 39188 kB
Slab: 970548 kB
PageTables: 10428 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 90990676 kB
Committed_AS: 615028 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 274804 kB
VmallocChunk: 34359462519 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
extract from sys.odbc.ini
[cachealone2]
Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
DataStore=/u02/timesten/datastore/cachealone2/cachealone2
PermSize=14336
OracleNetServiceName=ttdev
DatabaseCharacterset=WE8ISO8859P1
ConnectionCharacterSet=WE8ISO8859P1
[ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
SwapTotal: 16777208 kB
Though we have around 140 GB of memory available and a 64 GB shmmax, we are unable to increase the PermSize to anything more than 14 GB. When I changed it to PermSize=15359, I got the following error.
[ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
Copyright (c) 1996-2009, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=cachealone2";
836: Cannot create data store shared-memory segment, error 28
703: Subdaemon connect to data store failed with error TT836
The command failed.
Done.
I am not sure why this is not working, considering we have 144 GB of RAM and a 64 GB shmmax allocated! Any help is much appreciated.
Regards,
Raj
Those parameters look OK for a 100 GB shared memory segment. Also check the following:
ulimit is a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user (the user who installed Oracle TimesTen) needs to be allocated enough lockable memory to load and lock your Oracle TimesTen shared memory segment.
This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
To view the current setting run the OS command
$ ulimit -l
and to set it to a value dynamically use
$ ulimit -l <value>
Once changed you need to restart the TimesTen master daemon for the change to be picked up.
$ ttDaemonAdmin -restart
Beware: sometimes ulimit is set in the instance administrator's "~/.bashrc" or "~/.bash_profile" file, which can override what is set in /etc/security/limits.conf.
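As a quick self-check, the ulimit comparison above can be scripted. This is only a sketch; the 14336 MB figure is the PermSize from the sys.odbc.ini extract earlier in the thread, and a real check would also add TempSize, LogBufMB, and overhead:

```shell
# Sketch: check whether the current locked-memory limit (memlock) is large
# enough to hold the TimesTen shared memory segment.
# 14336 MB is the PermSize from the DSN above; adjust for your database.
REQUIRED_KB=$((14336 * 1024))   # segment size in kB (ulimit -l reports kB)
CURRENT=$(ulimit -l)            # either "unlimited" or a number in kB

if [ "$CURRENT" = "unlimited" ] || [ "$CURRENT" -ge "$REQUIRED_KB" ]; then
    echo "memlock limit looks sufficient"
else
    echo "memlock limit too low: $CURRENT kB < $REQUIRED_KB kB"
fi
```

If this reports the limit as too low, fix limits.conf and restart the TimesTen daemon as described above.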
If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
-linuxLargePageAlignment 2
So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
PermSize+TempSize+LogBufMB+64MB Overhead
For example consider a TimesTen database of size:
PermSize=250000 (unit is MB)
TempSize=100000
LogBufMB=1024
Total Memory = 250000+100000+1024+64 = 351088MB
The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
351088/2 = 175544
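The arithmetic above can be put into a small shell snippet; the sizes are the example values from this post, not recommendations:

```shell
# Sketch: derive vm.nr_hugepages from the example TimesTen database sizes.
PERM_SIZE=250000   # PermSize in MB (example value from above)
TEMP_SIZE=100000   # TempSize in MB
LOG_BUF=1024       # LogBufMB
OVERHEAD=64        # fixed overhead in MB

TOTAL_MB=$((PERM_SIZE + TEMP_SIZE + LOG_BUF + OVERHEAD))
HUGEPAGE_MB=2      # Hugepages pagesize is 2048 kB = 2 MB
NR_HUGEPAGES=$((TOTAL_MB / HUGEPAGE_MB))

echo "vm.nr_hugepages=$NR_HUGEPAGES"   # prints vm.nr_hugepages=175544
```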
As user root edit the /etc/sysctl.conf file
Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
vm.nr_hugepages=175544
Add/modify vm.hugetlb_shm_group = 600
This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
$ id
$ uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
As user root edit the /etc/security/limits.conf file
Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example this would be 351088*1024=359514112KB
oracle hard memlock 359514112
oracle soft memlock 359514112
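The same total converts to the memlock value in kB like this (again using the example figure of 351088 MB from above):

```shell
# Sketch: compute the memlock entries for /etc/security/limits.conf.
# limits.conf expects kB, so multiply the MB total by 1024.
TOTAL_MB=351088
MEMLOCK_KB=$((TOTAL_MB * 1024))

printf 'oracle hard memlock %s\n' "$MEMLOCK_KB"
printf 'oracle soft memlock %s\n' "$MEMLOCK_KB"
```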
THIS IS VERY IMPORTANT: for the above changes to take effect, you need to either shut down the BI software environment including TimesTen and reboot, or issue the following OS command to make the changes permanent.
$ sysctl -p
Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
Check that Hugepages has been set up correctly by looking for HugePages_Total:
$ cat /proc/meminfo | grep Huge
Based on the example values above you would see the following:
HugePages_Total: 175544
HugePages_Free: 175544
-
Error in background job for program RSGET_SMSY error message No active job
Hello!
I would like to set up the Change Request Management functionality in SAP Solution Manager.
The only red warning when executing the Test button in tcode SOLAR_ADMIN_PROJECT is:
<b>Error in background job for program RSGET_SMSY error message No active job found</b>
Can someone please tell me how to solve this problem?
(tcodes, technical steps)
Thank you very much!
regards
Thom
Thank you very much!
Can you also help me with the warning "Errors occurred during synchronization of the system landscape" in tcode SOLAR_PROJECT_ADMIN --> System landscape --> Change requests --> "Refresh"?
The second issue is that I cannot see any projects in tcode /TMWFLOW/CMSCONF, as per the SPRO step "Set Project Assignment of Requests as Mandatory".
Thank you!
regards -
Hi Friends,
I have a doubt about ALE/IDocs: how do we resolve segment errors, if we are having an error in a segment?
Best Regards,
Narasimha Rao.
Hi,
What kind of error are you facing?
After creating a segment, it must be RELEASED before it can be used in an IDoc.
Please post your error.
Thanks,
Nithya. -
Our facility is running AE CS6 ver 11.0.0.378 on 8-core Mac Pros with 32 GB RAM running OS X 10.6.8,
using Thinkbox Deadline 5.2 for distributed rendering to other 8-core Mac Pros with 32 GB RAM.
The issue is that we are in crunch mode trying to deliver 4K comps (soon to be 8K comps), and we are experiencing substantial delays of 10-20 minutes after AERENDER starts, but before it starts doing any fruitful rendering.
We are reading from and writing to Quicktime movie files (which normally works just fine for us) and thus the entire frame range needs
to get computed on a single host.
The symptom is that many comp jobs (but not all) stall for a while with no apparent activity before spitting out this message, but then happily and swiftly complete each frame in a second or less with no errors:
0: STDOUT: PROGRESS: There is an error in background rendering so switching to foreground rendering after 0 frames completed out of total 576 frames.
0: STDOUT: PROGRESS: 00001 (1): 20 Min, 16 Sec
0: STDOUT: PROGRESS: 00002 (2): 1 Seconds
0: STDOUT: PROGRESS: 00003 (3): 0 Seconds
0: STDOUT: PROGRESS: 00004 (4): 1 Seconds
0: STDOUT: PROGRESS: 00005 (5): 2 Seconds
0: STDOUT: PROGRESS: 00006 (6): 1 Seconds
Given that the delay seems to be related to the number of frames (350 frames creates a 10-minute stall and 576 frames creates a 20-minute stall), it seems like it's trying to do something for every frame in background mode before it gives up and does them quickly in foreground mode. During this time, there doesn't seem to be any significant CPU, network, or I/O activity. I looked for command-line args that might force AERENDER into foreground mode, but didn't find anything.
Here's what the digested submission args look like.
0: INFO: Startup Directory: "/Applications/Adobe After Effects CS6"
0: INFO: Process Priority: BelowNormal
0: INFO: Process Affinity: default
0: INFO: Process is now running
0: STDOUT: PROGRESS: 12/6/12 10:37:54 AM PST: Starting composition sc20_assem_R_v57 .
0: STDOUT: PROGRESS: Render Settings: Best Settings
0: STDOUT: PROGRESS: Quality: Best
0: STDOUT: PROGRESS: Resolution: Full
0: STDOUT: PROGRESS: Size: 1200 x 860
0: STDOUT: PROGRESS: Proxy Use: Use No Proxies
0: STDOUT: PROGRESS: Effects: Current Settings
0: STDOUT: PROGRESS: Disk Cache: Read Only
0: STDOUT: PROGRESS: Color Depth: Current Settings
0: STDOUT: PROGRESS: Frame Blending: On for Checked Layers
0: STDOUT: PROGRESS: Field Render: Off
0: STDOUT: PROGRESS: Pulldown: Off
0: STDOUT: PROGRESS: Motion Blur: On for Checked Layers
0: STDOUT: PROGRESS:
0: STDOUT: PROGRESS: Solos: Current Settings
0: STDOUT: PROGRESS: Time Span: Custom
0: STDOUT: PROGRESS: Start: 00001
0: STDOUT: PROGRESS: End: 00576
0: STDOUT: PROGRESS: Duration: 00576
0: STDOUT: PROGRESS: Frame Rate: 23.976 (comp)
0: STDOUT: PROGRESS: Guide Layers: All Off
0: STDOUT: PROGRESS: Skip Existing Files: Off
0: STDOUT: PROGRESS:
0: STDOUT: PROGRESS: Output Module: APR_422HQ - Millions
0: STDOUT: PROGRESS: Output To: /Volumes/Renders/Scenes/20/sc20_assem_R_v57_r1_2k.mov
0: STDOUT: PROGRESS: Format: QuickTime
0: STDOUT: PROGRESS: Output Info: Apple ProRes 422 (HQ)
0: STDOUT: PROGRESS: Output Info: Spatial Quality = 100
0: STDOUT: PROGRESS: Include: Project Link
0: STDOUT: PROGRESS: Output Audio: Off
0: STDOUT: PROGRESS: Channels: RGB
0: STDOUT: PROGRESS: Depth: Millions of Colors
0: STDOUT: PROGRESS: Color: Premultiplied
0: STDOUT: PROGRESS: Resize: -
0: STDOUT: PROGRESS: Crop: -
0: STDOUT: PROGRESS: Final Size: 1200 x 860
0: STDOUT: PROGRESS: Profile: -
0: STDOUT: PROGRESS: Embed Profile:
0: STDOUT: PROGRESS:
0: STDOUT: PROGRESS: Post-Render Action: None
We're under the gun to get this production delivered, and these delays are killing us. I worry that when we switch to 8K comps the "fruitless delay time" will just get longer.
I'm currently trying to correlate the plug-in assortments in effect for the shots that exhibit this behaviour. Clearly, in "foreground mode" the machines are capable of rendering the shot quickly. We've speculated that possibly some plugins may be GPU or OpenGL accelerated (these machines have ATI Radeon HD 5770 cards), despite the fact that AERENDER probably doesn't require special graphics cards.
Anyone have any ideas, especially if there is some way to force foreground mode from the start?
Regards,
Jay, Production IT Support.
Just to follow up: our compositing supervisor was still convinced that running with MP was the way to go, so I worked with him to run some benchmarks on medium and heavy shots that we determined would trigger the issue in -MP mode.
The test scene was about 50 frames in length, and we ran it on the same hosts with -MP on and -MP off. We also ran a version with bundled frames (10 frames per host/task) and another where we did all 50 frames on a single host (with and without MP).
Interestingly, in every one of these 4 scenarios, the version with -MP on was 2-15% faster (despite the 10-11 minute delay at the start in MP mode).
His conclusion was that it's still faster with -MP mode ON, and he asked that I continue to look for patches or theories on how we might get around the delay.
Have read links:
http://forums.creativecow.net/thread/2/1025525
http://helpx.adobe.com/content/help/en/after-effects/using/memory-storage1.html
Regards,
-Jay- -
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who can help with this, or that the TimesTen support team monitors this forum.
I am currently evaluating TimesTen for a global investment organisation. We have a large data warehouse, where we utilise summary views and query rewrite, but we have isolated some data that we would like to store in memory and then report on through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution, but we need to ensure that there are no limits on maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. That sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30 GB (the limit on our UAT environment is 32 GB). The ultimate goal is to see if we can store about 50-60 GB in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our data store to hold 8 GB of data, but want to increase this. I am assuming that the following error message is due to us not having changed /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32 GB of memory and the 12 processors on the box?
It's quite a big deal for us to bounce the UAT Unix box, so I want to be sure that I have factored in all the changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a data store which is able to fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team, but need to complete this process before contacting Oracle directly, so help with the above request would speed this process up.
The current /etc/system settings are below, and I have put the current machine's settings as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32 GB on the box?
Machine
## I have contrasted the minimum prerequisites for TimesTen and then contrasted it with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590. The hexadecimal translates into 536,870,912
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)
Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 GB, thereby limiting the maximum size of a single shared memory segment (and hence TimesTen datastore) to 8 GB. You need to increase this to a suitable value (maybe 32 GB in your case). While you are doing that, it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
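For illustration, a 32 GB shmmax expressed in the hex form that /etc/system uses can be derived as below (the 32 GB target is just the suggestion above; pick whatever value suits your box):

```shell
# Sketch: express a 32 GB shmmax in the hex form used in /etc/system.
SHMMAX_BYTES=$((32 * 1024 * 1024 * 1024))
printf 'set shmsys:shminfo_shmmax = 0x%x\n' "$SHMMAX_BYTES"
# prints: set shmsys:shminfo_shmmax = 0x800000000
```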
You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
Regards, Chris -
Lenovo Yoga segmentation error with img
Hi All
Since upgrading to 11.3 and the latest patch FRU, I see a segmentation error every time I try to run img.
I've had a look through the logs, and the only interesting thing I can see is this dmesg output:
[ 92.066731] device-mapper: uevent: version 1.0.3
[ 92.066793] device-mapper: ioctl: 4.23.0-ioctl (2012-07-25) initialised: [email protected]
[ 93.943781] tntfs: disagrees about version of symbol module_layout
[ 104.041654] tntfs: disagrees about version of symbol module_layout
[ 104.259330] bootsplash 3.2.0-2010/03/31: looking for picture...
[ 104.259333] bootsplash: ...found, freeing memory.
[ 104.259382] bootsplash: status on console 0 changed to off
[ 115.802395] tntfs: disagrees about version of symbol module_layout
[ 115.869695] img[1980]: segfault at 0 ip b6f44fb8 sp bfe59158 error 6 in libc-2.11.3.so[b6ecb000+161000]
[ 121.269815] tntfs: disagrees about version of symbol module_layout
[ 121.337676] img[2030]: segfault at 0 ip b6edbfb8 sp bffbaaa8 error 6 in libc-2.11.3.so[b6e62000+161000]
I've already logged an SR with Novell. Anyone got any ideas?
Regards
John
Finally found out what's causing the problem.
We use Lenovo USB 2.0 Ethernet cables for imaging our ultrabook-style Lenovos. It seems that the latest imaging kernel doesn't like this: every time they are plugged in, img throws a segmentation error.
I'm using the following in z_maint.cfg to load the driver. Has anyone else seen this?
newid="0x8086 0x08b2, asix"
Regards
John -
JAI can't load jpeg image, premature end of data segment error
Hi,
I have a customer sending in a JPEG image, and I tried to use JAI to open it. I can view the JPEG file on Windows and open it using Windows Picture Viewer, Paint, etc., but the program throws the following error:
Corrupt JPEG data: premature end of data segment
Error: Cannot decode the image for the type :
Occurs in: com.sun.media.jai.opimage.CodecRIFUtil
java.io.IOException: Premature end of input file
at com.sun.media.jai.codecimpl.CodecUtils.toIOException(CodecUtils.java:76)
at com.sun.media.jai.codecimpl.JPEGImageDecoder.decodeAsRenderedImage(JPEGImageDecoder.java:48)
at com.sun.media.jai.opimage.CodecRIFUtil.create(CodecRIFUtil.java:88)
at com.sun.media.jai.opimage.JPEGRIF.create(JPEGRIF.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.media.jai.FactoryCache.invoke(FactoryCache.java:122)
at javax.media.jai.OperationRegistry.invokeFactory(OperationRegistry.java:1674)
at javax.media.jai.ThreadSafeOperationRegistry.invokeFactory(ThreadSafeOperationRegistry.java:473)
at javax.media.jai.registry.RIFRegistry.create(RIFRegistry.java:332)
at com.sun.media.jai.opimage.StreamRIF.create(StreamRIF.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.media.jai.FactoryCache.invoke(FactoryCache.java:122)
at javax.media.jai.OperationRegistry.invokeFactory(OperationRegistry.java:1674)
at javax.media.jai.ThreadSafeOperationRegistry.invokeFactory(ThreadSafeOperationRegistry.java:473)
at javax.media.jai.registry.RIFRegistry.create(RIFRegistry.java:332)
at com.sun.media.jai.opimage.FileLoadRIF.create(FileLoadRIF.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.media.jai.FactoryCache.invoke(FactoryCache.java:122)
at javax.media.jai.OperationRegistry.invokeFactory(OperationRegistry.java:1674)
at javax.media.jai.ThreadSafeOperationRegistry.invokeFactory(ThreadSafeOperationRegistry.java:473)
at javax.media.jai.registry.RIFRegistry.create(RIFRegistry.java:332)
at javax.media.jai.RenderedOp.createInstance(RenderedOp.java:819)
at javax.media.jai.RenderedOp.createInstance(RenderedOp.java:770)
PlanarImage image = img.createInstance();
Thanks a lot for the help,
I'm having this issue too - did you find any more information on this?
-
JNI - Unhandled exception - Type=Segmentation error vmState=0x00000000
I am getting the error below. Can someone help me solve this problem?
Unhandled exception
Type=Segmentation error vmState=0x00000000
J9Generic_Signal_Number=00000004 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
Handler1=0026A0A4 Handler2=002AE854 InaccessibleAddress=4F525249
EDI=4F525245 ESI=243C800E EAX=00008000 EBX=A25A90F4
ECX=08844FD4 EDX=A25A90F0
EIP=00FD59A4 ES=C040007B DS=C040007B ESP=005FC06C
EFlags=00210246 CS=00000073 SS=0000007B EBP=005FC094
Module=/home/nr/ibm-java2-i386-50/jre/bin/libj9gc23.so
Module_base_address=00F9B000
Target=2_30_20080314_17962_lHdSMr (Linux 2.6.18-8.el5)
CPU=x86 (1 logical CPUs) (0x2b3a2000 RAM)
JVMDUMP006I Processing Dump Event "gpf", detail "" - Please Wait.
JVMDUMP007I JVM Requesting System Dump using '/home/nr/workspace/PingProbe/core.20080619.211936.6371.0001.dmp'
JVMDUMP010I System Dump written to /home/nr/workspace/PingProbe/core.20080619.211936.6371.0001.dmp
JVMDUMP007I JVM Requesting Snap Dump using '/home/nr/workspace/PingProbe/Snap.20080619.211936.6371.0002.trc'
JVMDUMP010I Snap Dump written to /home/nr/workspace/PingProbe/Snap.20080619.211936.6371.0002.trc
JVMDUMP007I JVM Requesting Java Dump using '/home/nr/workspace/PingProbe/javacore.20080619.211936.6371.0003.txt'
JVMDUMP010I Java Dump written to /home/nr/workspace/PingProbe/javacore.20080619.211936.6371.0003.txt
JVMDUMP013I Processed Dump Event "gpf", detail "".
Thanks in advance
Narasimha Rao Konjeti
"I know segmentation error means physical memory is a problem" - No, it doesn't. It is a virtual memory problem. It means the application has used an address that isn't mapped into the process's address space, i.e. a wild pointer value or a major array-index-out-of-bounds issue. It is a VM bug. Report it to the Bug Parade or as shown in the dump.
-
Hi,
I am using NW2004s BI7, Windows 2003 Server, Oracle 10g, and DB02 shows segment errors and some other errors. Could you please tell me how and where I can fix this problem?
Screenshot
http://www.flickr.com/photos/38842895@N04/4583962877/sizes/o/
http://www.flickr.com/photos/38842895@N04/4584591004/sizes/o/in/photostream/
Regards,
Hello Angeline,
The errors/warnings are expected in DB02 where check conditions have been set in table DBCHECKORA; see SAP note 483856. The table can be updated with transaction DB17. The real issue is whether the errors/warnings are "true". You can check this yourself: for example, in the second screenshot you have missing stats for table SAPSR3./BIC/B0000011000, and also ORA-01653, which indicates a tablespace overflow that can be solved using OSS note 3155.
It is also a good idea to have the latest BRTOOLS patch level installed, to avoid bugs which could lead to incorrect or unjustified alerts in the DB02/DBACOCKPIT transaction. At any time you can deactivate or delete checks via DB17 (Active = "N") in the DBCHECKORA table. On 10g, if you have problems with incorrect alerts, you can also reset the entire contents of table DBCHECKORA to the standard SAP settings by importing the SQL script dbcheckora10_oltp.sql or dbcheckora10_olap.sql. For more information, see note 403704 (the scripts are attached to that note).
Best Regards
Rachel -
Post Author: aschreiber
CA Forum: Data Integration
We are running ETLs that are getting segmentation errors. The Job Server is on an HP-UX Itanium server (HP-UX bizdev1 B.11.23 U ia64) and the Designer is on Windows XP. We are on version 11.5.3 of Data Integrator. Has anyone encountered this error, and if so, how was it resolved?
24611 1 SYS-170101 9/10/07 2:36:23 PM System Exception <Segmentation Violation> occurred in the context <|Dataflow HNS_DF_LOAD_SERVICE_ORDER|Transform24611 1 SYS-170101 9/10/07 2:36:23 PM QRY_ToJoinTwoSources_factServiceOrder>
Post Author: wdaehn
CA Forum: Data Integration
Can you watch the al_engine process memory via the "top" command on HP-UX? It might be that you are running out of memory (2 GB is the limit). Especially given the name of your query, you should consider checking the join rank of your two source tables inside the DF; maybe DI caches the wrong one. In other words, is one source small and the other big? Then give the big one a join rank of 100 and the other a join rank of 1 (everything > 1 is fine, 0 means "automatic").
CFIMAGE gives "attempt to read data outside of exif segment" error when trying to resize jpg
When trying to resize a JPG using cfimage, I receive the following error: "attempt to read data outside of exif segment". Any ideas on what causes this or how to fix it? It only happens with certain images; a sample image is attached (this photo is subject to copyright restrictions and should be treated appropriately). Thanks for any help.
Resize Code attached.
<!--- Set some defaults used by each image type, unless you override them --->
<cfparam name="jpgQuality" default=".8" />
<cfparam name="defaultInterpolation" default="bicubic" />
<cfparam name="defaultBackground" default="black" />
<!--- Set values for each image type --->
<cfparam name="thumbMaxWidth" default="" /> <!--- leave blank to allow any width (forced to size by height) --->
<cfparam name="thumbMaxHeight" default="60" /> <!--- leave blank to allow any height (forced to size by width, above) --->
<cfparam name="thumbQuality" default="1" /> <!--- number from 0 - 1, 1 being the best --->
<cfparam name="thumbFixedSize" default="false" /> <!--- you MUST set both MaxWidth & MaxHeight to use FixedSize --->
<cfparam name="thumbBackground" default="#defaultBackground#" /> <!--- color of background if fixed size is used --->
<cfparam name="thumbInterpolation" default="#defaultInterpolation#" /> <!--- Interpolation method used for resizing (HUGE performance hit depending on what is used) --->
<cfparam name="normalMaxWidth" default="476" />
<cfparam name="normalMaxHeight" default="324" />
<cfparam name="normalQuality" default="#jpgQuality#" />
<cfparam name="normalFixedSize" default="true" />
<cfparam name="normalBackground" default="#defaultBackground#" />
<cfparam name="normalInterpolation" default="#defaultInterpolation#" />
<cfparam name="zoomMaxWidth" default="670" />
<cfparam name="zoomMaxHeight" default="380" />
<cfparam name="zoomQuality" default="#jpgQuality#" />
<cfparam name="zoomFixedSize" default="true" />
<cfparam name="zoomBackground" default="#defaultBackground#" />
<cfparam name="zoomInterpolation" default="#defaultInterpolation#" />
<!--- Set values for folder paths and the watermark image --->
<cfparam name="originalFolder" default="path to folder for original images" />
<cfparam name="thumbFolder" default="path to folder for thumbnail images" />
<cfparam name="normalFolder" default="path to folder for large images" />
<cfparam name="zoomFolder" default="path to folder for large resized images" />
<cfparam name="watermarkImage" default="" />
<cfparam name="wmXPosition" default="50" /> <!--- value is a number from 0 - 100, 50 = centered --->
<cfparam name="wmYPosition" default="65" />
<cffunction name="genWatermarkImage">
<cfargument name="ImageFile" required="true" />
<cfargument name="MaxWidth" required="true" />
<cfargument name="MaxHeight" required="true" />
<cfargument name="StorePath" required="true" />
<cfargument name="FixedSize" required="true" type="Boolean" />
<cfargument name="Background" required="true" />
<cfargument name="Quality" required="true" />
<cfargument name="Interpolation" required="true" />
<cfargument name="AddWatermark" required="true" type="Boolean" />
<cfif IsImageFile(originalFolder & ImageFile)>
<cfset original = ImageNew(originalFolder & ImageFile) />
<cfset originalHeight = ImageGetHeight(original) />
<cfset originalWidth = ImageGetWidth(original) />
<cfset outfile = StorePath & ImageFile />
<cfset watermark = ImageNew(watermarkImage) />
<cfset ImageScaleToFit(original,MaxWidth,MaxHeight,Interpolation) />
<cfset new_w = ImageGetWidth(original) />
<cfset new_h = ImageGetHeight(original) />
<cfif FixedSize>
<cfset normal = ImageNew("",MaxWidth,MaxHeight,"rgb",Background) />
<cfset ImagePaste(normal,original,int((MaxWidth-new_w)/2),int((MaxHeight-new_h)/2)) />
<cfif AddWatermark>
<cfset ImagePaste(normal,watermark,( int(ImageGetWidth(normal)) - int(ImageGetWidth(watermark)) -3),( int(ImageGetHeight(normal)) - int(ImageGetHeight(watermark)) -3) )/>
</cfif>
<cfset ImageWrite(normal,outfile,Quality) />
<cfelse>
<cfif AddWatermark>
<cfset ImagePaste(original,watermark,( int(ImageGetWidth(original)) - int(ImageGetWidth(watermark)) -3), (int(ImageGetHeight(original)) - int(ImageGetHeight(watermark)) -3) )/>
</cfif>
<cfset ImageWrite(original,outfile,Quality) />
</cfif>
<cfelse>
<cfreturn "Image file not an image!" />
</cfif>
</cffunction>
<cfset zoomError = genWatermarkImage(Filename,zoomMaxWidth,zoomMaxHeight,zoomFolder,zoomFixedSize,zoomBackground,zoomQuality,zoomInterpolation,dowatermark) />
Hmm, that was my best shot.
1) Do you have all of the latest updates applied?
2) Did you try all of the work-arounds listed in the comments? Granted, some of them are definite hacks.
3) Just to cover all the bases, do you get the same result with both ImageResize() and ImageScaleToFit()?
If all else fails, you could always go the Java route and try some Java code to do the resize. Obviously not ideal, but it is worth a shot. IIRC there is a thread around here somewhere with the CF/Java code, but that was from before the switch in forums, and I will be darned if I can find it right now!
Update: I will play around with the sample image you posted tomorrow, just to see if I can come up with anything. -
Error in background process.
I have a report that lets you choose between executing in background or not. As far as I can see, the same subroutines are executed in both cases, though if you choose to execute directly, a list is shown on screen and you are asked to press Exec to have the last part of the program executed.
I am getting an error when choosing background, and so far I get a time-out error whenever I try to debug.
Is it possible that a variable is not being cleared because the process is in background?
I paste the report's code below, in case someone sees a difference I'm missing...
Thanks in advance,
S.
CODE:
REPORT ZFI_GENERACION_UTE_AF
MESSAGE-ID ZFI
NO STANDARD PAGE HEADING
LINE-SIZE 116.
* Include declarations
INCLUDE .
INCLUDE ZFI_BATCH_INPUT.
INCLUDE ZFI_UTE_TOP_AF.
* Selection screen definition
SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
PARAMETERS: P_KOKRS LIKE TKA01-KOKRS OBLIGATORY.
SELECT-OPTIONS: P_BURKSN FOR SKB1-BUKRS OBLIGATORY.
SELECT-OPTIONS: P_GSBERN FOR BSIS-GSBER.
SELECT-OPTIONS: P_BURKSU FOR SKB1-BUKRS.
SELECT-OPTIONS: P_GSBERU FOR BSIS-GSBER.
PARAMETERS: P_GAJHR LIKE BKPF-GJAHR OBLIGATORY,
S_MONAT LIKE BSIS-MONAT OBLIGATORY,
P_BLDAT LIKE BKPF-BLDAT OBLIGATORY.
SELECTION-SCREEN END OF BLOCK B1.
* Block 1: processing type (online / background)
SELECTION-SCREEN BEGIN OF BLOCK BLOQUE1 WITH FRAME TITLE TEXT-004.
PARAMETERS:
X_ONLINE NO-DISPLAY, " RADIOBUTTON GROUP TYPE, " Run online
X_BATCH NO-DISPLAY. " RADIOBUTTON GROUP TYPE. " Run in background
SELECTION-SCREEN END OF BLOCK BLOQUE1.
SELECTION-SCREEN BEGIN OF BLOCK BLOQUE3 WITH FRAME TITLE TEXT-016.
PARAMETERS:
P_PARC RADIOBUTTON GROUP TIPO,
P_TOTAL RADIOBUTTON GROUP TIPO DEFAULT 'X'.
SELECTION-SCREEN END OF BLOCK BLOQUE3.
PARAMETERS: NUM_DOCS TYPE I DEFAULT '1000'.
PARAMETERS: P_CODIGO TYPE ZCODIGO_NEGOCIO.
* Include declarations
INCLUDE ZFI_UTES_F001_AF.
* PROGRAM LOGIC
* Initialization
INITIALIZATION.
PERFORM INICIALIZAR_DATOS.
* Main processing
START-OF-SELECTION.
PERFORM INI_CABECERA.
* Online processing
CALL FUNCTION 'SAPGUI_PROGRESS_INDICATOR'
EXPORTING
PERCENTAGE = 0
TEXT = TEXT-011.
CLEAR CONTABILIZADO.
PERFORM: INICIALIZAR_ESTRUCTURAS,
CONFECCION_ASIENTOS,
* For impure UTEs the following documents are created.
* In company code ZZZI, entries are only generated for the accounts
* in table ZHKONT_IMPURAS for company codes ZZZ8 and ZZZ9.
* add sis3e 14/01/05
CONFECCION_ASIENTO_NOD_FERR,
* end add sis3e 14/01/05
CREAR_ASIENTOS_CONTRARIOS.
CLEAR SALIR.
* End-of-processing actions
END-OF-SELECTION.
PERFORM VISUALIZAR_TI_APUNTES.
IF NOT SY-BATCH IS INITIAL.
PERFORM CONTABILIZAR_DOCUMENTOS.
ENDIF.
* List header definitions
TOP-OF-PAGE.
PERFORM BATCH-HEADING(RSBTCHH0).
PERFORM CABECERA_INICIAL.
TOP-OF-PAGE DURING LINE-SELECTION.
PERFORM BATCH-HEADING(RSBTCHH0).
* Definition and handling of actions depending on user input
AT USER-COMMAND.
CASE SY-UCOMM.
WHEN 'EJEC'. " Execute postings
PERFORM CONTABILIZAR_DOCUMENTOS.
WHEN 'ENDE'.
* Exit
LEAVE PROGRAM.
WHEN 'ABR'.
* Cancel
LEAVE PROGRAM.
ENDCASE.
* Input parameter validation
AT SELECTION-SCREEN.
KOKRS = P_KOKRS.
* Controlling area currency
SELECT SINGLE WAERS FROM TKA01 INTO TKA01-WAERS
WHERE KOKRS = KOKRS.
MONEDA_FERR = TKA01-WAERS.
IF CONTROL_SELECTION = '1'.
PERFORM VALIDACIONES_SCREEN_SELECCION.
CONTROL_SELECTION = 2.
ENDIF.
Hi,
Did you mean that the 'last part' of the program is executed after pressing a button on the application toolbar?
If so, even when you execute in background the list should appear, and after that you need to see what the program is doing.
Rgds,
Sandeep -
Error in background job for VT04 - RV56TRGN
Hello!
The selection criteria are fine - we want every shipping point to be used to create every variant. We also have TPPs (transportation planning points) mirroring the plant codes. The job log selects some deliveries which are good and tries to process them - in the end it found X number of deliveries and processed them, but did not write the shipment to the database. Digging further, it shows the error: the system could not determine the TPP for the plant in the delivery line item.
When I look at the variant, it doesn't mention the TPP at all - only the shipping point and all the other criteria mainly associated with the deliveries and their statuses. I am wondering how the system knows which TPP to use for each shipping point when no TPP is assigned in the variant. Is this a default configuration I am missing? Or something else? Appreciate any leads...
Thanks a lot.
PS: In addition to creating the shipments, we also need to set their status to 'complete' in the background job, as it is a purely non-value-added activity for us, but we need the shipments for the cost documents. Any ideas how I can go about that?
Thanks. -
Error in background job while running FTP_CONNECT
Dear Friends,
When I try to run the FTP_CONNECT FM in background,
the FTP server is not connected, and the error message 'Attempt to set up connection to <ftpserver> failed' is thrown.
I am using the RFC destination SAPFTPA and still it is not working!
My program was executing successfully in foreground with SAPFTP.
Please help.
Thanks and Regards,
Vidya Sagar.
Hi,
The program below creates a file on the FTP server by taking the data from an internal table.
To see whether the file was created on the FTP server,
use the function module FTP_SERVER_TO_R3.
Check the code below.
You can write the same program with the function module FTP_SERVER_TO_R3 instead of
FTP_R3_TO_SERVER to check the existence of the file which was already created.
tables: t777a. "Building Addresses
* Internal table for the building table.
data: begin of it_t777a occurs 0,
build like t777a-build, "Building
stext like t777a-stext, "Object Name
cname like t777a-cname, "Address Supplement (c/o)
ort01 like t777a-ort01, "City
pstlz like t777a-pstlz, "Postal Code
regio like t777a-regio, "Region (State, Province, County)
end of it_t777a.
* Internal table taking all fields of the above table in one line,
* separated by '|' (pipe).
data: begin of it_text occurs 0,
text(131),
end of it_text.
Constants: c_key type i value 26101957,
c_dest type rfcdes-rfcdest value 'SAPFTPA'.
data: g_dhdl type i, "Handle
g_dlen type i, "pass word length
g_dpwd(30). "For storing password
* Selection screen
SELECTION-SCREEN BEGIN OF BLOCK blk1 WITH FRAME TITLE TEXT-001.
parameters: p_user(30) default 't777a' obligatory,
p_pwd(30) default 't777a' obligatory,
p_host(64) default 'XXX.XXX.XX.XXX' obligatory.
SELECTION-SCREEN END OF BLOCK blk1.
SELECTION-SCREEN BEGIN OF BLOCK blk2 WITH FRAME TITLE TEXT-002.
parameters: p_file like rlgrap-filename default 't777a_feed.txt'.
SELECTION-SCREEN END OF BLOCK blk2.
* Make the password invisible on the selection screen.
at Selection-screen output.
loop at screen.
if screen-name = 'P_PWD'.
screen-invisible = '1'.
modify screen.
endif.
endloop.
g_dpwd = p_pwd.
* Start of selection
start-of-selection.
* Fetch the data records from table T777A.
select build stext cname ort01 pstlz regio
from t777a
into table it_t777a.
* Sort the internal table by build.
if not it_t777a[] is initial.
sort it_t777a by build.
endif.
* Concatenate all the fields of the above internal table records in one line,
* separated by '|' (pipe).
loop at it_t777a.
concatenate it_t777a-build it_t777a-stext it_t777a-cname
it_t777a-ort01 it_t777a-pstlz it_t777a-regio
into it_text-text separated by '|'.
append it_text.
clear it_text.
endloop.
* Get the length of the password.
g_dlen = strlen( g_dpwd ).
* The function module below encrypts the password.
CALL FUNCTION 'HTTP_SCRAMBLE'
EXPORTING
SOURCE = g_dpwd "Actual password
SOURCELEN = g_dlen
KEY = c_key
IMPORTING
DESTINATION = g_dpwd. "Encyrpted Password
*Connects to the FTP Server as specified by user.
Call function 'SAPGUI_PROGRESS_INDICATOR'
EXPORTING
text = 'Connecting to FTP Server'.
* The function module below connects to the FTP server.
* It accepts only encrypted passwords.
* It provides a handle to perform different
* operations on the FTP server via FTP commands.
call function 'FTP_CONNECT'
EXPORTING
user = p_user
password = g_dpwd
host = p_host
rfc_destination = c_dest
IMPORTING
handle = g_dhdl
EXCEPTIONS
NOT_CONNECTED = 1.
if sy-subrc ne 0.
format color col_negative.
write:/ 'Error in Connection'.
else.
write:/ 'FTP Connection is opened '.
endif.
**Transferring the data from internal table to FTP Server.
CALL FUNCTION 'FTP_R3_TO_SERVER'
EXPORTING
HANDLE = g_dhdl
FNAME = p_file
CHARACTER_MODE = 'X'
TABLES
TEXT = it_text
EXCEPTIONS
TCPIP_ERROR = 1
COMMAND_ERROR = 2
DATA_ERROR = 3
OTHERS = 4.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ELSE.
write:/ 'File has created on FTP Server'.
ENDIF.
Call function 'SAPGUI_PROGRESS_INDICATOR'
EXPORTING
text = 'File has created on FTP Server'.
* Disconnect from the FTP server.
CALL FUNCTION 'FTP_DISCONNECT'
EXPORTING
HANDLE = g_dhdl.
* Disconnect the RFC destination.
CALL FUNCTION 'RFC_CONNECTION_CLOSE'
EXPORTING
destination = c_dest
EXCEPTIONS
others = 1.
Regards,
Kumar Bandanadham