Reg. Segment error
Hi Friends,
I have a doubt about ALE/IDocs: how do we resolve segment errors when there is an error in a segment?
Best Regards,
Narasimha Rao.
Hi,
What kind of error are you facing?
After creating a segment, it must be RELEASED before it can be used in an IDoc.
Please post your error.
Thanks,
Nithya.
Similar Messages
-
Cannot create data store shared-memory segment error
Hi,
Here is some background information:
[ttadmin@timesten-la-p1 ~]$ ttversion
TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
Instance admin: ttadmin
Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
Group owner: ttadmin
Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
PL/SQL enabled.
[ttadmin@timesten-la-p1 ~]$ uname -a
Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
68719476736
[ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
MemTotal: 148426936 kB
MemFree: 116542072 kB
Buffers: 465800 kB
Cached: 30228196 kB
SwapCached: 0 kB
Active: 5739276 kB
Inactive: 25119448 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148426936 kB
LowFree: 116542072 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 164740 kB
Mapped: 39188 kB
Slab: 970548 kB
PageTables: 10428 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 90990676 kB
Committed_AS: 615028 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 274804 kB
VmallocChunk: 34359462519 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
extract from sys.odbc.ini
[cachealone2]
Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
DataStore=/u02/timesten/datastore/cachealone2/cachealone2
PermSize=14336
OracleNetServiceName=ttdev
DatabaseCharacterset=WE8ISO8859P1
ConnectionCharacterSet=WE8ISO8859P1
[ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
SwapTotal: 16777208 kB
Though we have around 140GB of memory available and 64GB in shmmax, we are unable to increase the PermSize to anything more than 14GB. When I changed it to PermSize=15359, I got the following error.
[ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
Copyright (c) 1996-2009, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=cachealone2";
836: Cannot create data store shared-memory segment, error 28
703: Subdaemon connect to data store failed with error TT836
The command failed.
Done.
I am not sure why this is not working, considering we have got 144GB RAM and 64GB shmmax allocated! Any help is much appreciated.
Regards,
Raj

Those parameters look ok for a 100GB shared memory segment. Also check the following:
ulimit - a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user, the user who installed Oracle TimesTen needs to be allocated enough lockable memory resource to load and lock your Oracle TimesTen shared memory segment.
This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
To view the current setting run the OS command
$ ulimit -l
and to set it to a value dynamically use
$ ulimit -l <value>
Once changed you need to restart the TimesTen master daemon for the change to be picked up.
$ ttDaemonAdmin -restart
Beware: sometimes ulimit is set in the instance administrator's ~/.bashrc or ~/.bash_profile, which can override what's set in /etc/security/limits.conf.
If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
-linuxLargePageAlignment 2
So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
PermSize+TempSize+LogBufMB+64MB Overhead
For example consider a TimesTen database of size:
PermSize=250000 (unit is MB)
TempSize=100000
LogBufMB=1024
Total Memory = 250000+100000+1024+64 = 351088MB
The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
351088/2 = 175544
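This back-of-envelope calculation can be sketched in shell. The sizes below are the hypothetical example values from above, not readings from a live system:

```shell
# Example database sizes in MB, from the worked example above
PERM_MB=250000
TEMP_MB=100000
LOGBUF_MB=1024
OVERHEAD_MB=64

# Total memory needed, then divide by the 2MB Hugepages pagesize
TOTAL_MB=$((PERM_MB + TEMP_MB + LOGBUF_MB + OVERHEAD_MB))
NR_HUGEPAGES=$((TOTAL_MB / 2))

echo "Total memory: ${TOTAL_MB}MB"      # 351088MB
echo "vm.nr_hugepages=${NR_HUGEPAGES}"  # 175544
```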
As user root edit the /etc/sysctl.conf file
Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
vm.nr_hugepages=175544
Add/modify vm.hugetlb_shm_group = 600
This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
$ id
uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
As user root edit the /etc/security/limits.conf file
Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example this would be 351088*1024=359514112KB
oracle hard memlock 359514112
oracle soft memlock 359514112
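As a sanity check, the MB-to-KB memlock arithmetic from the step above can be reproduced in shell (351088 is the example total from the Hugepages calculation):

```shell
TOTAL_MB=351088                    # example total database size in MB from above
MEMLOCK_KB=$((TOTAL_MB * 1024))    # memlock in limits.conf is expressed in KB

printf 'oracle hard memlock %d\n' "$MEMLOCK_KB"   # 359514112
printf 'oracle soft memlock %d\n' "$MEMLOCK_KB"
```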
THIS IS VERY IMPORTANT: in order for the above changes to take effect you need to either shut down the BI software environment (including TimesTen) and reboot, or issue the following OS command to make the changes permanent.
$ sysctl -p
Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
Check that Hugepages has been set up correctly by looking for HugePages_Total:
$ cat /proc/meminfo | grep Huge
Based on the example values above you would see the following:
HugePages_Total: 175544
HugePages_Free: 175544
-
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
I am currently evaluating TimesTen for a global investment organisation. We currently have a large Datawarehouse, where we utilise summary views and query rewrite, but have isolated some data that we would like to store in memory, and then be able to
report on it through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the
only limit on a 64bit system is the actual physical memory on the box. Sounds good, but we want to prove it since we would like to see how the application scales when we store about 30gb (the limit on our UAT environment is 32gb). The ultimate goal is to
see if we can store about 50-60gb in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our Data Store to store 8gb of data, but want to increase this. I am assuming that the following error message is due to us not changing the /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32gb of memory and the 12 processors on the box?
It's quite a big deal for us to bounce the UAT unix box, so I want to be sure that I have factored in all changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a Data Store which is able to fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team and need to complete this process before contacting Oracle directly; help with the above request would speed this process up.
The current /etc/system settings are below, and I have put in the current machine's settings as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32gb on the box?
Machine
## I have listed the minimum prerequisites for TimesTen and contrasted them with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590. The hexadecimal translates into 536,870,912
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)

Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 Gb, thereby limiting the maximum size of a single shared memory segment (and hence a TimesTen datastore) to 8 Gb. You need to increase this to a suitable value (maybe 32 Gb in your case). While you are doing that it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
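Putting this advice together with the recommended values quoted earlier in the thread, an /etc/system along these lines would cover a 32 GB datastore. This is only a sketch based on the numbers above (0x800000000 = 34,359,738,368 bytes = 32 GB); validate it against your own sizing before rebooting:

```
* file descriptor limits raised to the recommended values
set rlim_fd_cur = 4096
set rlim_fd_max = 4096
* semaphore parameters currently below the recommended minimums
set semsys:seminfo_semmsl = 512
set semsys:seminfo_semmns = 10240
set semsys:seminfo_semmnu = 10240
* maximum shared memory segment size raised to 32 GB
set shmsys:shminfo_shmmax = 0x800000000
```

Parameters where the machine already exceeds the recommended minimum (semmni, shmseg) are deliberately left out, so they are not accidentally lowered.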
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
Regards, Chris -
Lenovo Yoga segmentation error with img
Hi All
Since upgrading to 11.3 and the latest patch FRU,
I see a segmentation error every time I try to run img.
I've had a look through the logs, and the only interesting thing I can see is this dmesg output:
[ 92.066731] device-mapper: uevent: version 1.0.3
[ 92.066793] device-mapper: ioctl: 4.23.0-ioctl (2012-07-25) initialised: [email protected]
[ 93.943781] tntfs: disagrees about version of symbol module_layout
[ 104.041654] tntfs: disagrees about version of symbol module_layout
[ 104.259330] bootsplash 3.2.0-2010/03/31: looking for picture...
[ 104.259333] bootsplash: ...found, freeing memory.
[ 104.259382] bootsplash: status on console 0 changed to off
[ 115.802395] tntfs: disagrees about version of symbol module_layout
[ 115.869695] img[1980]: segfault at 0 ip b6f44fb8 sp bfe59158 error 6 in libc-2.11.3.so[b6ecb000+161000]
[ 121.269815] tntfs: disagrees about version of symbol module_layout
[ 121.337676] img[2030]: segfault at 0 ip b6edbfb8 sp bffbaaa8 error 6 in libc-2.11.3.so[b6e62000+161000]
I've already logged a #SR with novell anyone got any ideas?
Regards
John

Finally found out what's causing the problem.
We use Lenovo USB 2.0 Ethernet cables for imaging our ultrabook-style Lenovos.
It seems that the latest imaging kernel doesn't like this.
Every time they are plugged in, the result is a segmentation error from img.
I'm using the following in z_maint.cfg to load the driver. Has anyone else seen this?
newid="0x8086 0x08b2, asix"
Regards
John -
JAI can't load jpeg image, premature end of data segment error
Hi,
I have a customer sending in a jpeg image and I tried to use JAI to open the image. But the program throws the following error:
However, I can see the jpeg file on Windows, and open it using Windows Picture/Paint etc software.
Corrupt JPEG data: premature end of data segment
Error: Cannot decode the image for the type :
Occurs in: com.sun.media.jai.opimage.CodecRIFUtil
java.io.IOException: Premature end of input file
at com.sun.media.jai.codecimpl.CodecUtils.toIOException(CodecUtils.java:76)
at com.sun.media.jai.codecimpl.JPEGImageDecoder.decodeAsRenderedImage(JPEGImageDecoder.java:48)
at com.sun.media.jai.opimage.CodecRIFUtil.create(CodecRIFUtil.java:88)
at com.sun.media.jai.opimage.JPEGRIF.create(JPEGRIF.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.media.jai.FactoryCache.invoke(FactoryCache.java:122)
at javax.media.jai.OperationRegistry.invokeFactory(OperationRegistry.java:1674)
at javax.media.jai.ThreadSafeOperationRegistry.invokeFactory(ThreadSafeOperationRegistry.java:473)
at javax.media.jai.registry.RIFRegistry.create(RIFRegistry.java:332)
at com.sun.media.jai.opimage.StreamRIF.create(StreamRIF.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.media.jai.FactoryCache.invoke(FactoryCache.java:122)
at javax.media.jai.OperationRegistry.invokeFactory(OperationRegistry.java:1674)
at javax.media.jai.ThreadSafeOperationRegistry.invokeFactory(ThreadSafeOperationRegistry.java:473)
at javax.media.jai.registry.RIFRegistry.create(RIFRegistry.java:332)
at com.sun.media.jai.opimage.FileLoadRIF.create(FileLoadRIF.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.media.jai.FactoryCache.invoke(FactoryCache.java:122)
at javax.media.jai.OperationRegistry.invokeFactory(OperationRegistry.java:1674)
at javax.media.jai.ThreadSafeOperationRegistry.invokeFactory(ThreadSafeOperationRegistry.java:473)
at javax.media.jai.registry.RIFRegistry.create(RIFRegistry.java:332)
at javax.media.jai.RenderedOp.createInstance(RenderedOp.java:819)
at javax.media.jai.RenderedOp.createInstance(RenderedOp.java:770)
PlanarImage image = img.createInstance();
Thanks a lot for the help.

I'm having this issue too - did you find any more information on this?
-
JNI - Unhandled exception - Type=Segmentation error vmState=0x00000000
I am getting the error below. Can someone help me solve this problem?
Unhandled exception
Type=Segmentation error vmState=0x00000000
J9Generic_Signal_Number=00000004 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
Handler1=0026A0A4 Handler2=002AE854 InaccessibleAddress=4F525249
EDI=4F525245 ESI=243C800E EAX=00008000 EBX=A25A90F4
ECX=08844FD4 EDX=A25A90F0
EIP=00FD59A4 ES=C040007B DS=C040007B ESP=005FC06C
EFlags=00210246 CS=00000073 SS=0000007B EBP=005FC094
Module=/home/nr/ibm-java2-i386-50/jre/bin/libj9gc23.so
Module_base_address=00F9B000
Target=2_30_20080314_17962_lHdSMr (Linux 2.6.18-8.el5)
CPU=x86 (1 logical CPUs) (0x2b3a2000 RAM)
JVMDUMP006I Processing Dump Event "gpf", detail "" - Please Wait.
JVMDUMP007I JVM Requesting System Dump using '/home/nr/workspace/PingProbe/core.20080619.211936.6371.0001.dmp'
JVMDUMP010I System Dump written to /home/nr/workspace/PingProbe/core.20080619.211936.6371.0001.dmp
JVMDUMP007I JVM Requesting Snap Dump using '/home/nr/workspace/PingProbe/Snap.20080619.211936.6371.0002.trc'
JVMDUMP010I Snap Dump written to /home/nr/workspace/PingProbe/Snap.20080619.211936.6371.0002.trc
JVMDUMP007I JVM Requesting Java Dump using '/home/nr/workspace/PingProbe/javacore.20080619.211936.6371.0003.txt'
JVMDUMP010I Java Dump written to /home/nr/workspace/PingProbe/javacore.20080619.211936.6371.0003.txt
JVMDUMP013I Processed Dump Event "gpf", detail "".
Thanks in advance
Narasimha Rao Konjeti

"I know segmentation error means physical memory is a problem"
No, it doesn't. It is a virtual memory problem: it means the application has used an address that isn't mapped into the process's address space, i.e. a wild pointer value or a major array-index-out-of-bounds issue. It is a VM bug. Report it to the Bug Parade, or as shown in the dump.
-
Hi,
I am using NW2004s BI7 on Windows 2003 Server with Oracle 10g. DB02 shows segment errors and some other errors... could you please tell me how and where I can fix this problem?
Screenshot
http://www.flickr.com/photos/38842895@N04/4583962877/sizes/o/
http://www.flickr.com/photos/38842895@N04/4584591004/sizes/o/in/photostream/
Regards,

Hello Angeline,
The errors/warnings are expected in DB02 where check conditions have been set in table DBCHECKORA; see SAP note 483856. The table can be updated with transaction DB17.
The real issue is whether the errors/warnings are "true". You can check this yourself: for example, in the second screenshot you have missing stats for table "SAPSR3./BIC/B0000011000", and also ORA-01653, which indicates a tablespace overflow and can be solved using OSS note 3155.
It is also a good idea to have the latest BRTOOLS patch level installed, to avoid any bugs which could lead to incorrect or unjustified alerts in the DB02/DBACOCKPIT transaction.
At any time you can deactivate or delete checks via DB17 (Active = "N") in the DBCHECKORA table. In 10g, if you have problems with incorrect alerts, you can reset the entire contents of table DBCHECKORA to the standard SAP settings by importing the SQL script dbcheckora10_oltp.sql or dbcheckora10_olap.sql. For more information, see note 403704 (the scripts are attached to that note).
Best Regards
Rachel -
Post Author: aschreiber
CA Forum: Data Integration
We are running ETLs that are getting segmentation errors. The Job Server is on an HP-UX Itanium server (HP-UX bizdev1 B.11.23 U ia64) and the Designer is on Windows XP. We are on version 11.5.3 of Data Integrator. Has anyone encountered this error, and if so, how was it resolved?
24611 1 SYS-170101 9/10/07 2:36:23 PM System Exception <Segmentation Violation> occurred in the context <|Dataflow HNS_DF_LOAD_SERVICE_ORDER|Transform24611 1 SYS-170101 9/10/07 2:36:23 PM QRY_ToJoinTwoSources_factServiceOrder>

Post Author: wdaehn
CA Forum: Data Integration
Can you watch the al_engine process memory via the "top" command on HP-UX? It might be that you run out of memory (2GB is the limit). Especially given the name of your query, you should consider checking the join rank of your two source tables inside the DF; maybe DI caches the wrong one. In other words, is one source small and the other big? Then give the big one a join rank of 100 and the other a join rank of 1 (everything > 1 is fine; 0 means "automatic"). -
"mapping null reg" compiler error LV2009
Opens fine in LV 8.6, but gives "Compiler error. mapping null reg" error in LV 2009.
If you get rid of the 2D transpose, the code compiles fine.
Attachments:
Bitmap to Graphic Block v1.0.vi 17 KB

Thanks for reporting this. The issue has actually already been addressed in LabVIEW 2010.
Regards,
Will
Certified LabVIEW Architect, Certified Professional Instructor
Choose Movement Consulting
choose-mc.com -
Hello experts,
My problem is as follows:
I need to perform a large transaction on one of my tables in my
database. I have to delete a large number of records using a
command as follows:
Delete from mytable where ordernumber like '2000%';
Each time I launch this command I get the following error:
ERROR at line 1:
ORA-01562: failed to extend rollback segment (id = 3)
ORA-01628: max # extents (30) reached for rollback segment R03
I know that I have a problem with my rollback segment. I have thus created a large rollback segment so that my transaction can use it.
I don't know how to tell my transaction to use the large rollback segment that I created.
(I know there is a command, "set transaction use rollback segment large_rs1"; I have tried it in SQL*Plus and it gives me the error ORA-01453: SET TRANSACTION must be first statement of transaction.)
Please help
thanks in advance for a reply
Kind regards
Yogeeraj

First, you have to create a bigger rollback segment (i.e. RBBIG).
Then, before each query, you must type:
set transaction use rollback segment rbbig;
This will force the use of the specified rollback segment.
Hope this helps. -
Segmentation error in background
Hello,
We have a problem creating a target group after a patch upgrade.
We want to create a target group using the infoset CRM_MKTTG_BP_BIRTHDATE. The filter that we apply is +1d.
If we build the TG in background, we get the error "Business partner does not exist" for all the partners in the TG. On the other hand, if we take a look at the error, the number of partners that we are obtaining is correct. The program that generates the errors is CRM_MKTTG_TG_BUILD.
But if we build the TG directly, not in background, we get no errors and everything looks good.
As I said, we are getting this error after a patch upgrade.
In the attachment you have the result of what I described. The first run was online, and the second was in background, where every partner returned the error "The business partner does not exist."
Thanks.

Hi Juan,
From the screenshot I see that you are using the Segment Builder applet. May I know what version of CRM you are using? When you say patch upgrade, to which patch level have you upgraded?
Whatever you are trying to do here appears to be standard. I would suggest you open an SAP message for this issue.
Hemanth -
Segmentation Error : Server returned HTTP response code: 500 for URL
Hi,
when we do customer segmentation in the Segment Builder Java applet, we create a target group using 2 or more criteria, and then it prompts us with an error "Communication Error" - Server returned HTTP response code: 500 for URL: http//xxxxxxxxxxx/bc/bsp/sap/CRM_MKTTG_SEGAP/communication.do
we're in CRM 7.0 SP 6.
What we have done
- activated the service CRM_MKTTG_SEGAP
- implement sap note 1481289, 1359890, 1161753
any info is really appreciated.
Thanks
JD

Hi,
The communication error occurs because two active versions of the Segment Builder JAR file are present; deleting the older version resolves this issue.
Go to SE80, select the BSP application CRM_MKTTG_SEGAP, check segmentbuilder.jar (the Segment Builder applet) under the MIME folder, check the size, and delete the older version.
Regards,
Satish Bondu -
BD10 and BD21 - Idoc custom segment - Error in BD21 but no error in BD10
Hi All,
I have a custom segment 'ZXXXXX' added to MATMAS05 idoc type.
When I use BD10 to send a material to another system I don't have any errors but when I use BD21 Change pointers I'm getting an internal error.
Is there something I need to add so that it works both for BD10 and BD21 without any errors?
Please help.
Meghna

Do you have a filter in the distribution model for a field in your custom segment?
If yes, you have to assign parameter CIMTYP (a changing parameter) in your customer exit with the value of your IDoc extension name.
Example:
IDOCTP = MATMAS05
Extension = ZZMATMAS
P_CIMTYP = 'ZZMATMAS'.
That's all!
Hi,
I am doing IDOC->FILE. When I post the IDoc I am getting this error in R/3 - "Segment 'ZMESSAGE', segment number '000001' not correct in structure ZIDOC".
thx and regards,
Ansar.

Hi Ansar,
In the R/3 system from which the IDoc is triggered, check whether the IDoc segment ZMESSAGE is released.
Please also check whether the latest IDoc metadata exists in IDX2. If not, please delete it using the IDX_RESET_METADATA report and try again.
Also compare the structure you are using for the IDoc with the one being used in the Integration Repository. If they differ, please reimport it.
Cheers
JK -
How to find out which segment of an IDoc errored out
Hi All,
If an IDoc errors out, how can we know which IDoc segment needs to be corrected without going through all the segments? Is there a transaction or table that shows the status of the error segment?
Thanks in Advance
Swapna!

Hi,
There is no such transaction. Sometimes WE02 shows the errored data segment. To check whether it shows the error segment or not:
- go to WE02 and display your errored IDoc.
- from the menu select "Edit --> Segments with errors".
If it shows errored segments, you are lucky. Otherwise you will have to figure out which segment has the error.
Regards,
RS