Mapping Issue for EDI 823 from XI to R/3
Hi All,
We are receiving an EDI 823 from XI into R/3, and the incoming XML file looks like this:
<FINSTA01>
<IDOC BEGIN="1">
<EDI_DC40 SEGMENT="1">
<MANDT>110</MANDT>
<DOCNUM>0000000000000000</DOCNUM>
<DIRECT>1</DIRECT>
<IDOCTYP>FINSTA01</IDOCTYP>
<MESTYP>LOCKBX</MESTYP>
<STD>X</STD>
<STDVRS>003040</STDVRS>
<STDMES>823</STDMES>
<SNDPOR>EXTD_BALE</SNDPOR>
<SNDPRT>B</SNDPRT>
<SNDPRN>1000000015</SNDPRN>
<SNDSAD>1000000015</SNDSAD>
<RCVPOR>SAPYRD</RCVPOR>
<RCVPRT>LS</RCVPRT>
<RCVPRN>YRDCLNT110</RCVPRN>
</EDI_DC40>
Now the problem is that the incoming IDoc ends up in error.
Recipient Information
Port : <empty>
Partner No: YRDCLNT110
Partn Type: LS
Sender Information
Port: SAPYXD
Partner No: EXTD_BALE
Partn Type: LS
However, no partner by the name EXTD_BALE exists in R/3. The interesting part is that EXTD_BALE is an IDoc adapter port name in XI.
Please provide inputs on this.
Regards,
Karthik
Hi, any feedback on this one?
Similar Messages
-
Mapping error for EDI 823 from XI to R/3
Hi All,
We are receiving an EDI 823 from XI to R/3 and the XML file looks like this as below.
Partner No: YRDCLNT110
Partn Type: LS
Sender Information
Port: SAPYXD
Partner No: EXTD_BALE
Partn Type: LS
However, no partner by the name EXTD_BALE exists in R/3. The interesting part is that EXTD_BALE is an IDoc adapter port name in XI.
Please provide inputs on this.
Regards,
Karthik
Hi Karthik,
Have you configured an R/3 port in IDX1 of XI? This port holds the RFC destination pointing to R/3 and is used by XI to post into R/3.
Regards,
Akshay -
ECC to CRM Mapping Issue for vendor data
Hi,
I am facing an issue replicating vendor data from ECC to CRM. One of the vendors is not confirmed on the ECC side, and another vendor's time zone is not maintained in the time-zone master on the CRM side. This may be the issue.
But where exactly can we see the mapping of the tables between ECC and CRM? Then in future we can maintain the mandatory data based on that mapping to avoid BDoc failures.
Hello Rahul,
The field (MARA-BISMT) is not used in CRM system. If you want this
field to be maintained in CRM as well, and to be transferred from R/3
to CRM you must enhance the standard by customer extensions.
The procedure for customer enhancements for Download from R/3
to CRM is described in the following document:
SAPNet, alias crm, > Media Library > Documentation >
Key Capabilities > Master Data > CRM Product: Customer
Enhancements for Download/Upload
For further information about Product Master enhancements and how to
find technical documentation about them, please refer to the attached
Note 428989.
Thanks & regards,
Krishnen -
Robohelp - TFS Mapping issue for different versions
Hi,
I'm using Adobe Robohelp 9 with Team Foundation Server 2012 on Windows 7 - 64 bit. I've also installed Microsoft Visual Studio Team Foundation Server 2012 MSSCCI Provider. I've mapped all the project folders on my machine. We create HTML help (.chm) for our application.
I have different versions for each module of our entire product on Team Foundation Server 2012 (As we have multiple modules in our product), for example x+1, x+2, x+3 and so on.
Each module has different Robohelp projects for every version/branch. This means, if the Online help for any module from any version is changed, we update it in the respective version of the Online help project.
When we move on to the next version, because of the binary files the next version contains the same path and properties as the previous version. We need a separate project per module for the next version. For this, I have changed the TFS project path of every .xpj project for every version. For example, if I'm working on version x+2, then I've changed the TFS project path from x+1 to x+2 manually on the x+2 version/branch and then started working on it.
As these RoboHelp projects (for each module) are mapped to my local folder on the C drive, it shows a single workspace mapped with the same path on my machine. This should be the ideal condition for all my projects, as follows:
But sometimes when I try to update any file from version x+2 and then check-in my changes, the changes are getting checked-in to the previous version, i.e. x+1. And if I see the workspace of the x+1 version, it shows one more wrong workspace added below the correct one. This new workspace contains the Source control folder path of x+1 version and the Local folder path of x+2 version. That means, my robohelp project folder of x+2 version from my local drive is getting mapped to TFS folder of version x+1. That is why all the changes done in Robohelp project files from x+2 version are going in to x+1 version.
The strange thing here is that when I try to check in my changes in x+2, it shows the correct path of the x+2 version. But they all go to the x+1 version, which should not happen.
If anyone can help me in this problem, it would be a great help!
Thanks in advance and waiting for solution,
Dipali
So, if I understand correctly, you create a duplicate of the project and
check that in at a different location in TFS. That is the way to go.
My guess is that the XPJ still references the old version's path. Try
checking out the XPJ with TFS and opening it with Notepad. Search for
'vc::cookie' and check that the path there points to the correct path of
the project in TFS. It probably still references the old path, and so
RoboHelp checks it in at the old path.
Kind regards,
Willam -
ITunes Movie Artwork Issue For Videos Purchased From iTunes
There appear to be a number of different issues with iTunes and Movie Artwork but my issue seems to be a little different – and I have a workaround for it – so I thought I'd post it here in the hope that it will save someone else some time.
Config: iMac 27" / Latest Mountain Lion / Itunes 11 *in 64 bit mode*
Issue: HD movies purchased and downloaded in iTunes won't let me add artwork. I use 'get info' on the movie in question, select the artwork tab, and then drag a picture across from Google Images. It appears to work fine. I collapse the 'get info' window. To check whether the artwork was saved, I use the 'get info' command on the same movie. The artwork is not there; it was not saved.
Additional Info:
1. I have no problem adding artwork (using the exact same process) to any movies that I've ripped from DVDs using Handbrake.
2. I discovered that the movies in question that I downloaded wouldn't play when launched from iTunes. The failure message said that playback required Quicktime. I have old and new versions of Quicktime on my Mac. Didn't seem to make sense. I researched the failure to play problem and the workaround for that also fixed my artwork problem.
Workaround:
1. Quit out of iTunes.
2. In the applications folder, find iTunes, right click the app file and select 'get info'.
3. Tick the box that says "open in 32 bit mode". Collapse the 'get info' window.
4. Open iTunes.
5. Launch the movie in question. If you get a failure message, just hit 'cancel'. The movie will open and begin playing, likely with no sound.
6. Quit the movie.
7. Select the movie in question and do 'get info' on it.
8. Select the artwork tab and drag your artwork into the blank window. Collapse the 'get info' window.
9. Repeat steps 5 through 8 for all the movies you've had trouble with.
10. Repeat steps 1 through 4, but this time untick the box that says "open in 32 bit mode".
All your artwork should be ok now, and you should be able to make changes for those files from this point on without using this workaround.
So - my question is, does anyone know why there is an issue with 32-bit vs 64-bit and if there is a way to fix it so this workaround isn't needed?
Cheers.
I am having this same problem ever since I upgraded to iTunes 7.x. When I was using 6.x, the videos displayed fine, but after the upgrade they were no longer viewable. The audio skips and I only get flashing colors in the panel where the video should be displayed. I am currently using iTunes 7.5 and have not only a problem with video display but also cannot view album art in the Cover Flow display, only a series of different colored squares where the album art should be displayed. I have two Sony Vaios; one works and one does not. The Vaio PCV-RZ34G with 1GB RAM does not work, but a Vaio VGC-RB53 with 512MB works fine. I have searched for video driver updates and have not found any solution. The problem only seems to appear with video played through QuickTime; video played with other video applications works fine. I have just lived with no QuickTime video or Cover Flow on the PCV-RZ34G. This has been an excellent system, with the only problem being the video conflict with QuickTime.
-
Mapping Issue for Synchronous interface
Hi Experts,
We are facing the below issue in a SOAP to RFC synchronous scenario.
Data from WS is successfully uploaded to RFC via RFC request. However we face the below issue when the RFC response is received.
com.sap.aii.utilxi.misc.api.BaseRuntimeException:Character reference "&#00" is an invalid XML character.
We receive this error particularly when one field has the below values.
<FIELD> X 20140812201409101211 1 USD TP 00000000 </FIELD>
When we click on view source the same field shows as below:
<FIELD> ����������������������X 20140812201409101211 1 USD ����������������TP 00000000�������</FIELD>
Even while pasting the same source xml from view source in Message Mapping we get the below error:
Unable to display tree view; Error when parsing an XML document (Character reference "�" is an invalid XML character.)
How is a simple value getting converted to such a huge value when we open it using view source?
Please help me resolving the same.
Regards,
Shai
Hi Shaibayan,
It looks like there is an inconsistency: the RFC destination maintained from ECC to PI is set as non-Unicode.
In the RFC channel, check the Unicode flag.
Or:
Write a Java mapping to remove the special characters.
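Osman's second suggestion (a Java mapping that strips the illegal characters) can be sketched as below. This is a minimal sketch, not the exact UDF: the class name is made up, and it filters by the XML 1.0 legal character ranges, which is what rejects `&#00` references like the one in the error.

```java
public class XmlSanitizer {
    // Keep only characters that are legal in XML 1.0:
    // tab, newline, carriage return, and the ranges 0x20-0xD7FF and 0xE000-0xFFFD.
    public static String stripInvalidXmlChars(String in) {
        StringBuilder out = new StringBuilder(in.length());
        for (int i = 0; i < in.length(); i++) {
            char c = in.charAt(i);
            boolean valid = c == 0x9 || c == 0xA || c == 0xD
                    || (c >= 0x20 && c <= 0xD7FF)
                    || (c >= 0xE000 && c <= 0xFFFD);
            if (valid) {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```

Applied to the payload string before parsing, this would turn the field full of NUL bytes back into the plain value. (If you need characters above U+FFFF you would have to handle surrogate pairs as well; this sketch ignores them.)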
Regards
Osman -
Mapping issue for condition must be satisfied
Hi All,
I am using IDoc ORDERS05 for my IDoc-to-File interface,
in which I have the partner function PARVW, with the values AG and LF.
I must populate my target field VENDORCODE from the IDoc field PARTN if PARVW is AG.
I must populate my target field BUYER from the IDoc field PARTN if PARVW is LF.
Please help me with how to do this and what condition I must use.
Thanking you
Sridhar
Hi,
Since you are mapping them on the target side, you can use your IDoc variables in multiple places if you are mapping graphically.
i.e. 1. VENDORCODE -> if PARVW is equal to AG, then map PARTN to VENDORCODE
2. BUYER -> if PARVW is equal to LF, then map PARTN to BUYER
Make sure you set the context for both PARVW and PARTN to the upper-level parent and not the immediate level.
for eg:
IDOC
---E2EDKA1
---PARVW
|
---PARTN
Set it to IDOC and not E2EDKA1. If you still have doubts about context, you can post your query back...
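If the graphical if/equals chain gets unwieldy, the same selection can also be sketched as plain Java. This is an illustration only: the class name, the parallel PARVW/PARTN arrays, and the List return are assumptions, not the exact PI UDF signature.

```java
import java.util.ArrayList;
import java.util.List;

public class PartnerFilter {
    // Given parallel PARVW/PARTN arrays (one pair per partner segment),
    // collect each PARTN whose PARVW matches the wanted partner function.
    public static List<String> filterPartner(String[] parvw, String[] partn, String wanted) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < Math.min(parvw.length, partn.length); i++) {
            if (wanted.equals(parvw[i])) {
                out.add(partn[i]);
            }
        }
        return out;
    }
}
```

Calling it once with "AG" feeds VENDORCODE, and once with "LF" feeds BUYER, mirroring the two graphical mappings above.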
Thanks,
venkatesh -
Mapping Issue for IDoc to JDBC interface
Hi All,
I am having problem in implementing logic in IDoc to JDBC interface where I have to filter out E1WBB07-KSCHL = VKP0.
Source IDoc structure is like ->
E1WBB01(occ 0 -1000)
|-> E1WBB03 (occ 0-100)
|-> E1WBB07(occ 0-1000)
|-> KSCHL
DATAB
DATBI
Now, For each KSCHL = VKA0 there should be a duplicate VKP0 record. From these 2 records only the VKA0 should get processed and VKP0 ignored.
Duplicates for VKP0 and VKA0 can be identified by identical DATAB and DATBI.
Suppose, in one E1WBB03 segment,there are 4 E1WBB07 segment having following values.
1: KSCHL=VKP0, DATAB=20102011, DATBI=25102011
2: KSCHL=VKP0, DATAB=26102011, DATBI=30102011
3: KSCHL=VKA0, DATAB=26102011, DATBI=30102011
4: KSCHL=VKP0, DATAB=01112011, DATBI=31129999
2 & 3 are duplicates. From these, 2 should get dropped.
As a result only 1, 3 and 4 should get processed.
How can I proceed with this? I have tried some workarounds but have not been able to do it successfully. Is a UDF required to compare DATAB and DATBI? If yes, how can it be written?
Thnx in advance,
Praveen
Check the mapping below:
Change the context of DATAB, DATBI and KSCHL to E1WBB03 (right-click -> Context) in all the mappings shown below.
1)
DATAB
------------concat-----sort----splitByValue(value change)-----collapseContexts---TargetNode
DATBI
2)
DATAB
------------concat-----sortByKey \
DATBI / \
KSCHL------------/ \
----------------------------------------formatByExample----sort-----UDF1----Target KSCHL
DATAB /
----concat---sort--splitByValue(value change)-
DATBI
3)
DATAB
---concat ( ; )-----sort-splitByValue(value change)---collapseContexts--splitByValue(each value)--UDF2---TargetDATAB
DATBI
4)
DATAB
------------concat ( ; )-----sort----splitByValue(value change)-----collapseContexts--splitByValue(each value)--UDF3---TargetDATBI
DATBI
UDF1: execution type: all values of a context... input: var1
int a = var1.length;
int count = 0;
if (a >= 2) {
    for (int i = 0; i < a; i++) {
        if (var1[i].equals("VKA0"))
            count = count + 1;
    }
} else {
    result.addValue(var1[0]);
}
if (count > 1) {
    for (int i = 0; i < count; i++)
        result.addValue("VKA0");
}
UDF2:execution type: single value...input: var1
String [] temp= var1.split(";");
return temp[0];
UDF3: execution type: single value...input: var1
String [] temp= var1.split(";");
return temp[1]; -
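Outside the graphical mapping, the filtering rule itself (drop a VKP0 record whose DATAB/DATBI match an existing VKA0 record in the same E1WBB03 context) can be sketched in plain Java. The class and method names here are illustrative, not part of any PI API; each record is a {KSCHL, DATAB, DATBI} triple.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ConditionDedup {
    // Each record is {KSCHL, DATAB, DATBI}. Drop a VKP0 record when a VKA0
    // record with identical DATAB/DATBI exists in the same context.
    public static List<String[]> filter(List<String[]> records) {
        Set<String> vka0Keys = new HashSet<>();
        for (String[] r : records) {
            if ("VKA0".equals(r[0])) {
                vka0Keys.add(r[1] + ";" + r[2]);
            }
        }
        List<String[]> kept = new ArrayList<>();
        for (String[] r : records) {
            boolean duplicateVkp0 = "VKP0".equals(r[0]) && vka0Keys.contains(r[1] + ";" + r[2]);
            if (!duplicateVkp0) {
                kept.add(r);
            }
        }
        return kept;
    }
}
```

With the four example E1WBB07 values from the question, this keeps records 1, 3 and 4 and drops record 2, which is exactly the required outcome.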
Authorization issue for Jump query from Summary to Detail
Hello Gurus,
I am facing an interesting issue in Jump query authorization.
I have a query on a summary cube which has Company Code as an authorization-relevant object. From this query I launch a query on a detail cube. The detail cube has Company Code and Customer as authorization-relevant objects; Customer is present as a free characteristic in this query. The summary cube doesn't have the Customer object at all.
When the user drills down to the Customer level in the detail query he gets an authorization error. After this, if he just refreshes the query, it works fine.
Can anybody please suggest any innovative workaround for this issue?
Gautam -
Issue for choosing disk from managed ASM
Hello All,
The Oracle Grid Infrastructure (11gR2) installation is in progress, and I am not able to select disks from the managed ASM disk group. Details are listed below:
uname -a
Linux hostname.host.com 2.6.18-194.el5 #1 SMP Mon Mar 29 22:10:29 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
createdisk has been completed for the following ASM disks:
ASMDATA1
ASMDATA21
ASMDATA22
OCR_VOTE01
OCR_VOTE02
OCR_VOTE03
# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
# oracleasm listdisks
ASMDATA1
ASMDATA21
ASMDATA22
OCR_VOTE01
OCR_VOTE02
OCR_VOTE03
$./runcluvfy.sh comp nodereach -n devrac1,devrac2 -verbose
Pre-check for cluster services setup was successful on all the nodes.
$./runcluvfy.sh stage -pre crsinst -n devrac1,devrac2 -verbose
Pre-check for cluster services setup was successful.
$./runcluvfy.sh comp nodecon -n devrac1,devrac2 -verbose
Verification of node connectivity was successful.
$./runcluvfy.sh comp sys -n devrac1,devrac2 -p crs -osdba crs -orainv oinstall
Verification of system requirement was successful.
$./runInstaller
INSTALLATION OPTION: Install and Configure Grid Infrastructure for a Cluster
INSTALLATION TYPE: Advanced Installation
PRODUCT LANGUAGE: English
GRID PLUG AND PLAY:
SCAN/GNS configuration:
cluster Name: devrac1
SCAN Name: scan.devrac1.domain.company.net
SCAN Port: 1521
GNS sub Domain: devrac1.domain.company.net
GNS VIP Address: 192.168.112.32
CLUSTER NODE INFORMATION
Hostname: devrac1.domain.company.net, Hostname: devrac1.domain.company.net
Virtual IP Name: AUTO,AUTO
SSH Connectivity: PASS
NETWORK INTERFACE USAGE
Interface Name:eth0, eth1
Subnet: 192.168.112.0,10.10.10.0
Interface Type: Public, Private
STORAGE OPTION: ASM
CREATE ASM DISK GROUP
DiskGroupname:ORCLDATA
Redundancy: External
If 'Candidate Disks' is selected, it doesn't list any disk path.
If 'All Disks' is selected, it lists all the disks configured for ASM, but I can't check (select) any disk path.
When we click the "Next" button, the following error message is shown:
[INS-30507] Empty ASM disk group.
How can I select disks from the managed ASM disk group at this point? What have I missed in my Oracle Grid Infrastructure installation?
Thanks in advance.
Regards,
Suresh Gautam
Hello Harish,
I don't think there is an issue with the discovery string, as I can see the list of all disk paths for ASM, but I couldn't include (check) the listed disks for the new ASM disk group. I couldn't see any option to enable tracing for the OUI while providing the configuration for the setup; it may trace the information once the installation starts. Please correct me if I am wrong.
Configuration of /etc/sysconfig/oracleasm is as follows:
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
I couldn't understand how the scan order can result in such an issue during installation.
Anyway, I made the modifications in /etc/sysconfig/oracleasm as you suggested, then re-initiated the installation process after a system reboot. The modified parameters are listed below for your reference:
ORACLEASM_SCANORDER="emcpower sd"
ORACLEASM_SCANEXCLUDE="sda sdb sdc sdd sde sdf sdg"
But this couldn't resolve the issue.
The output of "fdisk -l" is as follows; it may give you additional information:
========================================================================
# fdisk -l
Disk /dev/sda: 72.9 GB, 72999763968 bytes
255 heads, 63 sectors/track, 8875 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 2053 16386300 82 Linux swap / Solaris
/dev/sda3 2054 8875 54797715 83 Linux
Disk /dev/sdb: 288.1 GB, 288161071104 bytes
255 heads, 63 sectors/track, 35033 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 35033 281402541 83 Linux
Disk /dev/sdc: 287.7 GB, 287762808832 bytes
255 heads, 63 sectors/track, 34985 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 34985 281016981 83 Linux
Disk /dev/sdd: 288.5 GB, 288559333376 bytes
255 heads, 63 sectors/track, 35082 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 1217 9775521 83 Linux
/dev/sdd2 1218 2434 9775552+ 83 Linux
/dev/sdd3 2435 3651 9775552+ 83 Linux
/dev/sdd4 3652 35082 252469507+ 83 Linux
Disk /dev/sde: 288.1 GB, 288161071104 bytes
255 heads, 63 sectors/track, 35033 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 35033 281402541 83 Linux
Disk /dev/sdf: 287.7 GB, 287762808832 bytes
255 heads, 63 sectors/track, 34985 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 34985 281016981 83 Linux
Disk /dev/sdg: 288.5 GB, 288559333376 bytes
255 heads, 63 sectors/track, 35082 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 1217 9775521 83 Linux
/dev/sdg2 1218 2434 9775552+ 83 Linux
/dev/sdg3 2435 3651 9775552+ 83 Linux
/dev/sdg4 3652 35082 252469507+ 83 Linux
Disk /dev/emcpowera: 288.1 GB, 288161071104 bytes
255 heads, 63 sectors/track, 35033 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/emcpowera1 1 35033 281402541 83 Linux
Disk /dev/emcpowerb: 287.7 GB, 287762808832 bytes
255 heads, 63 sectors/track, 34985 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/emcpowerb1 1 34985 281016981 83 Linux
Disk /dev/emcpowerc: 288.5 GB, 288559333376 bytes
255 heads, 63 sectors/track, 35082 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/emcpowerc1 1 1217 9775521 83 Linux
/dev/emcpowerc2 1218 2434 9775552+ 83 Linux
/dev/emcpowerc3 2435 3651 9775552+ 83 Linux
/dev/emcpowerc4 3652 35082 252469507+ 83 Linux
========================================================================
Any kind of follow-up will be highly appreciated.
Regards,
Suresh Gautam -
Network drive map issue for particular user account.
Hi,
One of my users wants to access a network drive, but when he logs off or restarts the system, he loses the mapped drive.
He is a domain user and already has a few mapped drives in his profile, but now he needs to map one more network drive. The special requirement is that he wants to access that mapped drive from any system in our organization, and the drive should not get disconnected/dismounted after he logs off or restarts the system.
Note: User XYZ wants to access a particular mapped network drive from any system where he logs in to the domain, and no other user should be able to see or access that particular mapped drive.
Also, if user XYZ logs off or reboots his own system, then the next time he logs in again from any system, he must be able to see and access that particular mapped drive.
How do I configure this? Please help.
Regards,
Santosh Pawar
Hi,
Using Group Policy Preferences to map drives could meet your requirement. It is based on group membership.
About how to get it, please refer to this guide:
Using Group Policy Preferences to Map Drives Based on Group Membership
https://blogs.technet.com/b/askds/archive/2009/01/07/using-group-policy-preferences-to-map-drives-based-on-group-membership.aspx
For further help, I would like to suggest you ask Directory Services forum:
https://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=winserverDS
Karen Hu
TechNet Community Support -
Issue retrieving data from ITAB
Case: how can we fetch data from a transparent table?
Scenario: calculating the net price for a material on a sales order
(on the basis of material number, sales organization, distribution channel, division, sold-to party, and sales office).
For this, 5 prices are calculated:
a)zmrp
b)zlbj
c)zmlb
d)zdij
e)mwst
A BAPI was developed through which all these data are fetched using joins on transparent and pooled tables.
The BAPI works properly in the SAP environment. But when fetching data from a non-SAP environment (.NET), only the pooled tables return data;
the transparent tables return blank data.
Also, if all parameters in the WHERE clause of the query on the transparent table are hard-coded, then the transparent table also returns data in the non-SAP environment.
For example, instead of writing:
select data from transparent table where matnr = (matnr variable made in the BAPI, entered by the user) and vkorg = (sales org variable made in the BAPI) ... same with all conditions
if we write (this returns data):
select data from transparent table where matnr = '5476665987' and vkorg = '1400' ... same with all conditions
FUNCTION ZBAPI_BAR3.
""Local Interface:
*" IMPORTING
*" VALUE(MATNR) TYPE ZBAPI_IMPORT2-MATNR
*" VALUE(VKORG) TYPE ZBAPI_IMPORT2-VKORG
*" VALUE(VTWEG) TYPE ZBAPI_IMPORT2-VTWEG
*" VALUE(KUNNR) TYPE ZBAPI_IMPORT2-KUNNR
*" VALUE(SPART) TYPE ZBAPI_IMPORT2-SPART
*" VALUE(AUART) TYPE ZBAPI_IMPORT2-AUART
*" VALUE(VKBUR) TYPE ZBAPI_IMPORT2-VKBUR
*" EXPORTING
*" VALUE(RETURN) TYPE BAPIRETURN
*" TABLES
*" ITAB STRUCTURE ZBAPI_TABLE3
DATA: LAND1 LIKE KNA1-LAND1 ,
REGIO LIKE KNA1-REGIO ,
WERKS LIKE VBAP-WERKS ,
KNUMH LIKE A004-KNUMH ,
KBETR LIKE COND_KONW-KBETR ,
KNUMH1(10) TYPE C ,
KNUMH2(10) TYPE C ,
KNUMH3(6) TYPE C ,
KNUMV LIKE ZBAPI_TABLE3-KNUMV ,
KDGRP LIKE KNVV-KDGRP ,
TAXKD LIKE KNVI-TAXKD ,
TAXM1 LIKE MLAN-TAXM1 ,
ADRNR LIKE TVBUR-ADRNR ,
REGION LIKE ADRC-REGION ,
DATE TYPE A503-DATAB .
SELECT SINGLE KDGRP FROM KNVV INTO KDGRP WHERE KUNNR = KUNNR AND VKORG = VKORG AND VTWEG = VTWEG AND SPART = SPART .
SELECT SINGLE LAND1 REGIO FROM KNA1 INTO (LAND1,REGIO) WHERE KUNNR = KUNNR .
SELECT SINGLE WERKS FROM VBAP INNER JOIN VBAK ON VBAP~VBELN = VBAK~VBELN INTO WERKS WHERE AUART = AUART .
SELECT SINGLE TAXKD FROM KNVI INTO TAXKD WHERE KUNNR = KUNNR AND ALAND = LAND1 AND TATYP = 'MWST' .
SELECT SINGLE TAXM1 FROM MLAN INTO TAXM1 WHERE MATNR = MATNR AND ALAND = LAND1 .
SELECT SINGLE ADRNR FROM TVBUR INTO ADRNR WHERE VKBUR = VKBUR .
SELECT SINGLE REGION FROM ADRC INTO REGION WHERE ADDRNUMBER = ADRNR .
DATE = SY-DATUM .
SELECT SINGLE KNUMH FROM A931 INTO KNUMH1 WHERE KAPPL = 'V' AND KSCHL = 'ZMRP' AND VKORG = VKORG AND VTWEG = VTWEG AND WERKS = '1410' AND KUNNR = KUNNR AND MATNR = MATNR AND KFRST = SPACE AND DATAB LE DATE AND DATBI GE DATE .
SELECT SINGLE KNUMH FROM A931 INTO KNUMH1 WHERE KAPPL = 'V' AND KSCHL = 'ZMRP' AND VKORG = '1400' AND VTWEG = '10' AND WERKS = '1410' AND KUNNR = '0000100163' AND MATNR = 'A10AN027PNSL' AND KFRST = SPACE AND DATAB LE DATE AND DATBI GE DATE .
SELECT SINGLE zkarigar FROM zkari INTO KNUMH1 WHERE erdat = '20070410' .
SELECT SINGLE KNUMH FROM A004 INTO KNUMH1 WHERE KAPPL = 'V' AND KSCHL = 'ZMRP' AND VKORG = VKORG AND VTWEG = VTWEG AND MATNR = MATNR AND DATAB LE DATE AND DATBI GE DATE .
*'0000280050'
CLEAR KNUMH .
CLEAR KBETR .
SELECT SINGLE KNUMH FROM A503 INTO KNUMH1 WHERE KAPPL = 'V' AND KSCHL = 'MWST' AND ALAND = LAND1 AND WKREG = REGION AND REGIO = REGIO
AND TAXK1 = TAXKD AND TAXM1 = TAXM1 AND KFRST = SPACE AND DATAB LE DATE AND DATBI GE DATE and knumh = '0000279708'.
concatenate '0000' KNUMH into knumh1 .
**KNUMH1 = KNUMH .
**select single kbetr from konp into kbetr where knumh = knumh1 .
**knumh3 = knumh1+4(6) .
***itab-kbetr = kbetr .
**concatenate '0000' knumh3 into knumh2 .
**write:/ knumh2 .
ITAB-KNUMH1 = KNUMH2 .
ITAB-KNUMH1 = KNUMH1 .
APPEND ITAB .
*loop at itab .
*write:/ itab-knumh1 .
*endloop .
CLEAR KNUMH .
CLEAR KBETR .
SELECT SINGLE KNUMH FROM A940 INTO KNUMH WHERE KAPPL = 'V' AND KSCHL = 'ZDIJ' AND VKORG = VKORG AND VTWEG = VTWEG AND KDGRP = KDGRP AND MATNR = MATNR AND KFRST = SPACE AND DATAB LE DATE AND DATBI GE DATE .
SELECT SINGLE KNUMH FROM A931 INTO KNUMH WHERE KAPPL = 'V' AND KSCHL = 'ZDIJ' AND VKORG = VKORG AND VTWEG = VTWEG AND WERKS = WERKS AND KUNNR = KUNNR AND MATNR = MATNR AND KFRST = SPACE AND DATAB LE DATE AND DATBI GE DATE .
ITAB-KNUMH = KNUMH.
APPEND ITAB .
CLEAR KNUMH .
CLEAR KBETR .
ITAB-KNUMH = KNUMH .
APPEND ITAB .
COMMIT WORK AND WAIT.
ENDFUNCTION.
Thanks & Regards
Amrish
Hi,
Please Check this => The specified item was not found.
How to post code in SCN, and some things NOT to do...
Faisal -
Hello all,
My scenario is IDoc to XML file.
E1EDKT1-------segment (1:n)
-TDID----ele
-E1EDKT2----segment (1:n)
-TDLINE-----ele
When the TDID value is X, all values in TDLINE of the underlying segment E1EDKT2 need to be concatenated and passed to target element A.
If the TDID value is Y, all values in TDLINE of the underlying segment E1EDKT2 need to be concatenated and passed to target element B.
I have written a Java function (for all values in a context) to concatenate the TDLINE elements.
Also, the context of TDLINE is changed to E1EDKT1.
I am getting all the concatenated values of TDLINE for TDID=X in target element A, but I am not getting any value in target element B for TDID=Y.
Can anyone suggest what the problem could be?
Regards,
Sandip
Hi Sandip,
Map like this for target element A:
Use a simple 'if' statement; for the 'if' condition, give TDID equals (text function) constant X,
and for 'then': TDLINE --> concattdline (UDF) --> output A.
Map like this for target element B:
Use a simple 'if' statement; for the 'if' condition, give TDID equals (text function) constant Y,
and for 'then': TDLINE --> concattdline (UDF) --> output B.
concattdline UDF:
Create a context UDF with one argument a and name it concattdline.
Imports: java.*;
Add this code:
// concatenate all values in the context, separated by spaces
String value = "";
for (int j = 0; j < a.length - 1; j++) {
    value += a[j] + " ";
}
value += a[a.length - 1];
result.addValue(value);
I just tested this and it should work for you.
Regards,
---Satish -
MAP value for sub materials from New(Main) Material
Hi all
I have an issue in kit breaking, i.e. I have one (main) material which contains 10 sub-materials. When I receive the main material, I receive it with a value of, assume, 10,000.
Now I would like to know whether the 10,000 can be split equally between all the 10 materials automatically.
All 10 materials are newly created, with a MAP value of zero.
Regards
Praveen
As mentioned in my earlier reply, if the material is maintained with price control 'V', then the system tries to settle the variance to inventory. In that case, if the variance is favourable, i.e. the actual cost is less than the standard cost, the negative variance will reduce the value of the inventory and, as a result, the MAP. If the inventory value is less than the amount of the variance, the system issues a message that the MAP will become negative.
Hope this clears up the issue. If yes, please award points.
Regards
Rakesh Pawaskar
Message was edited by:
Rakesh Pawaskar -
Hi All,
I have to send EDI 857 information from SAP. Could you please let me know which IDoc I should use to achieve this functionality? I would appreciate it if anybody could send me the mapping document for EDI 857 from SAP.
Thank You,
Suresh
Hi,
I have to send the shipment/billing information, and the triggering point is when we save the shipment of the delivery. I am using the SHPMNT03 IDoc for the shipment details, and I am thinking of extending the SHPMNT03 IDoc to get the billing details. Is this the right way to send the shipment/billing information for EDI 857?
Thank You,
Suresh