Reg WM-PP Integration simple scenario, SubAssembly GR
This may be a basic question, I'm not sure.
I have a BOM for the finished good as below:
Finished Good
Raw Material 1
Subassembly 1
Subassembly 1 has a BOM as below:
Raw Material 2
Raw Material 3
Two production orders will be created in the system: one for the Finished Good and one for Subassembly 1.
We don't have the collective order functionality.
Now we create the production order for Subassembly 1; it is built physically and the confirmation is done. We are unsure where the sub-assembly is received (GR), how it gets stored, and how the GI is to be done again in the production order for the finished good (as WM is active).
With collective orders, control cycles would ensure backflushing works correctly: the subassembly would be GR'd and GI'd automatically when the finished product's production order is confirmed.
Please help
Sasi
Edited by: Sasi nagireddy on Sep 2, 2011 4:46 AM
Hi,
Better to implement the collective order functionality. If not, you need separate storage locations: 1. semi-finished goods, 2. finished goods. Once you have done the GR for the sub-assembly order, you then do the GI for the main order.
Hope it helps you.
Kuber
Similar Messages
-
XI Simple Scenario..
HI Everyone,
I am new to XI, so please suggest a simple scenario to start with (if possible with step-by-step instructions).
Also, please tell me how to move from simple scenarios to more complex ones, and if possible share the complex scenarios too.
Regards,
Snehal
Snehal,
Doing a file-to-file scenario is the best way to start with XI. These blogs can help you with that:
/people/venkat.donela/blog/2005/03/02/introduction-to-simplefile-xi-filescenario-and-complete-walk-through-for-starterspart1 - File to File Part 1
/people/venkat.donela/blog/2005/03/03/introduction-to-simple-file-xi-filescenario-and-complete-walk-through-for-starterspart2 - File to File Part 2
Regards,
Bhavesh -
Can somebody explain a simple scenario (like 2 or 3 dimensions and 2 or 3 key figures) starting from data load (from flat file, so that I can create using EXCEL) to creating infocube and generating a report.
Or is there any source that I can find the above scenario?
Thanks in advance.
Hi,
You can do this one.
Create one ODS with following key fields.
1. Document number
Data fields.
2. Calendar day
3. Company code
4. Sales org
5. Customer code
6. Customer district
7. Material
8. Quantity
9. Price
Then create a cube with the following dimensions:
Organization, with the following characteristics:
Comp Code
Sales Org
Customer, with the following characteristics:
Customer Code
Customer Dist
Material, with the following characteristic:
Material
Here, you need not consider the document number; the quantity and price go to the fact table.
Please let me know if it worked.
Sriram -
Performance In Simple Scenarios
I have done some performance testing to see if asynchronous triggers performs any better than synchronous triggers in a simple audit scenario -- capturing record snapshots at insert, update and delete events to a separate database within the same instance of SQL Server.
Synchronous triggers performed 50% better than asynchronous triggers; this was with conversation reuse and the receive queue activation turned off, so the poor performance was just in the act of forming and sending the message, not receiving and processing. This was not necessarily surprising to me, and yet I have to wonder under what conditions would we see real performance benefits for audit scenarios.
I am interested if anyone has done similar testing, and if they received similar or different results. If anyone had conditions where asynchronous triggers pulled ahead for audit scenarios, I would really like to hear back from them. I invite any comments or suggestions for better performance.
The asynchronous trigger:
Code Snippet
ALTER TRIGGER TR_CUSTOMER_INSERT ON DBO.CUSTOMER
FOR INSERT AS
BEGIN
    DECLARE
        @CONVERSATION UNIQUEIDENTIFIER ,
        @MESSAGE XML ,
        @LOG_OPERATION CHAR(1) ,
        @LOG_USER VARCHAR(35) ,
        @LOG_DATE DATETIME;
    -- Reuse an existing conversation handle (the conversation reuse noted above)
    SELECT TOP(1)
        @CONVERSATION = CONVERSATION_HANDLE ,
        @LOG_OPERATION = 'I' ,
        @LOG_USER = USER_NAME() ,   -- USER_NAME() instead of USER(), which is not valid T-SQL
        @LOG_DATE = GETDATE()
    FROM SYS.CONVERSATION_ENDPOINTS;
    -- Build one XML document covering all inserted rows
    SET @MESSAGE =
        ( SELECT
            CUST_ID = NEW.CUST_ID ,
            CUST_DESCR = NEW.CUST_DESCR ,
            CUST_ADDRESS = NEW.CUST_ADDRESS ,
            LOG_OPERATION = @LOG_OPERATION ,
            LOG_USER = @LOG_USER ,
            LOG_DATE = @LOG_DATE
          FROM INSERTED NEW
          FOR XML AUTO );
    SEND ON CONVERSATION @CONVERSATION
        MESSAGE TYPE CUSTOMER_LOG_MESSAGE ( @MESSAGE );
END;
The synchronous trigger:
Code Snippet
ALTER TRIGGER TR_CUSTOMER_INSERT ON DBO.CUSTOMER
FOR INSERT AS
BEGIN
    DECLARE
        @LOG_OPERATION CHAR(1) ,
        @LOG_USER VARCHAR(35) ,   -- widened to match the asynchronous version
        @LOG_DATE DATETIME;
    SELECT
        @LOG_OPERATION = 'I' ,
        @LOG_USER = USER_NAME() ,
        @LOG_DATE = GETDATE();
    -- Single write: audit rows go straight into the log table
    INSERT INTO SALES_LOG.DBO.CUSTOMER
    SELECT
        CUST_ID = NEW.CUST_ID ,
        CUST_DESCR = NEW.CUST_DESCR ,
        CUST_ADDRESS = NEW.CUST_ADDRESS ,
        LOG_OPERATION = @LOG_OPERATION ,
        LOG_USER = @LOG_USER ,
        LOG_DATE = @LOG_DATE
    FROM INSERTED NEW;
END;
Synchronous audit performs one database write (one insert). Asynchronous audit performs at least an insert and an update (the SEND), plus a delete (the RECEIVE) and an insert (the audit itself): four database writes. If the destination audit service is remote, then the sys.transmission_queue operations must be added (one insert and one delete). So clearly asynchronous audit cannot be on par with synchronous audit; there are at least three more writes to complete. And that neglects all the reads (like looking up the conversation handle) and all the marshaling/unmarshaling of the message (usually some fairly expensive XML processing).
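The write counts in the paragraph above can be tallied in a small sketch (Python; the counts are as stated in the post, not measured against SQL Server):

```python
# Illustrative tally of database writes per audited row, following the
# post's reasoning (SEND = insert + update, RECEIVE = delete, audit = insert).
def audit_writes(mode: str, remote: bool = False) -> int:
    """Return the number of database writes per audited row."""
    if mode == "sync":
        return 1                      # one INSERT into the audit table
    writes = 4                        # SEND (2) + RECEIVE (1) + audit insert (1)
    if remote:
        writes += 2                   # sys.transmission_queue: insert + delete
    return writes

print(audit_writes("sync"))                 # 1
print(audit_writes("async"))                # 4
print(audit_writes("async", remote=True))   # 6
```

This makes the "at least 3 more writes" gap explicit: 4 versus 1 locally, 6 versus 1 when the audit service is remote.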
Within one database, the asynchronous pattern is appealing when the trigger processing is expensive (so that the extra cost of going async is negligible) and reducing the original call's response time is important. It can also help if the audit operations create high contention and deferring the audit reduces it. A more esoteric reason is when asynchronous processing is desired for architectural reasons, such as the possibility of adding a workflow triggered by the original operation and changing this workflow on the fly without impact or downtime (e.g. more consumers of the async message are added, the message is shredded/dispatched to more processing apps and triggers more messages downstream, etc.).
If the audit is between different databases, even within the same instance, then the problem of availability arises (the audit table/database may be down for intervals, blocking the original operations/application).
If the audit is remote (different SQL Server instances), then using Service Broker solves the most difficult problem (communication) in addition to asynchronicity and availability; in that case the synchronous pattern (e.g. using a linked server) is really a bad choice. -
Reg N sender 1 receiver scenario
Hi Experts,
I have one basic doubt: is it possible to have a scenario with multiple senders and one receiver, without using BPM, in a single interface?
Say we have a common sender structure for all N systems, and one sender message interface (MM, IM) which can be reused by all the systems.
Regards,
srinivas
Hi all,
I did an extensive search in SDN; closing this thread. Below are the useful links for the same topics.
Using the wild card character * may be a bit risky, as multiple interfaces might be running for the same receiver; I would prefer a simple BPM. The rest depends on the requirement.
Multiple Senders - One receiver
Regarding Multiple senders to single receiver
Multiple Senders on A2A
How can I have multiple sender agreements in a single interface: it was suggested not to use the wild card "*" but instead create 3 receiver determinations with the same channel.
Multiple Senders to Inbound Proxy Scenario
Multiple sender file may be useful.
Having multiple sender components send to one receiver component ?
WSDL based on multiple sender interfaces: very helpful.
Approach for Multiple Sender? links
Multiple Sender to one Receiver links
Regarding Senders And Receivers
regards,
srinivas -
Multi Mapping for a simple scenario
Hi,
I have a scenario: from the source I'm getting some 10 fields of data, like below:
Data (0..unbounded)
Company_Code 1-1
Order_No 1-1
Material 1-1
Amount 1-1
But my requirement is: on the receiver side I have two structures:
1) Receiver1
Data (0..unbounded)
Company_Code 1-1
Order_No 1-1
2) Receiver2
Data (0..unbounded)
Company_Code 1-1
Material 1-1
Amount 1-1
If the Company Code is 1000 then the data will go to the first receiver, and if the Company Code is 2000 then the data will go to the second receiver.
This is my requirement.
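The intended split can be sketched like this (a Python illustration with hypothetical record dictionaries standing in for the Data segments; in PI the split is done by the multi-mapping itself, not by code):

```python
# Hypothetical stand-in for the multi-mapping split: route each Data segment
# to Receiver1 or Receiver2 based on Company_Code (values from the post).
def route(records):
    receiver1, receiver2 = [], []
    for rec in records:
        if rec["Company_Code"] == "1000":
            # Receiver1 keeps only Company_Code and Order_No
            receiver1.append({k: rec[k] for k in ("Company_Code", "Order_No")})
        elif rec["Company_Code"] == "2000":
            # Receiver2 keeps Company_Code, Material and Amount
            receiver2.append({k: rec[k] for k in ("Company_Code", "Material", "Amount")})
    return receiver1, receiver2

r1, r2 = route([
    {"Company_Code": "1000", "Order_No": "42", "Material": "M1", "Amount": "10"},
    {"Company_Code": "2000", "Order_No": "43", "Material": "M2", "Amount": "20"},
])
```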
For this I did everything correctly from the IR point of view using multi-mapping; even when I test the mapping, it works fine.
But in the ID (Integration Directory) I'm not able to see any interface mappings in the Enhanced Interface Determination; it displays a "No Objects Found" message.
What can I do about this? Any suggestions, please.
regards
Jain
Hi Jain,
I think you are confused about the occurrences.
For your requirement you created a structure with 0..unbounded in the source and receivers.
There is no need to change the message type and interface name structure occurrences.
Just do it like below.
1. Mapping:
Select your source structure.
Select your receiver structures.
Put the condition for creating nodes for the receiver structures.
There is no need to change occurrences.
2. Interface Mapping:
Select your source message interface.
Select your receiver interfaces.
Have (1 mapping) 1 source, 2 receivers.
There is no need to change occurrences.
Now go to Interface Determination, choose Extended, and you will get your mapping.
Regards,
Prakasu -
File content conversion simple scenario
Hi friends,
I am trying FCC for the first time. I created one simple file scenario without FCC, to which I had to provide input in XML format,
like this:
<?xml version="1.0" encoding="UTF-8"?>
<ns0:mt_sender xmlns:ns0="http:/soni.com">
<ponum>1</ponum>
<poqty>2</poqty>
<poamt>3</poamt>
</ns0:mt_sender>
Now I want to use FCC for this, in which I have to provide comma-separated data, e.g. 11,12,114.4.
Please let me know how to do this. I have gone through some blogs on SDN but I didn't find a solution.
Thanks,
Brij
Hi,
If you want the output file in a comma-separated pattern for the incoming XML file, then you can use the FCC parameters below in your receiver channel:
Structure: Recordset,Record
Record.fieldSeparator: ,
Record.endSeparator: 'nl'
Recordset.fieldSeparator: 'nl'
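Illustratively, the conversion those parameters describe behaves like this sketch (Python, for intuition only; the actual work is done by the file adapter's FCC, and the field values are the ones from the post):

```python
# Rough sketch of the receiver-side flattening the FCC parameters describe:
# fields joined by Record.fieldSeparator, records ended by 'nl' (newline).
FIELD_SEPARATOR = ","   # Record.fieldSeparator
END_SEPARATOR = "\n"    # Record.endSeparator 'nl'

def to_flat_file(records):
    """records: list of (ponum, poqty, poamt) tuples parsed from the XML."""
    return "".join(FIELD_SEPARATOR.join(rec) + END_SEPARATOR for rec in records)

flat = to_flat_file([("11", "12", "114.4")])
# flat == "11,12,114.4\n"
```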
Regards,
Swetha. -
Integration Directory Scenario object
Hi,
Can two scenario objects exist in the Integration Directory?
I created one scenario for file adapter to IDoc, and another scenario for SRM.
The file adapter scenario was working, but when I created the SRM scenario the file adapter scenario stopped working; it is not picking up the XML file from the given location.
Kindly suggest what could be the problem.
Thanks
Monzy
Hi,
After configuring and activating your file adapter:
(if you are at SP5) go to http://<hostname>:50000/AdapterFramework/monitor/monitor.jsp
(if you are at SP9) go to RWB > Component Monitoring > Adapter Framework (Adapters; a new pop-up window will appear).
If you have configured your file adapter correctly, you will see a GREEN traffic light there; otherwise you will see a RED one.
Hope this helps you.
Regards,
Siva Maranani. -
Integration Directory Scenario
Hi all!
I want to create a new scenario in the Integration Directory without the wizard. I only get this wizard when I select "new Scenario Object".
Right-clicking doesn't work for selecting e.g. "New...".
Any ideas?
How can I create a new scenario, give it a name and configure it later (no wizard!)?
Hi Steffen,
To create configuration scenarios in your Integration Directory:
1. Select the SCENARIOS tab.
2. Just right-click on the existing scenarios and select New.
3. In the window that opens you will find many options on the right side; just make sure Configuration Scenario is selected.
4. The wizard will let you do the rest.
Hope this helps,
regards,
Bhavesh -
Hi UDDI experts,
For the SAP online documentation shown below, is there any tutorial, examples, or how-to guide available? Has anyone had experience using the SAP J2EE Engine's UDDI registry?
Thanks in advance
Kiran
snippet from online documentation
==============================
Simple UDDI Scenarios
· One company publishes a standard in UDDI and the others can find and use the same standard.
For example, a tour office company may offer rooms from several hotels. This company publishes a Web Service Interface in UDDI and each hotel that wants to be represented by this company implements a Web service that extends this Web Service Interface. Later on, after an agreement between both companies, the tour company may use the WSDL of the Hotel Web service to ask for free rooms, book rooms, and so on.
· A published standard Web Service Interface already exists and you want to see who supports it and to choose the one that suits you best. -
A very simple scenario, but I can't find a solution
Hi,
I have an iPhone running iOS 4.1. It is synced with the iTunes library at home, which has all my music, video, podcasts, apps, etc. Occasionally at work I come across an audio or video clip that I would like to save on the iPhone to watch during the commute back. I am reasonably tech savvy, but I can't find a way to accomplish this task. Since the iPhone is synced with the library at home, it cannot be synced with the iTunes library at work without being wiped and reinitialized. I have found a partial solution for video files using the (now unavailable) VLC app, which allows transfer of FLV or MP4 files, although since hardware acceleration is not available to the VLC app, watching any video above 480p is not possible. The VLC app does not accept MP3 files, so I'm out of luck with those.
In the past I used non-IOS iPods and had no problems syncing it with library at home and being able to manually add music/videos from iTunes library at work.
I'm surprised that the device/ecosystem that is so user friendly suddenly cannot be used for such a simple task. All of the content I am talking about is DRM free and in open formats like MP3.
This limitation of only one iTunes library being allowed to add/manage content on an IOS device seems very not user friendly to me. I understand the need to protect content, but again - I am only talking about non-purchased publicly available content like MP3 files of programs published by radio stations.
I have a VPN connection between home and work, but iTunes does not allow syncing via anything but USB. I have even tried to find some sort of "Virtual USB" port that would allow me to connect iPhone at work and have computer at home see it as USB connected device, but could not find any solution for this on OS X.
Am I missing something trivial or is it impossible to add an MP3 file to an iPhone from anywhere but one iTunes library?
Thank you.
I have finally been able to solve this problem, albeit through an unorthodox method.
I have placed my iTunes library folder on a NAS I have at home and configured both home Mac Pro and Macbook Pro at work to use the same library. Since I have VPN, the file path is the same for both machines, so they both now can access the same library and sync to the phone from it.
Now when I come across something at work that I want to drop onto the iPhone for listening on my way back home, I add the file to the library (it travels over the VPN to my home NAS) and then sync (it travels back over the same VPN to my iPhone).
Since there is about 20 Mbit of bandwidth, the speed and response are acceptable.
The problem is solved.
I wonder, why Apple had to put this strange limitation of only one library being able to sync to the iPhone? Is the rationale behind this decision documented anywhere? -
JAVA API AND ABAP API SIMPLE SCENARIO
Hello MDM gurus
I have never used the Java API or ABAP API to leverage and present MDM functionality on front-end systems like the portal, etc.
Could you please give me everything required to play around with the Java and ABAP APIs?
Points will be given to every valuable answer.
Thanks
Hi Nazeer,
In order to use the Portal you need the Java APIs. To start, refer to the MDM Java docs to get a basic idea of the various classes and methods used in developing a simple Java application, and access it using the Portal.
http://help.sap.com/saphelp_mdm550/helpdata/en/47/9f23e5cf9e3c5ce10000000a421937/frameset.htm
Sample code for Duplicating Repository
public class TestDuplicateRepository {
    public static ConnectionPool simpleConnection;
    public static RepositoryIdentifier repIdentifier, repIdentifier1;
    public static String session;
    public static String connection = "MDMServer_Test";
    public static String repository1 = "Test_Repository";
    public static String repository2 = "Test_Duplicate";
    public static DBMSType dbmsType = DBMSType.MS_SQL;

    public static void main(String[] args) throws CommandException, ConnectionException {
        // Create the connection pool.
        simpleConnection = ConnectionPoolFactory.getInstance(connection);
        // Identify the source and target repositories.
        repIdentifier = new RepositoryIdentifier(repository1, connection, dbmsType);
        repIdentifier1 = new RepositoryIdentifier(repository2, connection, dbmsType);
        // Create a server session.
        CreateServerSessionCommand createServerSessionCmd = new CreateServerSessionCommand(simpleConnection);
        createServerSessionCmd.execute();
        session = createServerSessionCmd.getSession();
        // Authenticate the server session.
        AuthenticateServerSessionCommand auth = new AuthenticateServerSessionCommand(simpleConnection);
        auth.setSession(session);
        auth.setUserName("Admin");
        auth.setUserPassword("Admin");
        auth.execute();
        session = auth.getSession();
        // Duplicate the repository.
        DuplicateRepositoryCommand duplRepCmd = new DuplicateRepositoryCommand(simpleConnection);
        duplRepCmd.setDBMSUserName("sa");
        duplRepCmd.setDBMSUserPassword("abc");
        duplRepCmd.setSession(session);
        duplRepCmd.setSourceRepositoryIdentifier(repIdentifier);
        duplRepCmd.setTargetRepositoryIdentifier(repIdentifier1);
        duplRepCmd.execute();
    }
}
Similarly you can try getting the server version and archiving a repository, and then move on to adding and modifying records, etc.
For the ABAP APIs refer to the link below:
http://help.sap.com/saphelp_mdm550/helpdata/en/44/93aa6e31381053e10000000a422035/frameset.htm
Regards,
Jitesh Talreja -
Hi,
I'm just a beginner with XI. I've got the following scenario to develop. Could anyone help me?
1. After completing production in an external system, I'm receiving a LOIPRO.LOIPRO01 message from the R/3 system with status I0045 (that is, a LOIPRO with status I0045 is being sent from the external system to XI).
2. In that case I should call BAPI_GOODSMVT_CREATE with quantity 0 and the delivery-completed flag.
3. To call that BAPI, the IDoc sent from the external system has insufficient data (only status and process order). Because of that, BAPI_PROCORD_GET_DETAIL must be called before BAPI_GOODSMVT_CREATE.
Should I use BPM, or can I avoid it?
Could anyone describe how it should look in the Integration Repository? (I would be very grateful for any step-by-step description.)
Thanks in advance.
Hi Lukasz,
While your scenario could be accomplished using a BPM, this would not be optimal. One reason is the performance overhead incurred and another is that the information received from BAPI_PROCORD_GET_DETAIL could (in principle) be stale by the time you later call BAPI_GOODSMVT_CREATE because there is no way to call the 2 BAPIs from XI in the same transactional context.
A better solution would be to wrap the 2 BAPI calls in either a remote-enabled function module or, better yet, a receiver proxy in the R/3 system. Thus, XI simply calls this RFC or proxy and it, in turn, takes care of calling the 2 BAPIs in sequence.
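The wrapper idea can be sketched as follows (Python stand-ins with hypothetical names for the two BAPIs; the real wrapper would be an ABAP function module or proxy in R/3):

```python
# Sketch of the "wrap both BAPI calls in one callable" pattern described above.
# get_detail / goodsmvt_create are placeholders, not real SAP APIs.
def get_detail(process_order):
    # stands in for BAPI_PROCORD_GET_DETAIL: fetch the data the IDoc lacks
    return {"order": process_order, "plant": "1000", "material": "M-01"}

def goodsmvt_create(detail, quantity, delivery_completed):
    # stands in for BAPI_GOODSMVT_CREATE
    return {"material": detail["material"], "qty": quantity,
            "completed": delivery_completed}

def confirm_order(process_order):
    """XI calls only this wrapper; both steps run back to back in one place,
    so the detail data cannot go stale between the two calls."""
    detail = get_detail(process_order)
    return goodsmvt_create(detail, quantity=0, delivery_completed=True)
```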
Regards,
Thorsten -
R1---Cloud(R4)----R2
|
R3(KS)
hi,
I set up 3 routers, with R3 being the KS: a very simple GET VPN. It is not working. The underlying reachability is fine.
any idea?
thanks,
Han
=====R3, KS====
crypto isakmp policy 10
encr aes
authentication pre-share
group 2
crypto isakmp key cisco address 1.1.14.1
crypto isakmp key cisco address 1.1.24.2
crypto ipsec transform-set mygdoi-trans esp-aes esp-sha-hmac
crypto ipsec profile godi-profile-getvpn
set security-association lifetime seconds 7200
set transform-set mygdoi-trans
crypto gdoi group getvpn
identity number 1234
server local
rekey retransmit 10 number 2
sa ipsec 1
profile godi-profile-getvpn
match address ipv4 199
replay counter window-size 64
interface Serial1/0
ip address 1.1.34.3 255.255.255.0
serial restart-delay 0
router ospf 1
log-adjacency-changes
network 0.0.0.0 255.255.255.255 area 0
ip forward-protocol nd
no ip http server
no ip http secure-server
access-list 199 permit ip host 1.1.1.1 host 2.2.2.2
access-list 199 permit ip host 2.2.2.2 host 1.1.1.1
============R1, GM============
crypto isakmp policy 10
encr aes
authentication pre-share
group 2
lifetime 1200
crypto isakmp key cisco address 1.1.34.3
crypto gdoi group getvpn
identity number 1234
server address ipv4 1.1.34.3
crypto map getvpn-map 10 gdoi
set group getvpn
interface Loopback0
ip address 1.1.1.1 255.255.255.0
interface FastEthernet0/0
no ip address
shutdown
duplex half
interface Serial1/0
ip address 1.1.14.1 255.255.255.0
serial restart-delay 0
crypto map getvpn-map
router ospf 1
log-adjacency-changes
network 0.0.0.0 255.255.255.255 area 0
=====R2, GM=====
crypto isakmp policy 10
encr aes
authentication pre-share
group 2
lifetime 1200
crypto isakmp key cisco address 1.1.34.3
crypto gdoi group getvpn
identity number 1234
server address ipv4 1.1.34.3
crypto map getvpn-map 10 gdoi
set group getvpn
interface Loopback0
ip address 2.2.2.2 255.255.255.0
interface Serial1/0
ip address 1.1.24.2 255.255.255.0
serial restart-delay 0
crypto map getvpn-map
router ospf 1
log-adjacency-changes
network 0.0.0.0 255.255.255.255 area 0
============
show crypto ipsec sa on R2
R2#sh cry ips sa
interface: Serial1/0
Crypto map tag: getvpn-map, local addr 1.1.24.2
protected vrf: (none)
local ident (addr/mask/prot/port): (2.0.0.0/255.0.0.0/0/0)
remote ident (addr/mask/prot/port): (1.0.0.0/255.0.0.0/0/0)
current_peer 0.0.0.0 port 848
PERMIT, flags={origin_is_acl,}
#pkts encaps: 0, #pkts encrypt: 0, #pkts digest: 0
#pkts decaps: 0, #pkts decrypt: 0, #pkts verify: 0
#pkts compressed: 0, #pkts decompressed: 0
#pkts not compressed: 0, #pkts compr. failed: 0
#pkts not decompressed: 0, #pkts decompress failed: 0
#send errors 0, #recv errors 0
local crypto endpt.: 1.1.24.2, remote crypto endpt.: 0.0.0.0
path mtu 1500, ip mtu 1500, ip mtu idb Serial1/0
current outbound spi: 0xB4D74B58(3034008408)
PFS (Y/N): N, DH group: none
inbound esp sas:
spi: 0xB4D74B58(3034008408)
transform: esp-aes esp-sha-hmac ,
in use settings ={Tunnel, }
conn id: 3, flow_id: SW:3, sibling_flags 80000040, crypto map: getvpn-map
sa timing: remaining key lifetime (sec): (4739)
Kilobyte Volume Rekey has been disabled
IV size: 16 bytes
replay detection support: N
Status: ACTIVE
inbound ah sas:
inbound pcp sas:
outbound esp sas:
spi: 0xB4D74B58(3034008408)
transform: esp-aes esp-sha-hmac ,
in use settings ={Tunnel, }
conn id: 4, flow_id: SW:4, sibling_flags 80000040, crypto map: getvpn-map
sa timing: remaining key lifetime (sec): (4739)
Kilobyte Volume Rekey has been disabled
IV size: 16 bytes
replay detection support: N
Status: ACTIVE
outbound ah sas:
outbound pcp sas:
protected vrf: (none)
local ident (addr/mask/prot/port): (1.0.0.0/255.0.0.0/0/0)
remote ident (addr/mask/prot/port): (2.0.0.0/255.0.0.0/0/0)
current_peer 0.0.0.0 port 848
PERMIT, flags={origin_is_acl,}
#pkts encaps: 0, #pkts encrypt: 0, #pkts digest: 0
#pkts decaps: 0, #pkts decrypt: 0, #pkts verify: 0
#pkts compressed: 0, #pkts decompressed: 0
#pkts not compressed: 0, #pkts compr. failed: 0
#pkts not decompressed: 0, #pkts decompress failed: 0
#send errors 0, #recv errors 0
local crypto endpt.: 1.1.24.2, remote crypto endpt.: 0.0.0.0
path mtu 1500, ip mtu 1500, ip mtu idb Serial1/0
current outbound spi: 0xB4D74B58(3034008408)
PFS (Y/N): N, DH group: none
inbound esp sas:
spi: 0xB4D74B58(3034008408)
transform: esp-aes esp-sha-hmac ,
in use settings ={Tunnel, }
conn id: 1, flow_id: SW:1, sibling_flags 80000040, crypto map: getvpn-map
sa timing: remaining key lifetime (sec): (4739)
Kilobyte Volume Rekey has been disabled
IV size: 16 bytes
replay detection support: N
Status: ACTIVE
inbound ah sas:
inbound pcp sas:
outbound esp sas:
spi: 0xB4D74B58(3034008408)
transform: esp-aes esp-sha-hmac ,
in use settings ={Tunnel, }
conn id: 2, flow_id: SW:2, sibling_flags 80000040, crypto map: getvpn-map
sa timing: remaining key lifetime (sec): (4739)
Kilobyte Volume Rekey has been disabled
IV size: 16 bytes
replay detection support: N
Status: ACTIVE
outbound ah sas:
outbound pcp sas:
R2#
First, I would say the sorry server should be the CSS2 VIP and not a server behind it.
This is a feasible solution.
The only important point is that CSS1 needs to see the response from the server, so you need to NAT traffic on CSS1 with an IP address that is part of the CSS1 subnet, so that the server behind CSS2 can send the response to CSS1 and not directly to the client.
You can do this with a group.
ie:
group natme
vip x.x.x.x
add destination service sorryserver1
active
Regards,
Gilles. -
Best practice for simple scenario
I'm working on a demo of file-to-file integration.
The mapping between the two systems is complicated.
For example:
There is a field (CODE) in SYSTEM 1. I have to perform an SQL query (SELECT ID FROM TABLE WHERE CODE=CODE_FROM_SYSTEM_1). This ID will be the ID for SYSTEM 2.
Which way is best?
1. Write the mapping logic in SYSTEM 2. It will be some service that performs the import.
I will use a cache in order to reduce the number of SQL queries
(for example, I perform this query only once: SELECT ID,CODE FROM TABLE; each ID is then obtained from the cache by CODE_FROM_SYSTEM_1).
2. Create some complicated business process that uses the JDBC adapter for executing SQL queries.
But what about caching?
What do you think? Which way will be the best?
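Option 1's single-query cache might look like this sketch (Python with an in-memory SQLite stand-in for the real database; the table and column names are the placeholders from the post, not a real schema):

```python
# Sketch of option 1: load the CODE -> ID mapping once, then map in memory.
import sqlite3  # stand-in for whatever database SYSTEM 2 actually uses

conn = sqlite3.connect(":memory:")
# "lookup_table" plays the role of the post's TABLE
conn.execute("CREATE TABLE lookup_table (CODE TEXT, ID INTEGER)")
conn.executemany("INSERT INTO lookup_table VALUES (?, ?)",
                 [("A", 1), ("B", 2)])

# One query up front (the post's SELECT ID,CODE FROM TABLE) ...
cache = {code: id_ for code, id_ in conn.execute("SELECT CODE, ID FROM lookup_table")}

def map_code(code_from_system_1):
    # ... then every lookup is a dictionary hit, no SQL per record
    return cache[code_from_system_1]
```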
Message was edited by: Sergey A
Hi Sergey,
If you can, use the first approach and leave only message processing for XI.
Also, if your files are big, not using BPM will be a good idea;
unless your messages are very small, then you can try the second approach.
Regards,
michal