Reg: Value Mapping - Recommended Max Entries/Size
Hi,
Just curious whether there is any recommendation on the maximum number of entries stored and retrieved in a value mapping.
Is it suitable for storing on the order of 5,000 or 10,000 entries? Also, is retrieval from such a big map efficient?
Since it is stored in the Java cache, is there any limitation on the size (number of entries)?
Anticipating your valuable inputs.
Thanks,
Sudharshan N A
Hi
There will be performance problems if too many entries are used in value mapping.
Value mapping replication is the better choice for a huge number of records.
Regards
Abhijit
Similar Messages
-
Reg: Value Mapping Replacement
hi friends
I need to design a scenario using value mapping. I have nearly 600 records to be replaced, but I observed in the configuration part that we have to enter all the values manually, which is not preferable for 600 records. So is there any alternative way to enter the values, like copy-paste from an Excel sheet?
regards
suman
Hi,
>so is there any alternative to place the values like copy paste from excel sheet.
there is something called value mapping mass replication
it's an interface for the value mapping table
and you can use it in your own ABAP report that will
process Excel files
but remember you will not see those values in ID
you can only view them from RWB, but they will work
in standard value mappings
but I'd suggest doing a normal lookup and not using value mapping tables at all
Regards,
Michal Krawczyk
https://mypigenie.com XI/PI FAQ -
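The mass-replication route above implies getting the 600 pairs out of the spreadsheet programmatically. A minimal sketch in plain Java, assuming the sheet is exported to a "source,target" CSV (the class name and format are illustrative, not part of the XI API):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class CsvValueMap {
    // Load source->target pairs from a CSV export ("source,target" per line).
    static Map<String, String> load(BufferedReader in) throws IOException {
        Map<String, String> map = new LinkedHashMap<>();
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.split(",", 2);
            if (parts.length == 2) {
                map.put(parts[0].trim(), parts[1].trim());
            }
        }
        return map;
    }
}
```

The resulting pairs would then be pushed through the value-mapping replication interface (e.g. from an ABAP report, as Michal suggests), not pasted into ID by hand.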
Hi all,
We got a doubt regarding value mapping,
Is there any size limit for value mapping tables that we are creating in Configuration?
Thanks and regards,
sasi
Hi,
since the value mapping is executed on the Java stack, the size limit depends on your JVM memory settings (heap size, basically).
But the memory demand comes from your payload. As a rule of thumb, the memory requirement will be 5-10 times your payload size.
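As a back-of-the-envelope illustration of that 5-10x rule of thumb (the numbers are assumptions for illustration, not measured values):

```java
public class HeapEstimate {
    // Rough heap budget for processing one payload, per the 5-10x rule of thumb.
    static long[] estimateBytes(long payloadBytes) {
        return new long[] { 5 * payloadBytes, 10 * payloadBytes };
    }

    public static void main(String[] args) {
        long payload = 10L * 1024 * 1024; // a 10 MB payload
        long[] range = estimateBytes(payload);
        // budget roughly 50-100 MB of heap headroom for this mapping step
        System.out.println((range[0] >> 20) + "-" + (range[1] >> 20) + " MB");
    }
}
```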
Cheers,
Aaron -
Max-stack-size - default_stksize
Hi,
First, sorry for my English ^^
I'm new to Solaris. I installed Solaris 10 on a Sun V490. I use Core Network.
Why, with the default settings, do daemons or processes of the OS fail like this:
Jun 17 14:50:10 unknown genunix: [ID 883052 kern.notice] basic rctl process.max-stack-size (value 8683520) exceeded by process 353
In this example I generated the error with the format command, but I also get this error with other commands or daemons like nscd.
I had the same problem with the value max-file-descriptor. The values set by projmod for the system project did not seem to take effect, so I used the "old" parameters rlim_fd_cur and rlim_fd_max. Now it's OK.
I found the parameter default_stksize in the Sun documentation. I put this in my /etc/system file:
set default_stksize=16384
At boot time I get no error message for the value, but the max-stack-size value is unchanged:
prctl -n process.max-stack-size 130
process: 130: /usr/sbin/nscd
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-stack-size
basic 8,28MB - deny 130
privileged 127MB - deny -
system 2,00GB max deny -
nscd is in the system project:
ps -p 130 -o project
PROJECT
system
Thanks in advance, any idea is welcome.
Guillaume
Hi Prasad,
Block size is the number of parallel processes executed in the background by the application. This is normally a configuration activity to be set up in line with Basis.
In Block size, we enter the number of objects to be processed per block by the CIF comparison/reconciliation during data selection in SAP APO or in the partner system.
If you increase the block size, the memory required also increases. This has a positive effect on performance. If processes are cancelled due to lack of memory, you can decrease the block size to save memory.
If you do not enter anything here, the system determines the block size dynamically from the number of objects that actually exist and the maximum number of work processes available.
Normally, when you execute a job in the background, it picks the application server automatically or uses a manually defined server. With parallel processing, one or more jobs of the same identity can be triggered at a time under this scenario by defining application servers. But too much parallel processing activity will affect performance.
One needs to define the parallel processes also to control system behaviour. The Parallel processing profile is defined for parallel processing of background jobs. You then assign these profiles to variants in the applications.
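The dynamic determination described above can be sketched as evenly splitting the objects across the available work processes. This formula is an illustration, not SAP's actual algorithm:

```java
public class BlockSize {
    // Split totalObjects evenly across the available work processes,
    // rounding up so no object is left without a block.
    static int dynamicBlockSize(int totalObjects, int maxWorkProcesses) {
        return (totalObjects + maxWorkProcesses - 1) / maxWorkProcesses;
    }
}
```

A smaller block size lowers per-process memory at the cost of more blocks to schedule, which matches the memory/performance trade-off described above.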
Regards
R. Senthil Mareeswaran. -
Modify entries of Value mapping table fromValue mapping replication
Hi,
I have successful uploaded value mapping table from external source(SAP Table) using value mapping replication. The entries are displayed in the cache monitoring.
Now what should I do if I have to modify the entries in the value mapping table? I tried uploading the new entries using the same GUID, agency, and scheme, but now both the previously uploaded and the new entries are displayed in cache monitoring.
But i want only the new entries to get reflected in the value mapping table. So kindly request anyone to help.
Regards,
Anup
Hi Anup,
To know more about the value mapping tools for the SAP Exchange Infrastructure (XI), please go thru the following link:
http://www.applicon.dk/fileadmin/filer/XI_Tools/ValueMappingTool.pdf
To get an idea as to what value mapping is, please go thru the following links:
http://help.sap.com/saphelp_nw04/helpdata/en/13/ba20dd7beb14438bc7b04b5b6ca300/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/f2/dfae3d47afd652e10000000a114084/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/2a/9d2891cc976549a9ad9f81e9b8db25/content.htm
Most of the links I have provided also help you get the step-by-step procedure for doing the same, and they also cover the procedure to implement certain advanced features.
Regards,
abhy -
Reg: Default value option - is it for value mapping????
Hi,
Recently I saw an option to map a default value, or to error out the message, when value mapping (???) doesn't fetch the conversion. I'm confused whether I'm referring to value mapping or some such functionality that comes as a standard pack... certainly not the options you have in receiver determination. Can anyone please enlighten me on what it is exactly? I'm left clueless as I barely remember where I saw that option.
Note: definitely not the default node in mapping either
thanks
P.S
> Recently I had seen an option of mapping a default value or error out the message when value mapping (???) doesn't fetch the conversion
Where can he find the option "map with default value" OR "error out the message"? Is it in value mapping or in some other functionality?
Regards,
Abhishek. -
Value mapping in the ERP system during IDOC creation
We have a PI File-IDOC scenario.
We can translate simple mappings such as currency or unit of measure in PI(7.1) using the Value Mapping function. For complicated mappings involving business logic (e.g. Tax code which is derived from multiple fields) we want to perform these mapping in the ERP (ECC6) system. Is there an approved generic SAP standard process where these value mappings can be done in the IDOC creation which is still valid when re-processing?
e.g. a specific BADI or enhancement point recommended for these mapping?
There might also be a requirement to add segments based on value mapping logic for example when a tax record segment is only required for a non-zero tax code which is mapped via business logic.
Can someone please point me in the right direction?
Thanks for your answers, but I assumed that using a user exit / enhancement point was obvious. What I'd like to know is a generic entry point where I can establish an IDoc enhancement framework: somewhere I can call a class containing methods linked to IDoc message types.
e.g. a call similar to my prototype (this is where, depending on the mappings, certain segments will need to be inserted, such as tax segments):
ASSIGN control-mestyp TO <mestyp>.
CALL METHOD (<mestyp>)
  EXPORTING
    control        = control
    data           = data
  IMPORTING
    have_to_change = have_to_change
    protocol       = protocol
    new_entries    = new_entries.
The "entry point" must be processed both at creation and at reprocessing. We are also dealing with inbound IDocs, not outbound. -
Value mapping values to be reflected dynamically
Hi All,
I have a scenario where R/3 entries should be reflected in value mapping dynamically for a particular field.
Can anyone explain what role the component SAP BASIS 7.00 (NS http://sap.com/xi/XI/System, MI ValueMappingReplication) plays? Does generating a Java proxy and deploying the JAR file in Visual Admin help, and after this, proceeding with an ABAP mapping to retrieve the value from R/3 dynamically using this MI?
Correct me if I am wrong.
I have to schedule this every week such that new entries get populated dynamically from R/3 in the ABAP mapping.
Appreciate if anyone has inputs.
Regards
Chaithanya
Hi All,
I am facing a problem while processing the message (the value mapping values are not reflected in cache monitoring).
I am getting the below error: HTTP_RESP_STATUS_CODE_NOT_OK ("INTERNAL")
The SAP:AdditionalText element contains a SAP J2EE Engine/7.00 HTML error page: "404 Not Found - The requested resource does not exist. Details: Go to main page (/MessagingSystem) of this application!"
I have created an ABAP report to fetch the entries from the T005 table. I am getting the entries in an internal table, but these entries are not reflected in cache monitoring.
Can anyone advise if you have come across this error?
Appreciate your inputs
Regards
Chaithanya -
Max fetch size in DataScrollerComponent
Hi
If I cap a max fetch size of 2000 on a query that I know will return 8000 rows, the DataScroller still reports 1-rangeSize of 8000?
If I try to navigate past row 2000, the component correctly sees the max fetch size and displays 1-rangeSize of 2000??
What's going on?
Thanks
Matt
Got to the bottom of this by replacing the code in the DataScroller component so we could view (and change) the source code. The component files are:
ScrollBarTag.java - used in <jbo:DataScroller> tag
DataScrollerComponent.jsp - the page that renders the dropdown list and next / previous values
DataScroller.java - the bean used in the jsp that contains state information about the current datasource.
When the DataScroller bean is initialised it sets up range values to enable calculation of entries in the scroller (e.g. 1-100 of 1000000). It derives the datasource from PageContext and looks at the RowSet directly.
The problem is that it uses the getEstimatedRowCount() API on the RowSet directly (i.e. not via the ViewObject).
The figure returned doesn't consistently obey a cap on the max fetch size and will be a nasty bottleneck if you are querying very large result sets. Mysteriously, if you try to navigate past the end of your maxFetchSize in the scroller, the second pass through the code gets the estimated row count correct and obeys the cap on maxFetchSize?
If you change the call to use the getRowCount() API instead, this figure seems to consistently obey the maxFetchSize cap.
We had to look into this code because this component will perform very badly against 6- and 7-figure ResultSets, as it does not take into account any caps that are set.
As a matter of interest, changing the code in the bean to ask the view object directly for its getEstimatedRowCount() seems to consistently obey any caps!?!:
ds = Utils.getDataSourceFromContext(pageContext, dsName);
RowSet rs = ds.getRowSet();
ApplicationModule am = rs.getApplicationModule();
ViewObject dvo = am.findViewObject(Constants.SOME_DVO);
if (dvo != null) {
    System.out.println("dvo " + dvo.getName());
    System.out.println("est rows " + dvo.getEstimatedRowCount());
    System.out.println("rows " + dvo.getRowCount());
}
This code gets the figure right, and if run before rs.getEstimatedRowCount() it seems to force the correct value in the RowSet.
Matt -
Sophos AV max scanning size / timeout
Hi,
I haven't found any changeable settings for max. scanning size or scanning timeout on a S160 v7.1.3 with Sophos AV.
In the GUI under "Security Services --> Anti-Malware" it shows "Object Scanning Limits: Max. Object Size: 32 MB".
I'm not able to change it. This parameter does not seem to belong to Sophos AV.
I can change it only after enabling Webroot or McAfee first.
The CLI has no commands for adjusting AV settings.
How can I control the max scanning size or scanning timeout with Sophos AV?
Does it have fixed values for these?
Does anyone have an idea how it works?
Kind regards,
Manfred
With administrator rights, the value should be editable. The object size is applied to all scanners that have been licensed and enabled on the appliance.
~Tim -
N: 1 relationships in Value Mapping 3.0
Hi,
in our Value Mapping we need to process n:1 relationships between values. E.g. 2 different codes for reason-for-rejection in the sending system need to be mapped to 1 value in the receiving system.
As we see it now, and as also mentioned somewhere in this forum, this doesn't work. Is this correct?
As suggested in this forum one option would be to add the sending value in the receiving value and use mapping to trim the value. So e.g.
Agency: SystemX Agency: SystemY
Scheme: Augru Scheme: Augru
Value Value
A1 B
A2 B
would become
A1 A1_B
A2 A2_B
Of course, this would work, but just one-way (from SystemX to SystemY). From SystemY to SystemX this would not work, since only value B is available at that time. If you apply the same logic, so implement another value, this would eventually mean:
Agency: SystemX Agency: SystemY
Scheme: Augru Scheme: Augru
Value Value
A1 A1_B
A2 A2_B
C_A1 C_B
D_A2 D_B
So you end up with a lot of values to maintain.
Another option would be to use different Value Mapping scheme's, but this would influence the reusability of the mapping.
Any suggestions?
Thanks a lot!
When you have an n:1 mapping and want to use it in both directions, you need logic for how the values are determined when there are several entries to choose from.
In your example:
A1 -> B
A2 -> B
If you come from the other side, what value should B have?
For the default value you could use a unique prefix (eg: 1_) and add that prefix to the value before calling the value mapping.
Let us assume, you have maintained the table like this:
A1 -> 1_B
A2 -> 2_B
A3 -> 1_C
A4 -> 1_D
A5 -> 2_D
A6 -> 3_D
so you have B ->A1, C -> A3, D -> A4
Regards
Stefan -
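Stefan's prefix scheme can be sketched in plain Java. The map here stands in for the XI value-mapping table; class and method names are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrefixValueMap {
    // Stand-in for the value-mapping table, maintained as in Stefan's example.
    static final Map<String, String> FORWARD = new LinkedHashMap<>();
    static {
        FORWARD.put("A1", "1_B"); FORWARD.put("A2", "2_B");
        FORWARD.put("A3", "1_C");
        FORWARD.put("A4", "1_D"); FORWARD.put("A5", "2_D"); FORWARD.put("A6", "3_D");
    }

    // SystemX -> SystemY: map, then strip the "n_" prefix.
    static String toTarget(String src) {
        String mapped = FORWARD.get(src);
        return mapped == null ? src : mapped.substring(mapped.indexOf('_') + 1);
    }

    // SystemY -> SystemX: prepend the default prefix "1_" and search backwards.
    static String toSource(String tgt) {
        String key = "1_" + tgt;
        for (Map.Entry<String, String> e : FORWARD.entrySet()) {
            if (e.getValue().equals(key)) {
                return e.getKey();
            }
        }
        return tgt;
    }
}
```

In the real scenario the prefix would be added/stripped in the message mapping around the value-mapping call, so the table itself stays reusable in both directions.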
Table lookup instead of fixed value mapping
Hi Folks,
My current scenario is that I have used fixed value mapping to map a single target field. These details are actually maintained in the TP_Code table in R/3 (a sample table).
TradingPartner:SAP:Short Text
PA:PA:Package
PL:PAL:Pallet
The table contains 20 entries.
I am thinking that fixed value mapping is not a good option, since there might be future changes in the TP_Code table.
Is an RFC lookup a good option? But I am not sure which RFC to use, and whether there is any UDF available to parse the XML payload.
Please advise.
Thanks a ton!
Thanks so much for the links!
I managed to use and run the RFC lookup to an R3 table using JCo. However, I am not getting the right result.
I am getting the result in this format -> 800,PA,PAC,Package
The correct output is PAC, which is the third field. Can this only be achieved by using a Java tokenizer?
The code that I have used is as follows:
import com.sap.mw.jco.*;
//write your code here
String DBTABLE = a;
String WHERE_CLAUSE = b + " = '" + c + "'";
String sapClient = "xxx";
String hostName = "nnnxxx";
String systemNumber = "xx";
String userName = "nnnnn";
String password = "******";
String language = "EN";
JCO.Repository mRepository;
JCO.Client mConnection = JCO.createClient(
sapClient,
userName,
password,
language,
hostName,
systemNumber );
// connect to SAP
mConnection.connect();
// create repository
mRepository = new JCO.Repository( "SAPLookup", mConnection );
// Create function
JCO.Function function = null;
IFunctionTemplate ft = mRepository.getFunctionTemplate("RFC_READ_TABLE");
function = ft.getFunction();
// Obtain parameter list for function
JCO.ParameterList input = function.getImportParameterList();
// Pass function parameters
input.setValue( DBTABLE, "QUERY_TABLE");
input.setValue( "," , "DELIMITER");
//Fill the where clause of the table
JCO.ParameterList tabInput = function.getTableParameterList();
JCO.Table inputTable = tabInput.getTable("OPTIONS");
inputTable.appendRow();
inputTable.setValue(WHERE_CLAUSE,"TEXT");
mConnection.execute( function );
JCO.Table valueSet = function.getTableParameterList().getTable("DATA");
String resultSet = valueSet.getString("WA");
mConnection.disconnect();
return resultSet; -
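No tokenizer is strictly needed for the question above: since RFC_READ_TABLE was called with "," as the DELIMITER, the returned WA row can be split directly with String.split (sample row hardcoded here for illustration):

```java
public class RowSplit {
    public static void main(String[] args) {
        // Sample WA row returned by RFC_READ_TABLE with "," as DELIMITER.
        String resultSet = "800,PA,PAC,Package";
        // Split the comma-delimited row and pick the third field.
        String[] fields = resultSet.split(",");
        System.out.println(fields[2]); // prints PAC
    }
}
```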
Value mapping : target value need to be blank
Hi,
I have a requirement like below (just example).
Source Target
A Z
B Y
C X
N BLANK or ""
When I did this using value mapping it works fine for all values except for source value 'N'.
When the source is N then the target should have value space or blank. Is this not supported by value mapping?
For value N, do I have to explicitly check 'if value N then make it blank'?
> For source value N I am getting target as N, though in value mapping I have mentioned the target as blank.
This is the standard behavior of value mapping: when a value is not present in the value mapping table, it returns the input value as-is. So when you declare N with a blank value as the result, value mapping won't consider it a valid entry in the value mapping table. So now you have to take care of N explicitly. -
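The explicit handling for N can be sketched as a small helper. A plain Map stands in for the value-mapping table here; in a real PI UDF the lookup call itself would differ:

```java
import java.util.Map;

public class BlankFallback {
    // Standard value-mapping behavior echoes unknown inputs back unchanged,
    // so entries that must map to blank need explicit handling first.
    static String map(Map<String, String> table, String source) {
        if ("N".equals(source)) {
            return ""; // explicit blank for N
        }
        return table.getOrDefault(source, source); // echo-back otherwise
    }
}
```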
Value Mapping Replication Vs RFC Lookup API Usage Pros and Cons
Hi Ananth,
Looking at the options you have, Value Mapping Replication might be a better option, because, by going the second route, you are just trying to simulate how a Value mapping replication works in reality.
But, when the VMR framework is readily available, why do we need to simulate it by using Java code?
Moreover, the Java code approach will be triggered once per message, whereas the VMR tables in the Java runtime get updated only when an entry is added/modified on the SAP side (which I feel is less frequent than the frequency of the messages being sent through XI).
Regards,
Ravi Kanth Talagana
Hi,
There are more issues you need to consider here:
1) Is this static data or will it change often (the lookup data)?
2) Is it a good approach to keep it in the middle system? If so, who takes ownership of managing it, i.e. updates, deletes etc. from time to time?
3) The tables don't seem small to me for value mapping.
I am not a big expert, but I am of the opinion to go the RFC way, as it will keep loose coupling in place.
However, your reason for not using the graphical RFC lookup does not hold: you can pass all your line items as a single object, which means making only ONE call to ECC and getting the return the same way. Think along the lines of an internal table as an object.
regards, -
Hello,
I am implementing value mapping in XSLT using a template.
I call the following template to retrieve the value from the ID...
<xsl:template name="NonMerchandiseSaleValueMapping">
<xsl:param name="GMNonMerchandiseType"/>
<xsl:value-of select="vm:executeMapping( 'GMNonMerchandiseType', 'GMNonMerchandiseTypeSchema', $GMNonMerchandiseType, 'IXRetailPOSLogLineItemType', 'IXRetailPOSLogLineItemTypeSchema')"/>
</xsl:template>
It works fine for one entry that was added long back. I added a few new entries in the ID value mapping table; none of them works. Is there any step we need to do after entering new data in the value mapping table? I can see the values in cache monitoring as well.
regards
Grewal
Hey
In both IR and ID go to Environment -> Cache Notifications.
Do you see any error (red icon) for the cache update? It must be all green for the cache to refresh. If there are any red icons, click them and manually do a cache refresh (by clicking the small cache-refresh icon).
Also refresh the cache in RWB.
If all the above does not work, you can re-import the XSLT mapping in IR and see if it works.
Thanks
Aamir