Data load failed at infocube level
Dear Experts,
I have data loads from the ECC source system for DataSource 2LIS_11_VAITM to 3 different data targets in the BI system. The data load is successful up to the PSA, but when it comes to loading the data targets, the load succeeds for 2 of them and fails at one data target, i.e. the InfoCube. I got the following error message:
Error 18 in the update
Diagnosis
The update delivered error code 18.
Procedure
You can find further information on this error in the error message of
the update.
Here I tried to activate the update rules once again by executing the program, and tried to reload using the reconstruction facility, but I get the same error message.
Kindly help me analyze the issue.
Thanks & Regards,
Mannu
Hi,
Here I tried to trigger a repeat delta, expecting that the error would not repeat, but then I encountered the following issues:
1. the data load status in RSMO is red, whereas in the data target the status is showing green
2. when I try to analyze the PSA from RSMO, transaction PSA gives me a dump with the following.
The following analysis is from transaction ST22:
Runtime Errors GETWA_NOT_ASSIGNED
Short text
Field symbol has not yet been assigned.
What happened?
Error in the ABAP Application Program
The current ABAP program "SAPLSLVC" had to be terminated because it has
come across a statement that unfortunately cannot be executed.
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact your SAP system
administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look
at and manage termination messages, and you can also
keep them for a long time.
Error analysis
You attempted to access an unassigned field symbol
(data segment 32821).
This error may occur if
- You address a typed field symbol before it has been set with
ASSIGN
- You address a field symbol that pointed to the line of an
internal table that was deleted
- You address a field symbol that was previously reset using
UNASSIGN or that pointed to a local field that no
longer exists
- You address a global function interface, although the
respective function module is not active - that is, is
not in the list of active calls. The list of active calls
can be taken from this short dump.
How to correct the error
If the error occurs in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"GETWA_NOT_ASSIGNED" " "
"SAPLSLVC" or "LSLVCF36"
"FILL_DATA_TABLE"
Here I have activated the include LSLVCF36,
reactivated the transfer rules and update rules, and retriggered the data load.
But still I am getting the same error...
Could anyone please help me to resolve this issue....
Thanks a lot,
Mannu
Similar Messages
-
Infocube data loads fail with UNCAUGHT_EXCEPTION dump after BI 7.01 upgrade
Hi folks,
We're in the middle of upgrading our BW landscape from 3.5 to 7.01. The Support Package we have in the system is SAPKW70103.
Since the upgrade, all data loads going to InfoCubes are failing with the following
ABAP dump UNCAUGHT_EXCEPTION
CX_RSR_COB_PRO_NOT_FOUND.
Error analysis
An exception occurred which is explained in detail below.
The exception, which is assigned to class 'CX_RSR_COB_PRO_NOT_FOUND', was not
caught and
therefore caused a runtime error.
The reason for the exception is:
0REQUEST is not a valid characteristic for InfoProvider
Has anyone come across this issue, and if so, what resolutions did you adopt to fix it? Appreciate any help in this regard.
Thanks
Mujeeb
Hi experts,
I have exactly the same problem.
But I can't find a test for InfoObjects in RSRV. How can I repair the InfoObject 0REQUEST?
Thx for all answers!
kind regards
Edited by: Marian Bielefeld on Jul 7, 2009 8:04 AM
-
Data load failing in update rules
Hello All and happy New Year!!
Please provide assistance to solve this error - a full data load into an InfoCube is failing in the update rules with the error "ABORT was set in the customer routine 9998".
Your assistance will be highly appreciated.
Regards,
Keith
Hi Keith,
You need to check the start routine in the update rules - this is what the message is referring to. You can also debug the update rules if you like; in the debugger screen, search for 9998 to get to the Start Routine code.
Hope this helps... -
Data load failed while loading data from one DSO to another DSO..
Hi,
On SID generation data load failed while loading data from Source DSO to Target DSO.
The following errors are occurring:
Value "External Ref # 2421-0625511EXP " (HEX 450078007400650072006E0061006C0020005200650066
Error when assigning SID: Action VAL_SID_CONVERT, InfoObject 0BBP
So I am not getting WHY it was successful in one DSO, i.e. the source, but failed in the other DSO, i.e. the target.
While analyzing, I checked that "SIDs Generation upon Activation" is checked in the source DSO but not in the target DSO. Is that the reason it failed?
Please explain..
Thanks,
Sneha
Hi,
I assume your data flow has been designed so that the 1st DSO acts as a staging device, all transformation rules and routines are maintained between the 1st and 2nd DSO, and "SID Generation upon Activation" is set on the 2nd DSO. That way the data in the 1st DSO is the same as your source system data, since you are not applying any transformation rules or routines there, which helps avoid data load failures.
Please analyze the following
Have you loaded master data before transaction data? If not, please do that first.
Go to the properties of the first DSO and check whether "SID Generation upon Activation" is maintained there (I guess it may not be).
Go to the properties of the 2nd DSO and check whether "SID Generation upon Activation" is maintained there (I expect it is).
This may be the reason.
Also check whether there is any special character involvement in your transaction data (even a lowercase letter).
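One quick way to spot such characters: the HEX string in the SID error message is the UTF-16 (little-endian) encoding of the rejected value, so decoding it outside the system makes lowercase letters and trailing blanks visible. A small Python sketch, purely for inspection (the hex below is the fragment from the error message above, which the message truncates mid-character):

```python
# The HEX dump in the BW error message is the UTF-16 (little-endian)
# encoding of the rejected value.
hex_dump = "450078007400650072006E0061006C0020005200650066"
data = bytes.fromhex(hex_dump)
if len(data) % 2:      # the message cut the dump off mid code unit
    data += b"\x00"    # pad the final UTF-16 code unit
value = data.decode("utf-16-le")
print(repr(value))     # 'External Ref'
```

The lowercase letters visible in the decoded value ("External Ref ...") are exactly the kind of characters SID generation rejects unless they are permitted via RSKC.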
Regards
BVR -
Data load failing with a short-dump
Hello All,
There is a data load failing with the following short-dump:
Runtime Errors UNCAUGHT_EXCEPTION
Except. CX_FOEV_ERROR_IN_FUNCTION
Full text:
"UNCAUGHT_EXCEPTION" CX_FOEV_ERROR_IN_FUNCTIONC
"CL_RSAR_FUNCTION==============CP" or "CL_RSAR_FUNCTION==============CM004"
"DATE_WEEK"
It seems as if it is failing in a function module - please advise.
Regards,
Keith Kibuuka
Did you read OSS [Note 872193 - Error in the formula compiler|https://service.sap.com/sap/support/notes/872193] or OSS [Note 855424 - Runtime error UNCAUGHT_EXCEPTION during upload|https://service.sap.com/sap/support/notes/855424]?
Regards -
Alert Message When Data Load Fails
Hi Friends,
I have three process chains (p1, p2, p3), all for transactional data loading (PSA -> ODS -> cube). Whenever a data load fails, it should send a message to my company mail id. Is there any functionality to do this?
I am not very familiar with RSPCM (Process Chain Monitor). If anyone knows it, please tell me how far it supports my requirement.
thank in advance
Sam
Hi Sam,
welcome to SDN ...
you can add a 'create message' by right-clicking the process type; this will lead you to the mail sending settings.
see thread
Getting Failed process chain info through email
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/b0afcd90-0201-0010-b297-9184845346ca
you may need to setup sapconnect for email, take a look
http://help.sap.com/saphelp_webas620/helpdata/en/2b/d925bf4b8a11d1894c0000e8323c4f/frameset.htm
http://help.sap.com/saphelp_webas620/helpdata/en/af/73563c1e734f0fe10000000a114084/content.htm
hope this helps. -
Aggregating data loaded into different hierarchy levels
I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE, dimensioned by a time dimension (parent-child type).
I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
DEFINE all_but_q4 VALUESET time
LIMIT all_but_q4 TO ALL
LIMIT all_but_q4 REMOVE 'Q4'
Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
RELATION time.r PRECOMPUTE (all_but_q4)
How do I do this for more than one dimension?
Below I wrote my case of study:
DEFINE T_TIME DIMENSION TEXT
T_TIME
200401
200402
200403
200404
200405
200406
200407
200408
200409
200410
200411
2004
200412
200501
200502
200503
200504
200505
200506
200507
200508
200509
200510
200511
2005
200512
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
2004 NA
200412 2004
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
2005 NA
200512 2005
DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
EQ -
aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 4,00 ---> here it's right! but...
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00 ---> here it should be 30,00, not 10,00
200512 NA
DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
T_TIME PRUEBA2_IMPORTE_STORED
200401 NA
200402 NA
200403 NA
200404 NA
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 NA
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00
200512 NA
DEFINE OBJ262568349 AGGMAP
AGGMAP
RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
AGGINDEX NO
CACHE NONE
END
DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
T_TIME_AGGRHIER_VSET1 = (H_TIME)
DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
T_TIME_AGGRDIM_VSET1 = (2005)
Regards,
Mel.
Mel,
There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
1. Data is loaded symmetrically at uniform mixed levels. Example would include loading data at "quarter" in historical years, but at "month" in the current year, it does /not/ include data loaded at both quarter and month within the same calendar period.
= solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
2. Data is loaded at both a detail level and its ancestor, as in your example case.
= the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing that it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, which are then added as children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it for a result of 10, the aggregate command would recalculate based on january and february for a result of 20.
To solve your usage case I would suggest a hierarchy that looks more like this:
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
200412 2004
2004_SELF 2004
2004 NA
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
200512 2005
2005_SELF 2005
2005 NA
Resulting in the following cube:
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
200412 NA
2004_SELF NA
2004 4,00
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
200512 NA
2005_SELF 10,00
2005 30,00
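The effect of the 'self' node on the 2005 total can be sketched numerically. This is a hypothetical Python illustration of a SUM aggregation with NA skipping (NASKIP YES), not OLAP DML:

```python
# SUM with NASKIP YES: NA (None) children are skipped.
def aggregate_sum(children):
    vals = [v for v in children if v is not None]
    return sum(vals) if vals else None

# Without a self node, the aggregate command recomputes 2005 from its
# month children only, and the preloaded 10,00 at the 2005 level is lost:
months = {"200501": 5.0, "200502": 15.0}   # all other months are NA
print(aggregate_sum(months.values()))       # 20.0

# With 2005_SELF as an extra child carrying the preloaded value,
# the recomputation yields the intended total:
children = {**months, "2005_SELF": 10.0}
print(aggregate_sum(children.values()))     # 30.0
```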
3. Data is loaded at a level based upon another dimension; for example product being loaded at 'UPC' in EMEA, but at 'BRAND' in APAC.
= this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
= often requires the use of ALLOCATE in order to push the data to the leaves in order to correctly calculate the aggregate values during aggregation. -
Material Master Data load failed
Hi All,
I am loading material master data. I have around 8055 records in R/3. The load failed with the error "Non-updated IDocs found in Business Information Warehouse", asking me to process the IDocs manually. I checked WE05 for the IDocs and found 3 IDocs with status 64 (IDoc ready to be transferred to application).
But when I checked the manage screen of 0MATERIAL, I could see the transferred and updated records as 8055. I even checked the data in 0MATERIAL and found that all 8055 records have already been uploaded.
Why is it still showing an error (load failed) even when all the data has been uploaded? What should I do now?
Best Regards,
Nene.
Hi Nene/Sankar,
for material texts with no language selection, please check Note 546346 - Material texts: no selection on languages,
and for why the IDocs hang, check e.g. Note 561880 - Requests hang because IDocs are not processed, and Note 555229 - IDocs hang in status 64 for tRFC with immediate processing.
hope this helps.
Note 546346 - Material texts: no selection on languages
Summary
Symptom
When loading material texts from R/3 into the Business Information Warehouse, you cannot select languages.
Other terms
DataSource, 0MATERIAL_TEXT, InfoSource, InfoObject, 0MATERIAL, SPRAS, LANGU, 0LANGU, MAT_BW, extraction, selection field, delta extraction, ALE, delta, change pointer
Reason and Prerequisites
As of PI/PI-A 2001_2 Support Package 6, the selection option for the 'SPRAS' field in the OLTP system was undone in the 0MATERIAL_TEXT DataSource. (Refer here to note 505952).
Solution
As of PI/PI-A 2002_1 Support Package 4, the 'SPRAS' field is provided for selection again with the 0MATERIAL_TEXT DataSource in the OLTP system. When loading the material texts from BW, the language is still not provided for selection in the scheduler, instead all languages of the language vector in BW are implicitly requested from the source system. However, during the delta update in the source system, the change pointers for all languages in the source system are now set to processed, regardless of whether the language was requested by BW or not.
Import
Support Package 4 for PI/PI-A 2002_1_31I - PI/PI-A 2002_1_45B
Support Package 4 for PI 2002_1_46B - PI 2002_1_46C
Support Package 3 for PI 2002_1_470
In transaction RSA5, copy the D version of the 0MATERIAL_TEXT DataSource to the A version.
Then replicate the DataSources for 0MATERIAL in the BW system. The 'SPRAS' field is then flagged again as a selection field in the transfer structure of the 0MATERIAL InfoSource. The transfer rules remain unchanged. Activate the transfer rules and perform a delta initialization again.
Note 561880 - Requests hang because IDocs are not processed
Symptom
Data extraction in a BW or SEM BW system from an external OLTP System (such as R/3) or an internal (via DataMart) OLTP System hangs with the 'Yellow' status in the load monitor.
After a timeout, the request status finally switches to 'Red'.
Information IDocs with the status '64' are displayed in the 'Detail' tab.
Other terms
IDoc, tRFC, ALE, status 64
Reason and Prerequisites
Status information on load requests is transferred in the form of IDocs.
IDocs are processed in the BW ALE model using tRFC in online work processes (DIA).
IDoc types for BW (RSINFO, RSRQST, RSSEND) are processed immediately.
If no free online work process is available, the IDocs remain and must then be restarted to transfer the request information. With the conversion to asynchronous processing, it can often happen that no DIA is available for tRFC for a short period of time (see note 535172).
The IDoc status 64 can be caused by other factors such as a rollback in the application updating the IDocs. See the relevant notes.
Furthermore, you can also display these IDocs after the solution mentioned below, however, this is only intended as information.
You must therefore analyze the status text.
Solution
We recommend asynchronous processing for Business Warehouse.
To do this, you need the corrections from note 535172 as well as note 555229 or the relevant Support Packages.
The "BATCHJOB" entry in the TEDEF table mentioned in note 555229 is generated automatically in the BW system when you import Support Package 08 for BW 3.0B (Support Package 2 for 3.1 Content).For other releases and Support Package levels, you must manually implement the entry via transaction SE16.
Depending on the Basis Support Package imported, you may also have to implement the source code corrections from note 555229.
The following basic recommendations apply in avoiding bottlenecks in the dialog processing and checking of IDocs for BW:
1. Make sure there is always sufficient DIA, that is, at least 1 DIA more than all other work processes altogether, for example, 8 DIA for a total of 15 work processes (see also note 74141).
TIP: 2 UPD processes are sufficient in BW; BW does not need any UP2.
2. Unprocessed Info IDocs should be processed manually within the request in BW; in the 'Detail' tab, you can start each IDoc again by selecting 'Update manually' (right mouse button).
3. Use BD87 to check the system daily (or whenever a problem arises) for IDocs that have not yet been processed and reactivate if necessary.
However, make sure beforehand that these IDocs can actually be assigned to the current status of requests.
TIP: Also check transaction SM58 for problematic tRFC entries.
IMPORTANT: Notes 535172, 555229 and the above recommendations are relevant (unless otherwise specified) both for BW and for SAP source systems. -
Data load fails from DB table - No SID found for value 'MT ' of characteristic
Hi,
I am loading data into BI from an external system (oracle Database).
This system has different units like BG, ROL, MT (for Meter), while these units are not maintained in R3/BW. There they are BAG, ROLL, and M respectively.
Now User wants a "z table" to be maintained in BW, which has "Mapping between external system Units and BW units".
So that the data load does not fail: the source system keeps its own units, but at load time the BW units are loaded.
For example -
Input Unit (BG) --> Loaded Unit in BW (BAG)
Regards,
Saurabh T.
Hello,
The table T006 (BW side) has all the UoM; the only thing is to make sure that all the source system UoM are maintained in it. It also has fields for source units and target units, so as you mentioned, BG from the source will become BAG. See the fields MSEHI and ISOCODE in the T006 table.
If you want to convert to other units then you need to implement Unit Conversion.
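If you do go the z-table route, the lookup the load routine would perform amounts to a simple keyed mapping with a pass-through default for units that are already valid in BW. A minimal sketch (table name and contents hypothetical, shown in Python rather than ABAP):

```python
# Hypothetical contents of the mapping z-table:
# external-system unit -> BW unit
ZUNIT_MAP = {"BG": "BAG", "ROL": "ROLL", "MT": "M"}

def map_unit(external_unit):
    """Return the BW unit for an external-system unit.
    Trailing blanks are stripped; unknown units pass through
    unchanged so already-valid BW units are not touched."""
    return ZUNIT_MAP.get(external_unit.strip(), external_unit)

print(map_unit("BG"))    # BAG
print(map_unit("MT "))   # M
print(map_unit("KG"))    # KG (already a valid BW unit)
```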
Also see
[How to Report Data in Alternate Units of Measure|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b7b2aa90-0201-0010-a480-a755eeb82b6f]
Thanks
Chandran -
Hi Experts,
One of my master data full loads failed; see the error message below from the status tab:
Incorrect data records - error requests (total status RED)
Diagnosis
Data records were recognized as incorrect.
System response
The valid records were updated in the data target.
The request was marked as incorrect so that the data that was already updated cannot be used in reporting.
The incorrect records were not written in the data targets but were posted retroactively under a new request number in the PSA.
Procedure
Check the data in the error requests, correct the errors and post the error requests. Then set this request manually to green.
Can anyone please give me a solution?
Thanks
David
Hi,
I am loading the data from the application server; I don't have R/3 for this load. The message below is showing in the status tab:
Request still running
Diagnosis
No errors could be found. The current process has probably not finished yet.
System response
The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
and/or
the maximum wait time for this request has not yet run out
and/or
the batch job in the source system has not yet ended.
Current status
Thanks
David -
Hi All,
In my process, data loads from source to target. For example, if I am loading 100 records and the 50th record has an error, then once the error record occurs the whole load should fail, the records should roll back, and the error record should move to an error table. Can anyone guide me on this process?
Thanks in advance....
Hi,
Test this ..
Set flow control to YES
Set Maximum number of errors allowed to 0 (zero)
Run the interface -
Data Load Fails due to duplicate records from the PSA
Hi,
I have loaded the master data twice into the PSA. Then I created the DTP to load the data from the PSA to the InfoProvider. The data load is failing with the error "duplicate key/records found".
Is there any setting I can configure so that, even though I have duplicate records in the PSA, I can successfully load only one set of data (without duplicates) into the InfoProvider?
How can I set up the process chains to do so?
Your answer to the above two questions is appreciated.
Thanks,
Hi Sesh,
There are 2 places where the DTP checks for duplicates.
First, it checks previous error stacks. If the records you are loading are still contained in the error stack of a previous DTP run, it will throw the error at this stage. In this case you will first have to clean up the previous error stack.
The second stage cleans up duplicates across data packages, provided the option is set in your DataSource. But note that this will not solve the problem if you have duplicates within the same data package. In that case you can do the filtering yourself in the start routine of your transformation.
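The start routine filtering described here boils down to keeping the first record per semantic key, the same effect as SORT followed by DELETE ADJACENT DUPLICATES COMPARING <key> in ABAP. A sketch of that logic, with hypothetical field names and shown in Python for illustration:

```python
def drop_duplicates(package, key_fields):
    """Keep the first record for each key, mimicking
    SORT ... / DELETE ADJACENT DUPLICATES COMPARING <key>
    in a transformation start routine."""
    seen = set()
    result = []
    for rec in package:
        key = tuple(rec[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            result.append(rec)
    return result

package = [
    {"MATERIAL": "A01", "PLANT": "1000", "QTY": 5},
    {"MATERIAL": "A01", "PLANT": "1000", "QTY": 5},  # duplicate key
    {"MATERIAL": "B02", "PLANT": "1000", "QTY": 3},
]
print(len(drop_duplicates(package, ["MATERIAL", "PLANT"])))  # 2
```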
Hope this helps,
Pieter -
Direct vs Indirect data load from ODS - InfoCube
We know that there are two options to load data from ODS to InfoCube:
1. Direct load:
If the ODS setting "Update data targets from ODS object automatically" is checked, then when data is loaded to the ODS it is also loaded to the InfoCube. This way, only one InfoPackage is needed to load data to the ODS, and the InfoCube is filled with data automatically.
2. Indirect load:
If the ODS setting "Update data targets from ODS object automatically" is NOT checked, then not only is the InfoPackage that loads the data to the ODS needed, but an InfoPackage that loads data from the export DataSource of the ODS to the InfoCube must also be created in order to load the data from the ODS to the InfoCube.
I wonder what the pros and cons are of the above two loading methods from ODS to InfoCube.
All BW expert input is welcome!
Hi Kevin,
Direct loads, or rather automated loads, are usually used when you need to load data automatically into the final data target, with no dependencies. This process involves less maintenance, and you need to execute only a single InfoPackage to send data to the final data target.
But indirect loads are usually used when you have a number of dependencies and this ODS object is one of the objects in a process chain. So if you require this ODS data load to wait until some other event has been executed, then indirect loads are used through process chains.
Regards,
JAsprit -
Time-dependent master data load fail (Invalid time interval)
Hi,
I have IO 0HRPOSITION, which is time-dependent master data.
When I load the master data, I have maintained from-date 01.03.2007 to end date 01.12.999 in the scheduler.
But there are some position records in the HR system which have a from-date earlier than 01.03.2007, and because of that the master data load is failing every day.
How can I load only those records which fall between from-date 01.03.2007 and end date 01.12.999?
Thanks
Pramod
Hi Pramod,
I think you can put a selection in the InfoPackage.
Thanks -
Hi Gurus,
My master data load is failing because of this error.
Record 20 :0CAT_GROUP : Data record 20 ('000200000012 '): Version 'D $$OSS001 ' is not valid
I went back to PSA and found out that I have 6 error records with values $$OSS001 .
0CAT_GROUP is an attribute of a master data object.
I checked the master data extractors; these values are coming from ECC.
Can someone tell me how to get rid of these values and reload the master data without errors? I am using BW 3.5.
Any help would be appreciated.
Neo
You probably have to make the appropriate changes to the settings in RSKC to allow certain characters. Otherwise, the best way is to stop these at the source, if they really are 'bad' data.
You could also visit the link below:
/people/sap.user72/blog/2006/07/23/invalid-characters-in-sap-bw-3x-myths-and-reality-part-2
Alternatively, you can write a routine in the transfer rules/transformation using
CALL FUNCTION 'RSKC_ALLOWED_CHAR_GET'
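For orientation, the kind of check such a routine performs can be pictured as follows. This is a simplified Python sketch of BW's permitted-character rule, not the actual implementation; the default set shown is approximate, and RSKC can extend it:

```python
# Approximation of BW's default permitted characters (RSKC can add more).
ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 !\"%&'()*+,-./:;<=>?_")

def is_valid(value):
    """Reject values BW would refuse: '#' or '!' in the first
    position, or any character outside the permitted set."""
    if value[:1] in ("#", "!"):
        return False
    return all(c in ALLOWED for c in value)

print(is_valid("$$OSS001"))  # False -- '$' is not in the permitted set
print(is_valid("OSS001"))    # True
```

This is why the '$$OSS001' records from the error above are rejected: '$' is not in the default permitted set, so the value must either be cleaned in the source, filtered in a routine, or explicitly allowed via RSKC.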