Initial Load Performance Decrease
Hi colleagues,
We noticed a huge decrease in initial load performance after installing an application on the PDA.
In our first test we downloaded one data object of nearly 6.6 MB, corresponding to 30,000 records with eight fields each. The initial load to the PDA took only 2 minutes.
We performed a second test with the same PDA after a reinstallation and a new device ID. The difference here is that we installed an MI application related to the same data object. The same amount of data was sent to the PDA. It took 3 hours to download.
In a third test we changed the application so that the related data object was not assigned to it. In this case the download took 2 minutes again.
In other words, an application with the data object assigned results in a huge decrease in initial load performance.
In both cases we used a direct connection to our LAN.
Our PDA specs:
- Windows Mobile 6 Classic
- Processor: Marvell PXA310 at 624 MHz
- 64MB RAM, 256MB flash ROM (190MB available to user)
Any similar experiences?
Thanks.
Edited by: Renato Petrulis on Jun 1, 2010 4:15 PM
I am confused about downloading a data object with no application.
I thought you could only download data if it is associated with a Mobile Component; I guess you just assign the DMSCV manually?
In any case, I have only experienced your second scenario, when we were downloading an application with a mobile component and no packaging of messages. We had maybe a few thousand records to download and process, and it would take an hour or more.
When we enabled packaging, it would take 15-30 minutes.
Then I went to Create Setup Package, because it was simpler to install the application and data together, with no corruption or failures from the DMSCV not going operational and not sending data, etc. Plus it was a faster download using either FTP or ActiveSync to transfer the install files.
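The packaging effect described above can be sketched with a toy model: each message pays a fixed per-message overhead, so sending records one by one is dominated by overhead, while packaged transfers amortize it. The overhead and bandwidth figures below are illustrative assumptions, not measured MI values.

```python
# Toy model of why packaging speeds up an initial load:
# each message pays a fixed per-message overhead (handshake,
# acknowledgement), so one-record-per-message transfers are
# dominated by overhead while packaged transfers amortize it.

def transfer_time(n_records, record_bytes, batch_size,
                  per_message_overhead_s=0.5, bandwidth_bps=1_000_000):
    """Estimated seconds to move n_records in batches of batch_size."""
    n_messages = -(-n_records // batch_size)            # ceiling division
    payload_s = n_records * record_bytes * 8 / bandwidth_bps
    return n_messages * per_message_overhead_s + payload_s

unpackaged = transfer_time(30_000, 220, batch_size=1)
packaged = transfer_time(30_000, 220, batch_size=500)
print(f"one record per message: {unpackaged / 60:.0f} min")
print(f"500 records per message: {packaged / 60:.1f} min")
```

With these assumed numbers the unpackaged run spends hours in per-message overhead while the packaged one finishes in minutes, the same order-of-magnitude gap reported above.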
Similar Messages
-
Initial Load performs deletion TWICE!!
Hi All,
I face a very peculiar issue. I started an initial load on a condition object. On the R/3 side there are about 3 million records. The load starts:
1) First it deletes all the records in CRM (the count becomes 0).
2) Then it starts inserting the new records (the records get inserted and the count reaches 3 million).
In R3AM1 this adapter object's (DNL_COND_A006) status changes to "DONE"!
Now comes the problem
There are still some queue entries which again start deleting the entries from the condition table; the count starts reducing and the record count becomes 0 again in the condition table!
Then it again starts inserting, and the entire load stops after inserting 1.9 million records! This is very strange. Any pointers will be helpful.
I also checked whether the mapping module is maintained twice in CRM, but that is not the case. Since the initial load takes more than a day, I checked whether there are any jobs scheduled, but there are no jobs scheduled either.
I am really confused as to why the deletion should happen twice. Any pointers will be highly appreciated.
Thanks,
Abishek
Hi Abishek,
This is really strange and I do not have any clue. What I can suggest is that before you start the load of DNL_COND_A006, load the CNDALL & CND objects again. Sometimes CNDALL resolves this kind of issue.
Good luck.
Vikash. -
Golden Gate Initial Load - Performance Problem
Hello,
I'm using the fastest method of initial load: Direct Bulk Load with the additional parameters:
BULKLOAD NOLOGGING PARALLEL SKIPALLINDEXES
Unfortunately, the load of a big table of 734 million rows (around 30 GB) takes about 7 hours. The same table loaded with a normal INSERT statement in parallel via DB link takes 1 hour 20 minutes.
Why does it take so long using GoldenGate? Am I missing something?
I've also noticed that the load time with and without PARALLEL parameter for BULKLOAD is almost the same.
Regards
Pawel
Hi Bobby,
It's Extract / Replicat using SQL Loader.
Created with following commands
ADD EXTRACT initial-load_Extract, SOURCEISTABLE
ADD REPLICAT initial-load_Replicat, SPECIALRUN
The Extract parameter file:
USERIDALIAS {:GGEXTADM}
RMTHOST {:EXT_RMTHOST}, MGRPORT {:REP_MGR_PORT}
RMTTASK replicat, GROUP {:REP_INIT_NAME}_0
TABLE Schema.Table_name;
The Replicat parameter file:
REPLICAT {:REP_INIT_NAME}_0
SETENV (ORACLE_SID='{:REPLICAT_SID}')
USERIDALIAS {:GGREPADM}
BULKLOAD NOLOGGING NOPARALLEL SKIPALLINDEXES
ASSUMETARGETDEFS
MAP Schema.Table_name, TARGET Schema.Table_tgt_name,
COLMAP(USEDEFAULTS),
KEYCOLS(PKEY),
INSERTAPPEND;
Regards,
Pawel -
Improving initial load performance.
Hi ,
Please let me know the setup and prerequisites required for running parallel requests, so as to speed up the connection object download.
I need to download connection objects and Points of Delivery from IS-U to CRM. Is there any other way to improve the performance?
Regards,
Rahul
Hello,
Could you please tell us more about your scenario? Using the connection object ID may not make it easy to start many requests in parallel, as this field is alphanumeric if I remember correctly... meaning that a range between 1 and 2 will include 10, 11, 100, etc.
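The pitfall Nicolas describes is ordinary lexicographic comparison: alphanumeric keys compare character by character, so a range from '1' to '2' catches every ID that starts with '1'. A quick illustration in plain Python (no SAP range semantics implied):

```python
# Alphanumeric keys compare character by character, so a
# "range" from '1' to '2' catches every ID starting with '1'.
ids = ["1", "2", "10", "11", "100", "20", "3"]

in_range = [i for i in ids if "1" <= i <= "2"]   # lexicographic selection
print(sorted(in_range))

# Numeric intent would need an explicit conversion:
numeric = [i for i in ids if 1 <= int(i) <= 2]
print(numeric)
```

This is why splitting such a download into parallel ranges by ID rarely produces evenly sized, intuitive partitions.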
That's why within a migration process SAP introduced a new concept (via table ECRM_TEMP_OBJ) to replicate into CRM only those connection objects that are not already there. This is explained on page 12 of the cookbook. Furthermore, as far as replication performance is concerned, I highly recommend reading these OSS notes carefully (which are valid for IS-U technical objects as well):
Note 350176 - CRM/EBP: Performance improvement during exchange of data
Note 426159 - Adapter: Running requests in parallel
Regards,
Nicolas Busson. -
Initial Load Error - No generation performed. Call transaction GN_START
Hi Folks,
We are doing the middleware configuration for data migration between R/3 and CRM. We have followed the "Best Practices" configuration guide.
Systems used: CRM 2007 and ECC 6.0
Issue
While performing the initial load, the system throws the errors:
001 - No generation performed. Call transaction GN_START
002 - Due to system errors the Load is prohibited (check transaction MW_CHECK)!
After calling transaction GN_START, the system asks for job scheduling, whereas I have already scheduled it:
A job is already scheduled periodically.
Clicking on 'Continue' will create another job
that starts immediately.
After checking (MW_CHECK), the message displayed is
No generation performed. Call transaction GN_START.
If anybody has encountered a similar issue and resolved it, their guidance will be greatly appreciated.
Thanks in Advance
VEERA B
Veera,
We also faced the same problem when we did the upgrade from CRM 4.0 to CRM 2007.
Go to SMWP, where you can see all the errors related to the middleware along with their error messages, and try to resolve those errors.
Also please check RZ20 and activate the middleware trace tree.
Regards
Vinod -
Perform rollback occurs during initial load of material
Hi Gurus,
When we try to do the initial load of materials, only some of the materials are replicated to SRM. In R3AC1 we have a filter to take only the materials with a Purchasing view; we have no other filter. Although there are 576 materials that match this filter, only 368 materials are replicated to SRM.
One thing we have observed is that when we look at SM21 (System Log) we see "Perform rollback" actions. Below are the details of the log. Can anyone help with our issue?
Details Page 2 Line 30, System Log: Local Analysis of sapsrmt
Time 23:52:59 | Type DIA | Nr 003 | Client 013 | User ALEREMOTE | Grp/N R6 8 | Text: Perform rollback
Details: recorded at local and central time 29.11.2006 23:52:59; task 87262; dialog work process no. 003; user ALEREMOTE; program SAPMSSY1; problem class W (Warning); package STSK.
Further details for this message type: module thxxhead, line 1300, caller ThIRoll, reason/call roll ba. No documentation for syslog message R6 8 exists.
Technical details: file 4, offset 456660, record format m, system log type Error (Function, Module, Row), Grp R6, N 8, variable message data: ThIRoll roll ba thxxhead 1300
Hi,
Some of our material groups were problematic. After removing these the problem is resolved.
FYI -
MacBook Pro (mid 2010) - Slow Startup and Initial Load of Apps after Mavericks Install
Hello everyone,
Long time Snow Leopard user here. I've been absolutely, 100% thrilled with my MBP's performance / speed since mid 2010. Since installing Mavericks I've noticed two things:
1. Initial on boot load times seem much slower
2. The first time you open an app (Safari, even TextEdit) is far slower than in Snow Leopard.
Now granted, once the computer has been on for about 5 minutes, and once you've had an application open for 1-2 minutes, performance is okay once again. But it seems that quite literally every time you do anything for the first time since the computer has been turned on, you run the risk of a spinning beach ball and an extended wait. This simply never happened in Snow Leopard.
I'm not a computer science major but I do understand that when you do something for the first time since a reboot it will need to load new files into the RAM which takes time. However, I've been using this computer since 2010 and have never experienced this behavior / speed / delay, etc... something seems fundamentally different.
For example, even just opening finder, and navigating to the Applications folder results in sluggish behavior the first time. Also, anytime you first click the top menu bar in an application you run the risk of a beachball.
It doesn't seem like a failing hard drive because I can read/write large chunks of data once an app is finally loaded. It's just that initially loading the app takes quite literally 10-20 times longer than in Snow Leopard. It seems like perhaps Mavericks itself, the core of the new OS, was installed in a bad sector of the HD, but none of my other files are on bad sectors... if that makes sense. Again, I'm not a computer science major, so I'm largely just guessing.
It also seems as if multitasking and switching between several open apps has seen a decrease in performance. Could it be that perhaps Mavericks RAM management is superior to Snow Leopard, but only if there is a TON of RAM to go around? Could it be as simple as Macs with only 4GB installed should not be allowed to even install Mavericks?
Does this sound normal for Mavericks considering I have a C2D processor, a non-SSD, and only 4 GB of RAM? I know those specs are "outdated" for 2014, but keep in mind these are the specs I've always had, and with Snow Leopard I was consistently thrilled with the speed of the computer.
I suppose my question is: should my strategy be to simply turn the computer on, get a cup of coffee and let everything load and maybe even have my frequently used apps load on startup, etc... and carry on as normal? Does something seem off? Should I invest in a new SSD / RAM upgrade? I know the upgrades couldn't hurt, but the issue is that I was perfectly content until upgrading to Mavericks
I've tried booting into OS X Recovery mode and using the Disk Utility. I've tried Safe Mode. I've included a "EtreCheck" report below. I'm open to any ideas / suggestions.
Thanks!!
Hardware Information:
MacBook Pro (13-inch, Mid 2010)
MacBook Pro - model: MacBookPro7,1
1 2.4 GHz Intel Core 2 Duo CPU: 2 cores
4 GB RAM
Video Information:
NVIDIA GeForce 320M - VRAM: 256 MB
System Software:
OS X 10.9.1 (13B42) - Uptime: 0 days 0:10:48
Disk Information:
Hitachi HTS545025B9SA02 disk0 : (250.06 GB)
EFI (disk0s1) <not mounted>: 209.7 MB
Macintosh HD (disk0s2) / [Startup]: 249.2 GB (228.46 GB free)
Recovery HD (disk0s3) <not mounted>: 650 MB
MATSHITADVD-R UJ-898
USB Information:
Apple Inc. Built-in iSight
Apple, Inc. Keyboard Hub
Logitech USB Receiver
Apple Inc. Apple Keyboard
Apple Internal Memory Card Reader
Apple Inc. BRCM2046 Hub
Apple Inc. Bluetooth USB Host Controller
Apple Computer, Inc. IR Receiver
Apple Inc. Apple Internal Keyboard / Trackpad
FireWire Information:
Thunderbolt Information:
Launch Daemons:
[System] com.adobe.fpsaud.plist 3rd-Party support link
Launch Agents:
[System] com.adobe.AAM.Updater-1.0.plist 3rd-Party support link
[System] com.adobe.AdobeCreativeCloud.plist 3rd-Party support link
User Launch Agents:
[not loaded] com.adobe.AAM.Updater-1.0.plist 3rd-Party support link
[not loaded] com.google.keystone.agent.plist 3rd-Party support link
User Login Items:
None
Internet Plug-ins:
FlashPlayer-10.6: Version: 11.9.900.152 - SDK 10.6 3rd-Party support link
Flash Player: Version: 11.9.900.152 - SDK 10.6 Outdated! Update
QuickTime Plugin: Version: 7.7.3
JavaAppletPlugin: Version: 14.9.0 - SDK 10.7 Outdated! Update
AdobeAAMDetect: Version: AdobeAAMDetect 2.0.0.0 - SDK 10.7 3rd-Party support link
Default Browser: Version: 537 - SDK 10.9
Audio Plug-ins:
BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
AirPlay: Version: 1.9 - SDK 10.9
AppleAVBAudio: Version: 2.0.0 - SDK 10.9
iSightAudio: Version: 7.7.3 - SDK 10.9
iTunes Plug-ins:
Quartz Composer Visualizer: Version: 1.4 - SDK 10.9
3rd Party Preference Panes:
Flash Player 3rd-Party support link
Old Applications:
None
Time Machine:
Time Machine not configured!
Top Processes by CPU:
4% WindowServer
1% EtreCheck
0% mds
0% mds_stores
0% airportd
Top Processes by Memory:
111 MB com.apple.IconServicesAgent
94 MB Finder
57 MB Dock
53 MB SystemUIServer
53 MB EtreCheck
Virtual Memory Information:
1.64 GB Free RAM
1.42 GB Active RAM
314 MB Inactive RAM
390 MB Wired RAM
316 MB Page-ins
0 B Page-outs
Wish I could be of some help, but I'm having similar issues on my mid-2011 MacBook Pro with 8 GB RAM. It's very frustrating. I haven't worked on a computer this slow since the early '90s. Even simple things like signing in to this site bring up the beach ball for maybe ten seconds. When opening apps I often have to wait minutes, and often end up force quitting when they hang. I'm using force quit a dozen or more times a day, when it used to be very rare.
I don't know if it's Mavericks. I installed a new hard drive in November, but the problems really started getting bad since the latest upgrade.
It's not just opening apps. If I have an app open in the background and switch to it I have similar issues. Watching a simple video is often accompanied by stalls and delays - which is really bad as I'm often using the laptop to edit video. It's become unuseable for work.
Perhaps the most frustrating is clicking on a link, only to find Safari decides to complete a previous scroll that it had paused. The page jumps down, and sometimes I end up clicking on a link I didn't want. Then I have to wait for THAT spinning beach ball before I can go back to the previous spinning beach ball.
It can take up to ten minutes to restart. I've tried everything I could find on these pages so far, and nothing has worked.
I thought maybe it was something I've done (and perhaps it is), but I keep seeing others with similar issues. -
Initial load of inventory level from csv - double datarows in query
Hello everybody,
A query result shown in a web browser seems strange to me, and I would be very glad if anyone could give me some advice on how to solve the problem. As I do not think it is related to the query itself, I posted it in this forum.
The query refers to an InfoCube for inventory management with a single non-cumulative key figure and two other cumulative key figures for increase and decrease of inventory. The time reference characteristic is 0CALDAY. The initial load has been processed reading from a flat file (CSV), the structure looks like this:
Product group XXX
Day 20040101
Quantity 1000
Increase 0
Decrease 0
Unit ST
The initial load runs fine; the system fills all the records into the InfoCube. Unfortunately I do not know how to look at the records written into the cube, because only the cumulative key figures are shown in InfoCube -> Manage -> Contents.
Well, when executing the query, a really simple one, the result is just strange: somehow there are now two rows for each product group with different dates, one for the 1st of January 2004 and the other for the 31st of December 2003, both containing 1000 units. The sum is 2000.
It became more confusing when I loaded the data for increase and decrease: now the quantities and sums are correct, but the date of the initial load is a few days later than before, and the data table in the query does not contain the 1st of January.
Does anybody know what I did wrong, or where there is information on how to perform an initial load of inventory from CSV in a better way?
Kind regards
Peter
Peter,
Inventory is not that straightforward to evaluate, as it is non-cumulative. Basically it means that one key figure is derived from one or two other key figures. You cannot see non-cumulative key figures in Manage InfoCube.
Have you uploaded opening balances separately? If so, your data for the 31st of December is explained.
In non-cumulative cubes, there need not be a posting on a particular day for a record to exist. For example, if you have a stock of 10 units on the 1st, then no postings on the 2nd and 3rd, and then an increase of 10 units on the 4th, then even for the 2nd and 3rd the non-cumulative key figure will report 10 units (the stock on the 1st rolled forward).
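Aneesh's roll-forward rule can be sketched outside BW: a non-cumulative key figure is reconstructed from a reference point plus increase/decrease deltas, so days without postings still report the carried-forward stock. This only illustrates the reporting rule, not how BW stores non-cumulative key figures internally.

```python
# Reconstruct a non-cumulative key figure (stock) from an
# opening balance plus daily increase/decrease postings.
# Days without a posting simply carry the last stock forward.
postings = {1: (10, 0), 4: (10, 0)}   # day -> (increase, decrease)

opening = 0
level = opening
stock = {}
for day in range(1, 6):
    inc, dec = postings.get(day, (0, 0))
    level += inc - dec
    stock[day] = level

print(stock)   # days 2 and 3 still report the stock from day 1
```

The same mechanism explains the 31st of December row in the query above: an opening balance loaded separately becomes its own reference point one day before the first posting.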
There is a "How to... Inventory Management" document on the Service Marketplace that explains this quite nicely.
Cheers
Aneesh -
Initial load gets slower and slower
For a PoC I tried to use the internal GoldenGate mechanism for the initial load. The size of the table is about 500 MB in total, but over time the load rate decreases: starting at nearly 1000 rows per second, after one hour I was down to 50 rows per hour, decreasing further to no more than 10 rows per hour. So the entire load took 15 hours!
There is only a primary key on the target table and no other constraints.
Any idea?
The same thing happens performance-wise on imports: they start off pretty fast, then slow down. Can you rebuild/enable the PK index after the load is done? That should be a safe operation, given that your source has a PK. Are you sure there aren't any other constraints (or triggers) on the target table?
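The cost behind that advice can be shown in miniature: keeping a sorted structure (an index) up to date on every insert does far more element-shifting work than loading everything and sorting once, which is why dropping or rebuilding the PK index after the load tends to win. A rough sketch, not a model of Oracle B-tree internals:

```python
import bisect
import random

rows = [random.randrange(10**9) for _ in range(5000)]

# "Index maintained during load": every insert finds its slot
# and shifts elements, like keeping a unique index hot.
indexed = []
for r in rows:
    bisect.insort(indexed, r)   # O(n) shifting per insert -> O(n^2) total

# "Rebuild index after load": append everything, sort once.
bulk = list(rows)
bulk.sort()                     # one O(n log n) pass

assert indexed == bulk          # same end state, far less work
```

Both paths end with identical sorted data; only the amount of work per row differs, and the per-row cost of the first path grows as the table fills, matching the "slower and slower" symptom.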
Plus (assuming you are a DBA), what does AWR (or statspack, or tracing) show for wait events? -
Loading performance of the infocube & ODS ?
Hi Experts,
Do we need to turn off the aggregates on the InfoCubes before loading so that it decreases the loading time, or does it not matter at all? I mean, if we have aggregates created on the InfoCube, does that affect the loading of the cube in any way? Also please let me know a few tips to increase the loading performance of a cube/ODS. Some of them are:
1. delete index and create index after loading.
2. run paralled processes.
3. compression of the InfoCube; how does compression of an InfoCube decrease the loading time?
Please throw some light on the loading performance of the cube/ods.
Thanks,
Hi Daniel,
Aggregates will not affect the data loading. Aggregates are just views, similar to the InfoCube.
As you mentioned some performance tuning options while loading data:
Compression is just like archiving the InfoCube data. Once compressed, data cannot be decompressed, so you need to ensure the data is correct before compressing. When you compress the data, you will have some free space available, which will improve data loading performance.
Other than the above options:
1.If you have routines written at the transformation level, just check whether it is tuned properly.
2.PSA partition size: In transaction RSCUSTV6 the size of each PSA partition can be defined. This size defines the number of records that must be exceeded to create a new PSA partition. One request is contained in one partition, even if its size exceeds the user-defined PSA size; several packages can be stored within one partition.
The PSA is partitioned to enable fast deletion (DDL statement DROP PARTITION). Packages are not deleted physically until all packages in the same partition can be deleted.
3. Export Datasource:The Export DataSource (or Data Mart interface) enables the data population of InfoCubes and ODS Objects out of other InfoCubes.
The read operations of the export DataSource are single-threaded (i.e. sequential). Note that during the read operations, depending on the complexity of the source InfoCube, the initial time before data is retrieved (i.e. parsing, reading, sorting) can be significant.
The posting to a subsequent DataTarget can be parallelized by ROIDOCPRMS settings for the "myself" system. But note that several DataTargets cannot be populated in parallel; there is only parallelism within one DataTarget.
Hope it helps!!!
Thanks,
Lavanya. -
How to use more than one application server during initial load?
Hi,
we plan to use more than one application server in CRM during the initial download in order to increase the number of parallel requests and decrease the time for the initial load. Is there a way to allocate requests to more than one server? Is it possible via multiple RFC connections for consumer CRM in CRMRFCPAR?
Thanks.
Alexander Schiffer
Hi Naresh,
thanks for your answer. It has solved my problem. SMLG is the transaction that I was looking for.
Two more OSS notes that helped me to guide our basis into the right direction:
OSS 593058 - New RFC load balancing procedure
OSS 1413986 - SMLG: Possibility to select a favorite type for Ext.RFCs
Thanks again.
Alexander Schiffer -
Hi Forum,
I am doing the middleware setup for downloading the customer master from R/3 to CRM. I am trying to do the initial load of customizing objects, viz. DNL_CUST_ACGRPB, DNL_CUST_ADDR,
DNL_CUST_KTOKD, DNL_CUST_TVKN, DNL_CUST_TVLS, DNL_CUST_TVPV... which are to be loaded before doing the initial load of CUSTOMER_MAIN.
While doing the initial load of customizing objects I am getting the error mentioned below:
<b>001 No generation performed. Call transaction GN_START.</b>
<b>002 Due to system errors the Load is prohibited (check transaction MW_CHECK)!</b>
<b>-</b>When I do GN_START,
"A job is already scheduled periodically.
Clicking on 'Continue' will create another job
that starts immediately.
Do you want to continue?" message is displayed
and I have scheduled it.
But in SMWP transaction I can see in
<b>BDoc Types: Generation of other runtime objects</b>
Not generated / <b>generated with errors 2 entries
31.08.2006 05:33:50</b>
and the objects with errors are
<b>POT_LISTWRITE
SPE_DDIC_WRITE</b>
<b>-</b>In transaction MW_CHECK, system displays message as <b>No generation performed. Call transaction GN_START.</b>
When I regenerate these objects (generated with errors) from the context menu, I find no difference.
I have also referred to <b>Notes 637836 and 661067</b>, which suggest running a few reports and GN_START, but in spite of applying all the corrections in the notes I am still unable to get out of this situation.
Please Guide
Thanks in Advance
Shridhar.
Message was edited by: Shridhar Deshpande
Hi Rahul,
Thanks for the reply. I checked in transaction MW_CHECK and the system throws the message
<b>No generation performed. Call transaction GN_START.</b>
In the long text the below message is available
<b>No generation performed. Call transaction GN_START.
Message no. SMW_GEN_KERNEL005
Diagnosis
An upgrade was performed.
<b>System response</b>
The Middleware is stopped because MW objects must be generated.
<b>Procedure</b>
Execute transaction GN_START.</b>
If GN_START is executed, I don't find any change.
I also checked <b>smq2</b> in CRM and I found the status of the queue as below:
CL Queue Name Entries Status Date
<b>200 CSABUPA0000000042 5 SYSFAIL 31.08.2006 09:54:05 31.08.2006 09:54:11</b>
Thanks
Shridhar
Message was edited by: Shridhar Deshpande -
Replicating data once again to CRM after initial load fails for few records
My question (to put it simply):
We performed an initial load for customers and some records error out in CRM due to invalid data in R/3. How do we get the data into CRM after fixing the errors in R/3?
Detailed information:
This is a follow up question to the one posted here.
Can we turn off email validation during BP replication ?
We are doing an initial load of customers from R/3 to CRM, and those customers with invalid email address in R/3 error out and show up in SMW01 as having an invalid email address.
If we decide to fix the email address errors on R/3, these customers should then be replicated to CRM automatically, right? (since the deltas for customers are already active) The delta replication takes place, but, then we get this error message "Business Partner with GUID 'XXXX...' does not exist".
We ran the program ZREPAIR_CRMKUNNR provided by SAP to clear out any inconsistent data in the intermediate tables CRMKUNNR and CRM_BUT_CUSTNO, and then tried the delta load again. It still didn't seem to go through.
Any ideas how to resolve this issue?
Thanks in advance.
Max
Subramaniyan/Frederic,
We already performed an initial load of customers from R/3 to CRM. We had 30,330 records in R/3 and 30,300 of them have come over to CRM in the initial load. The remaining 30 show BDOC errors due to invalid email address.
I checked the delta load (R3AC4) and it is active for customers. Any changes I make for customers already in CRM come through successfully. When I make changes to customers with an invalid email address, the delta gets triggered and data comes through to CRM, but I get the BDoc error "BP with GUID XXX... does not exist".
When I do a request load for that specific customer, it stays in "Wait" state forever in "Monitor Requests"
No, the DIMA did not help Frederic. I did follow the same steps you had mentioned in the other thread, but it just doesn't seem to run. I am going to open an OSS message with SAP for it. I'll update the other thread.
Thanks,
Max -
Initial Load of Business Partner not running
Dear SAP CRM gurus,
We have been able to perform the initial download of Business Partners from ECC into our CRM system. We have done this many times. We do not know what is wrong, but since last week we have been unable to perform the initial download of our Business Partners. When we run the initial download using R3AS, no BDoc is created, and there are no queues on the inbound/outbound side of either the CRM or ECC system. There is also no error. R3AM1 shows the initial download as complete, but with only 1 block, and no BDoc is created! All other replication objects are fine; it is only BUPA_MAIN for which we are unable to perform the initial download. Delta download is fine as well.
We have not changed anything in SMOEAC and it is all correct. Entries in CRMSUBTAB and CRMC_BUT_CALL_FU are also correct.
Please help!!
Hi,
When you are downloading CUSTOMER_MAIN through R3AS, are you getting any warning or error, or are you getting a popup with a green light?
If you are getting a warning or error, go to transaction SMWP, then go to Runtime Information -> Adapter Status Information -> Initial Load Status.
Under that, check the running objects and see whether CUSTOMER_MAIN is there or not; if found, delete that entry and do the initial load again.
Also check the outbound queue of R/3 and the inbound queue of CRM.
If it is still not working, do a request download using R3AR2, R3AR3 and R3AR4 and check whether that works.
If helpful, kindly reward me.
Thanks & Regards,
Anirban -
Initial load of articles through ARTMAS05
Hi Retail experts, I need to build IDocs to perform the initial load of articles through ARTMAS05.
Although we tried to use an IDoc from a demo system as a template, we couldn't get a successful IDoc so far. The function module we are using is BAPI_IDOC_INPUT1, with IDoc type ARTMAS05.
Does anybody has a guideline to set this up?
Thanks in advance.
I would welcome Bjorn's input on this, but generally I accomplish this using LSMW. I use SWO1 to explore the business object, but use LSMW (Legacy System Migration Workbench) to mass load. Simply call transaction LSMW.
- From here, define a project, subproject and object. (Eg: Project = INITLOAD {initial data load}, Subproject = LOMD {logistics master data}, object = ARTMAS {article master}).
- Maintain the object attributes. Here, you can choose from four options: standard batch/direct input, batch input recording, Business Object Method (BAPI) or IDoc. Choose the Business Object Method, use object BUS1001001 and method CLONE.
- Define your source structure. Here, you will lay out what the input file's STRUCTURE is going to look like (not the fields). Since it's ARTMAS, it's not realistic to put all data into a single row in a text file, so you will likely use a structured input file - especially for variants, site-specific and sales-org-specific data.
- Define your source fields. Here you will define the fields that are in your input file and assign them to your input structure. A lot of work goes into this step. Note: I would try to use names very close to the SAP names, since there is an automapping tool. Also, you can copy SAP table structures into your field structures, which is very helpful if you plan to use say 75-80 percent of the fields of a particular structure.
- Maintain structure relations. You will assign your input structures to the corresponding ARTMAS structures in this step.
- Map the fields and maintain conversion rules. Here you assign your input fields to the ARTMAS fields; you can also code ABAP in this step for conversion/translation purposes. Whether you write ABAP conversion rules here depends on your chosen ETL/ETCL/ECTL/CETL methodology (E = Extract, C = Cleanse, T = Transform, L = Load).
- Specify input files. This is where the data resides in its text file input format. Typically, you will use a small data set that sits on your PC to test, and then for a mass load create a server-side directory on the SAP server, place the input file there, and use it. This speeds processing for large files considerably.
- Assign files. Here you assign the previously specified input file to an input structure
- Read data. This actually reads the data so that you can see how it's going to come in.
- Convert data. This creates a pseudo-IDoc. It is not an IDoc yet, but it is in IDoc format.
- Start IDoc generation. This converts the converted file into a true IDoc.
- Start IDoc processing. Here, your IDoc moves from 64 status to (hopefully) 53 status.
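The structured input file mentioned in the steps above (one header row per article plus dependent site rows) can be sketched generically. The record-type tags and field layout here are invented for illustration and are not LSMW's actual format:

```python
import csv
import io

# A flat file where the first column tags the record type:
# H = article header, S = site-specific row belonging to the
# named article. Relating these rows is what LSMW's
# "maintain structure relations" step does declaratively.
raw = """H;ART-001;Ball point pen
S;ART-001;SITE01;12.50
S;ART-001;SITE02;11.90
H;ART-002;Notepad
S;ART-002;SITE01;3.20
"""

articles = {}
for rec in csv.reader(io.StringIO(raw), delimiter=";"):
    if rec[0] == "H":
        articles[rec[1]] = {"text": rec[2], "sites": {}}
    elif rec[0] == "S":
        articles[rec[1]]["sites"][rec[2]] = float(rec[3])

print(articles["ART-001"]["sites"])
```

The `if rec[0] == "H"` branching plays the same role as assigning input structures to target structures: each row type lands in its own part of the article hierarchy.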
Well, I hope this helps, and I would be interested in Bjorn's input. Also, Bjorn, what did you mean by the WRKKEY comment? I've never heard or seen a reference to this.