Group data locked error for MM01 using parallel processing

Hello gurus,
I am using the call transaction method (MM01) with parallel processing (around 9 threads). Sometimes around 10 percent of the materials fail with locking errors.
This happens randomly: one day I don't have any locking errors, the next day I do. Any ideas why this could be? Are there any prerequisites I need to check before executing the parallel processing?
Thank you in advance..
sasidhar p

Hi Sasidhar
I guess you are either extending the sales data or the MRP data. Just make sure that you process these transactions sequentially for a single material (see the sketch below). We can use parallel processing for different materials, but if we go for parallel processing on a single material we can definitely expect the lock object error.
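A minimal sketch of that idea, assuming a custom record type ZMAT_REC with MATNR as its first field and an RFC-enabled function module Z_PROCESS_MATERIAL (both hypothetical): sort the records by material and dispatch one asynchronous task per material, so that two parallel tasks never touch the same material.

    DATA: lt_records TYPE STANDARD TABLE OF zmat_rec,
          lt_batch   LIKE lt_records,
          ls_rec     TYPE zmat_rec.

    SORT lt_records BY matnr.   " keep one material's records together

    LOOP AT lt_records INTO ls_rec.
      APPEND ls_rec TO lt_batch.
      AT END OF matnr.
        " One task per material: records for the same material stay
        " serial inside the task, different materials run in parallel.
        CALL FUNCTION 'Z_PROCESS_MATERIAL'
          STARTING NEW TASK ls_rec-matnr
          DESTINATION IN GROUP DEFAULT
          TABLES
            it_records = lt_batch.
        CLEAR lt_batch.
      ENDAT.
    ENDLOOP.

With nine RFC sessions you keep the parallel throughput, but a material can no longer collide with itself.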
Kind Regards
Eswar

Similar Messages

  • BDC for MM01 using BAPI

    Hi all,
    I am new to BAPIs. I have created many BDCs to upload data into SAP but haven't done a BAPI yet.
    I want to upload data through a BAPI for MM01.
    How can I achieve this, and which BAPI is responsible for this job? Please send any sample code for this.
    Thanks,
    Amit Ranjan

    Hi Amit,
    Refer to the code in the following link:
    Re: Reg Transfer of MM01 data using BAPI method
    Hope this will solve your query...
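    For reference, the standard BAPI for creating (or changing) material master data is BAPI_MATERIAL_SAVEDATA. A minimal, hedged sketch of a create call (material number, type, and field values are placeholder assumptions):

        DATA: ls_head  TYPE bapimathead,
              ls_mara  TYPE bapi_mara,
              ls_marax TYPE bapi_marax,
              ls_ret   TYPE bapiret2,
              lt_desc  TYPE STANDARD TABLE OF bapi_makt,
              ls_desc  TYPE bapi_makt.

        ls_head-material   = 'ZTESTMAT01'.    " hypothetical material number
        ls_head-ind_sector = 'M'.             " industry sector
        ls_head-matl_type  = 'FERT'.          " material type
        ls_head-basic_view = 'X'.             " create the Basic Data view

        ls_mara-base_uom   = 'EA'.
        ls_marax-base_uom  = 'X'.             " flag every field you fill

        ls_desc-langu      = 'E'.
        ls_desc-matl_desc  = 'Test material'.
        APPEND ls_desc TO lt_desc.

        CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
          EXPORTING
            headdata            = ls_head
            clientdata          = ls_mara
            clientdatax         = ls_marax
          IMPORTING
            return              = ls_ret
          TABLES
            materialdescription = lt_desc.

        IF ls_ret-type CA 'EA'.               " error or abort message
          CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
        ELSE.
          CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
        ENDIF.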

  • PUK Locked Error on iPhone3 used as an iPod

    How do I get past the "PUK Locked" error for my old iPhone3 that my son was using as an iPod?  This PUK error happened when I connected the phone to iTunes and it prompted me to update the software (which I unfortunately did).  Please help!

    Thanks, but this old iPhone isn't being used as a phone and isn't under any service. It is only being used as an iPod, so I can't get a code from the carrier to unlock the PUK error.

  • Kernel Data Inpage Error, No Bootable Device, Critical Process Died AGAIN! Please help!

    I bought this S55-A5295 in July 2013. It started to have problems in April 2014. After three months of Critical Process Died, No Bootable Device, and having it shut down multiple times a day, I eventually convinced service to accept it, because when I tried to do the recovery, the hard drive was missing the partitions needed. (My $30 to ship, despite being under warranty.)
    I received it back late June. The factory replaced the faulty screen (a different problem) but did not replace the hard drive. They supposedly "fixed" it and added the needed partitions, should I need to do recovery again. I know that the partition was added because when I got it back, instead of being like new out of the box and ready to use, I was locked out: it was set to administrator with no option to access it, so I had to use the partitioned info and do ANOTHER reset after getting it back from them. Not great service all around.
    Now the laptop is out of warranty and I've started having problems again.
    Sept. 16 -- I had a Kernel Data Inpage Error. The laptop made a horrible sound and shut down. At the time a Toshiba Canvio Slim II external drive was attached.
    Sept. 18 -- EFI\Microsoft\Boot\BCD 0xc00000bb: the boot configuration data for your PC is missing or contains errors.
    Sept. 19 -- Kernel Data Inpage Error. At this point I had removed the external drive, in case that was the problem. It wasn't letting me remove it earlier in the week. I didn't use it much over the next week.
    Sept. 27 -- No Bootable Device.
    Sept. 30 -- Kernel Data Inpage Error (volmgrx.sys)
    Then Critical Process Died -- 3 different times
    Then it restarted itself once without an error code.
    Today, Oct. 1, I installed updates from Toshiba, including one for the HDD, and I performed maintenance. I tried to do the hard drive check, but the tabs in the instructions on the computer don't actually exist.
    The directions: go to Computer, Properties, Tools tab (can't find it), so no Error Checking and Check.
    However, under Performance Information and Tools I did find my computer's performance rating. The Primary Hard Disk data transfer rate is 5.9.
    So far it hasn't shut down today and hopefully the updates have fixed any issues, but I want to be prepared if they don't.
    Are these errors coming from my laptop's hard drive? Is there a way for me to actually check that? Are the errors somehow connected to my TOSHIBA external drive? I haven't connected the external drive since the errors started happening, but I will need to use it again at some point; I just wanted to make sure the laptop was okay first.
    Also, I am still running Windows 8. That's what they reset it to at the factory, and last time the issues seemed to correspond to me upgrading to 8.1, so I am wary of ending up with a $600 paperweight again.
    Please advise. Thanks!!

    ..went through the whole reset.   ...   ..do ANOTHER reset after getting it back from them
    What can I say? The gold-standard test for a software problem is to restore the hard disk to its original out-of-the-box contents.
    I'm a Windows specialist. Can't troubleshoot hardware.
    -Jerry

  • Group Data locked for material

    Hi,
    I have an interface that failed because the "Group Data" of the material was locked.
    I am aware of "Material being locked by another user". However I don't know how the group data gets locked or what group data refers to.
    In most cases when I run the interface after some time, the material posts correctly. But I'd like to know what exactly locks group data and how I can check if it has been unlocked subsequently.
    Thanks.
    Urmila

    Hi,
    This message usually appears when you attempt to change the material through FM calls while the material header (table MARA) is locked, that is, the material itself is locked by some other user.
    What you can do is call the lock function module ENQUEUE_EMMARAE
    before calling your material update logic and see whether the lock is successful. If the lock is successful, you can proceed with the update logic, as sketched below.
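    A minimal sketch of that check (the material variable and the retry handling are assumptions; remember to release the lock when you are done):

        DATA lv_matnr TYPE matnr VALUE 'ZTESTMAT01'.  " hypothetical material

        CALL FUNCTION 'ENQUEUE_EMMARAE'
          EXPORTING
            mode_mara      = 'E'              " exclusive lock
            mandt          = sy-mandt
            matnr          = lv_matnr
          EXCEPTIONS
            foreign_lock   = 1                " someone else holds the lock
            system_failure = 2
            OTHERS         = 3.

        IF sy-subrc = 0.
          " ... material update logic goes here ...
          CALL FUNCTION 'DEQUEUE_EMMARAE'
            EXPORTING
              mode_mara = 'E'
              mandt     = sy-mandt
              matnr     = lv_matnr.
        ELSE.
          " Group data / material still locked: wait and retry, or log it.
        ENDIF.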
    Sri

  • GOOP VI "set modified data" locks LabVIEW for seconds

    I use GOOP.
    Sometimes the VI "set modified data" (from now on SMD) of one of my GOOP classes is locked for about 15 seconds, while CPU raises to 100%.
    It happens when SMD is called from a certain VI. It does not happen every time SMD is called from this VI.
    As I have understood GOOP, the VI "get data to modify" tries to access/lock the data members - which might take a while if the the data members are currently locked by another VI - but SMD only sets the data members and removes the lock, which should always be possible (at least if the data set has been locked by this VI, which - yes! - is the case).
    Other VIs are running in parallell, maybe they could cause the CPU to raise to 100% but I find it strange th
    at it only happens when SMD is called from this particular VI.
    I think I need a better understanding for how SMD really works.
    Regards

    Hi,
    Your description of how "get data to modify" (from now on called GDM) and "set modified data" work is correct, and you have fully understood how it all works. The only VI that may actually wait is "get data to modify" (this happens if the data for the object is locked by something running in parallel, as you describe). GDM is however reentrant, so it really is able to wait for seconds (specified by the timeout) while also letting other processes wait using GDM. SMD is not reentrant (it does not have to be) and should really execute fast (set data and unlock) and never hold execution, just as you describe.
    Are you absolutely sure that it is SMD that causes the CPU to freak out? Maybe the fact that you unlock the data releases another part of your program that was "hanging on the lock", and the problem is actually in another part of the program. Do you have some exceptionally large data? Do you somehow experiment with VI priorities and/or reentrant VIs? This can sometimes really cause strange situations. The situation you describe looks more like a "race" problem.
    The SMD problem you describe is new to me and I have never encountered it. Would it be possible for you to attach some LV code example showing the phenomenon?
    Best regards,
    Mattias Ericsson
    Endevo Sweden
    (member of the GOOP developer team)

  • How to group data and assign cell names using Excel templates

    Hi all,
    reading the article "Real Excel Templates 1.5" on the Tim Dexter's Blog, I found that I need hierarchical data for Excel templates. So only in this way I can group my data.
    My hierarchy is composed by 3 levels:
    lev 1 DESTINATION: is the higher level that groups SERVICES and COUNTRY
    lev 2 SERVICES: is the level that groups the countries
    lev 3 COUNTRY: is the lowest level with the COUNTRY, CALLS and CALLS_MINUTES details
    An example of my hierarchy is this:
    lev 1 INTERNATIONAL
    lev 2 INTERNATIONAL FIXED
    lev 3 Albania 90 438,15
    lev 3 Armenia 1 16,95
    lev 2 INTERNATIONAL MOBILE
    lev 3 Albania Mobile 161 603,35
    lev 3 Australia Mobile 6 34,38
    lev 1 NATIONAL
    lev 2 HELLAS LOCAL
    lev 3 Hellas Local 186,369 707940,6
    lev 2 HELLAS MOBILE
    lev 3 Hellas Mobile Cosmote 31,33 43856,97
    lev 3 Hellas Mobile Q-Telecom 2,398 4343,78
    lev 2 HELLAS NATIONAL
    lev 3 Hellas Long Distance 649 1499,55
    lev 1 INTERNET
    lev 2 INTERNET CALLS
    lev 3 Cosmoline @Free 79 2871,3
    So, my data template is the following (with exactly the hierarchy I want for my data):
    <dataTemplate name="emp" description="destinations" dataSourceRef="GINO_DB">
         <dataQuery>
              <sqlStatement name="Q1">
                   <![CDATA[SELECT 1 TOTAL_CALLS, 2 TOTAL_CALLS_MIN from dual ]]>
              </sqlStatement>
              <sqlStatement name="Q2">
                   <![CDATA[SELECT dest.ID_DESTINATION, dest.DESC_DEST from ale.AAA_DESTINATION dest order by dest.ID_DESTINATION ]]>
              </sqlStatement>
              <sqlStatement name="Q3">
                   <![CDATA[SELECT ser.ID_SERVICE,
    ser.ID_DEST,
    ser.DESC_SERVICE,
    count.ID_COUNTRY,
    count.ID_SERV,
    count.COUNTRY,
    count.CALLS,
    count.CALLS_MIN
    from ale.AAA_SERVICE ser, ale.AAA_COUNTRY count
    where ser.ID_SERVICE= count.ID_SERV
    and ID_DEST = :ID_DESTINATION
    order by ser.ID_SERVICE ]]>
              </sqlStatement>
         </dataQuery>
         <dataStructure>
              <group name="G_TOT" source="Q1">
                   <element name="TOTAL_CALLS" value="G_COUNTRY.CALLS" function="SUM()"/>
                   <element name="TOTAL_CALLS_MIN" value="G_COUNTRY.CALLS_MIN" function="SUM()"/>
                   <group name="G_DEST" source="Q2">
                        <element name="DESC_DEST" value="DESC_DEST"/>
                        <element name="DEST_CALLS_SUBTOTAL" value="G_COUNTRY.CALLS" function="SUM()"/>
                        <element name="DEST_CALLS_MIN_SUBTOTAL" value="G_COUNTRY.CALLS_MIN" function="SUM()"/>
                        <group name="G_SERV" source="Q3">
                             <element name="DESC_SERVICE" value="DESC_SERVICE"/>
                             <element name="SERV_CALLS_SUBTOTAL" value="G_COUNTRY.CALLS" function="SUM()"/>
                             <element name="SERV_CALLS_MIN_SUBTOTAL" value="G_COUNTRY.CALLS_MIN" function="SUM()"/>
                             <group name="G_COUNTRY" source="Q3">
                                  <element name="COUNTRY" value="COUNTRY"/>
                                  <element name="CALLS" value="CALLS"/>
                                  <element name="CALLS_MIN" value="CALLS_MIN"/>
                             </group>
                        </group>
                   </group>
              </group>
         </dataStructure>
    </dataTemplate>
    Not considering the CALLS and CALLS_MIN details (I focused only on COUNTRY, which is at the same level), in tests of my Excel template with this data template I noticed that I can group ONLY two nested levels using the format XDO_GROUP_?group_name?:
    XDO_GROUP_?G_DEST?
    XDO_GROUP_?G_SERV?
    or
    XDO_GROUP_?G_DEST?
    XDO_GROUP_?G_COUNTRY?
    or
    XDO_GROUP_?G_SERV?
    XDO_GROUP_?G_COUNTRY?
    If I try to group all three levels together in this order
    XDO_GROUP_?G_DEST?
    XDO_GROUP_?G_SERV?
    XDO_GROUP_?G_COUNTRY?
    I don't have the output I would like to have...
    Practically, in my Excel sheet I have 3 rows with the following labels:
    DESTINATION (named XDO_?DESC_DEST?, =Sheet1!$A$3)
    SERVICE (named XDO_?DESC_SERVICE?, =Sheet1!$A$4)
    COUNTRY (named XDO_?COUNTRY?, =Sheet1!$A$5)
    where
    XDO_GROUP_?G_DEST? (=Sheet1!$A$3:$B$5)
    XDO_GROUP_?G_SERV? (=Sheet1!$A$4:$B$5)
    XDO_GROUP_?G_COUNTRY? (=Sheet1!$A$5:$B$5)
    I noticed that if I don't use the last one (XDO_GROUP_?G_COUNTRY?), my output is correct, even though I don't get more than one country for each service... As soon as I put in XDO_GROUP_?G_COUNTRY?... I lose all of the 2nd level and, most of the time, the 3rd level too...
    So... I think the problem is how I choose the Excel cells when I assign the XDO_GROUP_?group_name? names.
    Has anybody made some tests, or can anyone help me? I'm going crazy...
    Any help will be appreciated
    Thanks in advance
    Alex

    But how can I use the XDO_GROUP_?? tags to group data correctly using hierarchical XML? I don't want to use flat XML.
    Yep, I'm using the Template Builder in Excel to run reports locally, and the output is wrong.
    It seems that the groups can't define the level of nesting, I think...
    How can I write it in the XDO_METADATA sheet?
    Though I have hierarchical XML, and the groups should define the nesting level correctly.
    I have no clue...

  • Golden Gate - Initial Load using parallel process group

    Dear all,
    I am new to GG and I was wondering if GG can support an initial load with parallel process groups. I have managed to do an initial load using "Direct Bulk Load" and "File to Replicat", but I have several big tables and the Replicat is not catching up. I am aware that GG is not ideal for initial loads, but it is complicated to explain why I am using it.
    Is it possible to use the @RANGE function while performing an initial load, regardless of which method is used (file to replicat, direct bulk, ...)?
    Thanks in advance

    You may use Data Pump for the initial load of large tables.
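    As for @RANGE: it is normally applied in the Replicat's MAP statement, so a file-to-replicat initial load can be split across several Replicats running in parallel. A hedged sketch (schema, table, and key column are made-up names):

        -- replicat 1 of 3 (hypothetical parameter file)
        MAP src.big_table, TARGET tgt.big_table, FILTER (@RANGE (1, 3, ID));
        -- replicat 2 of 3 would use FILTER (@RANGE (2, 3, ID)),
        -- and replicat 3 of 3 FILTER (@RANGE (3, 3, ID)).

    Each Replicat then applies only its own slice of the rows from the extract files.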

  • Data Conversion Errors for the last week

    We've been running a simple Stream Analytics job for a little over a month now with a very light workload. Input is an Event Hub and output is SQL Server. We noticed today that we haven't received anything into SQL Server since 2014-12-08 (we don't receive events every day, so we only know that everything still worked on the 8th of December), so we checked the job's logs. It seems the job is failing to process all the messages: the value of "Data Conversion Errors" is high.
    I wonder what could have happened? We haven't touched the client since we started the job, so it's still sending the messages in the same format. And we haven't touched the job's query either.
    Has there been an update to either Stream Analytics or Event Hubs which could cause the issue we're seeing?

    I've followed word for word the TollApp Instructions (except the thing with NamespaceType "Messaging" that has been added to New-AzureSBNamespace).
    I have 0 line in output, and this is the service log:
    Correlation ID: e94f5b9e-d755-4160-b49e-c8225ceced0c
    Error:
    Message: After deserialization, 0 rows have been found. Possible reasons could be a missing header or malformed CSV input.
    Message Time: 2015-01-21 10:35:15Z
    Microsoft.Resources/EventNameV2: sharedNode92F920DE-290E-4B4C-861A-F85A4EC01D82.entrystream_0_c76f7247_25b7_4ca6_a3b6_c7bf192ba44a#0.output
    Microsoft.Resources/Operation: Information
    Microsoft.Resources/ResourceUri: /subscriptions/eb880f80-0028-49db-b956-464f8439270f/resourceGroups/StreamAnalytics-Default-West-Europe/providers/Microsoft.StreamAnalytics/streamingjobs/TollData
    Type: CsvParserError
    Then I stopped the job, and connected to the event hub with a console app and received that:
    Message received. Partition: '11', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    85,21/01/2015 10:24:56,QBQ 1188,OR,Toyota,4x4,1,0,4,361203677
    Message received. Partition: '11', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    33,21/01/2015 10:25:42,BSE 3166,PA,Toyota,Rav4,1,0,6,603558073
    Message received. Partition: '11', Data: 'TollId,EntryTime,LiMessage received. Partition: '10', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    59,21/01/2015 10:23:59,AXD 1469,CA,Toyota,Camry,1,0,6,150568526
    Message received. Partition: '10', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    25,21/01/2015 10:24:17,OLW 6671,NJ,Honda,Civic,1,0,5,729503344
    Message received. Partition: '10', Data: 'TollId,EntryTime,LicensePlate,State,Make,Model,VehicleType,VehicleWeight,Toll,Tag
    51,21/01/2015 10:24:23,LTV 6699,CA,Honda,CRV,1,0,5,169341662
    Note the bug on the 3rd message. In my opinion it's unrelated; it could be the WriteLine that can't keep up with the stream in the console application. At worst it's in the stream itself, but then I should see at least some lines in the output for the correctly formatted messages.

  • Using Parallel Processing for Collection worklist Generation

    We are scheduling the program UDM_GEN_WORKLIST in background mode with the below-mentioned values in the variant:
    Collection Segment - USCOLL1
    Program Control:
    Worklist valid from - Current date
    Distribution Method - Even Distribution to Collection Specialists
    Parallel Processing:
    Number of jobs - 1
    Package Size - 500
    Problem:
    The worklist gets generated, but it drops a lot of customers' items from the worklist when the program is scheduled in the background using the above parameters.
    Analysis:
    - When I run the program UDM_GEN_WORKLIST in online mode, all customers come through correctly on the worklist.
    - When I simulate the strategy with the missing customers, it evaluates them high, so there is nothing wrong with the strategy and evaluation.
    - I increased the package size to its maximum, but it still doesn't work.
    - Nothing looks different in terms of the Collection Profile on the BP master.
    - There is always a fixed set of BPs missing from the worklist.
    It looks like there is something I don't know about running these jobs correctly with the parallel processing parameters; any help or insight provided in this matter would be highly appreciated.
    Thanks,
    Mehul.

    Hi Mehul,
    I have a similar issue now; for the past couple of days, the worklist generation has been failing in background mode, although when I run it in foreground it completes without any problem.
    Would you confirm that you reduced the package size to 1?
    So your parameters are: number of jobs: 1 and package size: 1.
    Is that right? Did it completely solve your issue?

  • Table for InfoPackages used in process chains

    Hi,
    Can anyone tell me the table name for the InfoPackages used in process chains?
    Raj

    Another one:
    RSLDPIO - links DataSources to InfoPackages
    RSLDPIOT - InfoPackage text description
    RSLDPRULE - ABAP source code for InfoPackages
    RSLDPSEL - hardcoded selections in InfoPackages
    RSMONICDP - contains the request ID number by data target
    RSPAKPOS - list of InfoPackage groups / InfoPackages
    RSSELDONE - InfoPackage selection and job program
    From: table that contains the name of infopacks
    A lot of info on this is already available in SDN.
    Hope it helps.
    AT

  • Data load error for master data

    Hi,
    Do I need to fix both errors manually, or is there an automatic way?
    For the PSA error I need to edit that record and run the InfoPackage again.
    I need to keep the alpha conversion for InfoObject /BIC/ZAKFZKZ at the InfoObject level.
    Errors in values of the BOM explosion number field:
    Diagnosis
        Data record 5584 & with the key '001 3000000048420000 &' is invalid in
        value 'BOX 55576\77 &' of the attribute/characteristic 0RT_SERIAL &.
    InfoObject /BIC/ZAKFZKZ contains non-alpha compliant value 1

    Hi Suneel,
    I feel that the symbol '&' is causing the issue in your data loads.
    So try to edit the value in the PSA and then load the data from the PSA to the target.
    You can also use transaction RSKC to permit these special characters, or strip them in a transfer routine as sketched below.
    For more information you can search SDN.
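    A minimal sketch of such a cleansing step in a BW 3.x transfer routine (the source field and the choice to simply drop the '&' are assumptions; adjust to your own rules):

        DATA lv_value(60) TYPE c.

        lv_value = TRAN_STRUCTURE-/BIC/ZAKFZKZ.   " assumed source field
        " Remove the '&' that RSKC would otherwise have to permit
        REPLACE ALL OCCURRENCES OF '&' IN lv_value WITH space.
        CONDENSE lv_value.
        RESULT = lv_value.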
    Thanks
    Hope this helps

  • Data Usage Error - iPhone reports 4GB sent & 4.1GB received??

    My iPhone 4 reports having sent 4GB and received 4.1GB of data. There's no possible way I could've used that much data, especially sending that much. I hardly use 3G and I remain on WiFi where it's available. I also reset my usage statistics at the start of the billing cycle, so it's not accumulated. I contacted AT&T, knowing that that amount of data would cost a fortune seeing that I'm only on the 200 MB plan. I figured it had to be an error of some sort because I never received an alert for reaching the half or full data limit. AT&T replied that I have only used 75 MB. What a relief! I remained on the phone for another 40 minutes trying to find out why I got an error. However, they weren't able to help me.
    To avoid being stuck on the phone for another 40 minutes, I went to the Genius Bar. The "Genius" there had no idea why the phone reported that amount. He claimed that the amount was accumulated; I replied that there's no way I could've used that amount of data since July, and I had reset the counter on 10/4/10. My sent data is always 1/10th of what I receive. He claimed that the data counter includes WiFi usage. I stressed again and again that the counter says "Cellular Network Data", seeing if he might correct himself. His final solution was to just restore the phone.
    I went home, reset the usage counter, connected to WiFi, downloaded an app, and voila! The data usage says 0 bytes received, proving that my WiFi has not been running up the counter. Apparently Apple needs to restaff that store.
    I've discovered a few ways to accurately check my data through AT&T:
    1) Get AT&T's myWireless app (which I am unable to access because I am not the AT&T account holder.)
    2)*3282# receives an alert on data usage.
    3) simply call AT&T
    Here's my question: How could the counter come up with that outrageous of a number? I wouldn't want it to someday send that data amount to AT&T and bill me for it.

    "Here's my question: How could the counter come up with that outrageous of a number? I wouldn't want it to someday send that data amount to AT&T and bill me for it."
    Just to reassure the op, that is not how your bill is calculated, so don't fear that. AT&T has to bill you for the data as calculated by their equipment on their end. They cannot charge you for data that their in-house systems cause.
    Also, the iPhone records all data, not just that which actually incurs a bill entry on your statement. Reset your counter before going to bed, clear all apps from the recent task bar, and make sure no background tasks are running (no push mail and so forth). Check your usage in the morning, and there will be at the very least a few tens to hundreds of kilobytes sent and received. I have never heard a detailed explanation for this, but it is some sort of housekeeping data exchange. This data will not show on your billing statement.
    At least that has been my experience with my 3GS and AT&T.
    Others have reported far greater data flow overnight than what I typically see (I've never had more than 250K), again without getting an explanation from AT&T about what it is, but they are not being charged for it.

  • Invalid data format error for CLOB

    I am trying to migrate a piece of code from WLS 8.1.2 to WLS 8.1.5.
    WLS 8.1.2 has ojdbc14.jar with version "Oracle JDBC Driver version - 9.0.2.0.0"
    WLS 8.1.5 has ojdbc14.jar with version "Oracle JDBC Driver version - 10.1.0.4.0"
    In the older version, I store an encrypted string value in a CLOB and save it in the DB.
    When I try the same in the new code, it displays an error that the data is not in the proper format.
    If I read any data entered using the older application in the new one, it is still valid.
    But if I enter a new value using WLS 8.1.5, only those are invalid.
    I even tried deploying the application with the old and the new ojdbc14.jar files. In either case, it still gives an error. If I use the later version of ojdbc14.jar, the method getAsciiStream() is deprecated.
    How can I make my code independent of the version of ojdbc14.jar and still store and read the CLOB?

    Hi. You are suffering from the evolution of Oracle's driver. If you can make a standalone program that contains some data, inserts it, extracts it, and compares it, and so proves the bug, we can open a case with Oracle. In general you want to use their latest driver, but if you can't keep up with their bugs/fixes, you can always keep using the same version of the driver everywhere. The way to do that is not to put the driver in your packages, but simply to keep the version you want in the weblogic installation's server\lib directory (ojdbc14.jar).
    Joe

  • Data load error for Init.

    Hello Gurus,
    I am having some problems loading an init.
    In the data load I am getting the following error messages:
    1. System error occurred (RFC call)
    2. Activation of data records from ODS object ZDSALE01 terminated
    3. No confirmation for request ODSR_45F46HE2FET0M7VFQUSO2EHPZ when activating the ODS object ZDSALE01
    4. Request REQU_45FGXRHFHIKAEMXL3D7HU6OFR, data package 000001 incorrect with status 5
    5. Request REQU_45FGXRHFHIKAEMXL3D7HU6OFR, data package 000001 not correct
    6. Inserted records 1- ; Changed records 1- ; Deleted records 1-
    Please help me in resolving these errors.

    Hi,
    Are you loading a flat file? If yes, check whether the file is available at the specified path and whether you/SAP have the authority to access that path/file.
    Regards,
    Siggi
