Initial Load of customers stopped

Hello,
I started the download of CUSTOMER_MAIN in CRM 4.0. Because a sales group could not be found (it has been deleted in R/3 but is still referenced in the customer masters; R/3 itself also raises an error in VD02 because of this), an error BDoc was generated, which is fine.
BUT - the problem is that the initial load stopped completely.
In release 3.0 we also got BDocs for errors, but the download always finished, so the errors could be cleaned up afterwards.
Does anybody know whether this is normal behaviour in release 4.0, or what I can do so that the download continues after such an error?
I'd be happy about any hints.
Kind regards
Christina

Hello,
it's difficult to filter out the affected customers, as I don't know upfront which ones will run into the error - I don't know which sales groups and offices have been deleted in R/3 but still exist in the customer master.
We found note 823594; with it the check of sales groups and offices is simply skipped, which means the initial load runs through smoothly.
The drawback is that you no longer get an error BDoc when a sales group is missing, so inconsistencies between the systems increase.
The error message was
"CRM_BUPA_MAPPING_30110 - No CRM sales office can be determined for R3 sales group THS".
So the error itself shows what to do. But why the whole download stops is not clear.
If somebody knows more about this, please reply.
Kind regards
Christina

Similar Messages

  • No initial load of Customers, Material and delta load of Sales Orders.

    Hi Experts,
    I am facing a very troublesome issue: I was not able to set up the Middleware portion for the initial and delta loads. I read a lot of documents and corrected a lot of things; finally, the connectivity with R/3 and CRM is done. The initial load of all objects is successful (as per the Best Practices guide), and the customizing load is successful.
    But now I have these open issues, for which I am unable to find any answers (I am really exhausted!):
    - CUSTOMER_MAIN load: it was successful, but no BPs from R/3 are available.
    - Material: it failed; in SMW01 and SMQ2 the errors are:
    Mat. for Initial Download: Function table not supported
    EAN xxxxxxxxxxxxxxxxxx does not correspond to the GTIN format and cannot be transferred
    EAN yyyyyyyyyyyyyyyyyy does not correspond to the GTIN format and cannot be transferred
    Plant xx is not assigned to a business partner
    - Sales order: it shows a green BDoc, but the error segment says "No upload to R/3" and the order does not flow to R/3.
    Our system was set up for data transfer and Middleware 3 years back, but a few things changed and the connectivity stopped. I have redone all of that now, but am not yet successful. Any inputs will be greatly appreciated.
    Thanks,
    -Pat

    Hi Ashvin,
    The error messages in SMW01 for the material initial load are:
         Mat. for Initial Download: Function table not supported
         EAN 123456789000562 does not correspond to the GTIN format and cannot be transferred
         EAN 900033056531434 does not correspond to the GTIN format and cannot be transferred
         Plant 21 is not assigned to a business partner
    I have done the DNL_PLANT load successfully. Why then the plant error?
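    For context on the two EAN messages above: a GTIN must be 8, 12, 13 or 14 digits long and end in a valid mod-10 check digit, and both EANs in this log are 15 digits, so they fail on length alone. A minimal sketch of that standard rule (illustrative only, not SAP's actual validation code):

    ```python
    def gtin_check_digit(payload: str) -> int:
        """Mod-10 check digit: weights 3,1,3,1,... from the rightmost payload digit."""
        total = sum(int(d) * (3 if i % 2 == 0 else 1)
                    for i, d in enumerate(reversed(payload)))
        return (10 - total % 10) % 10

    def is_valid_gtin(code: str) -> bool:
        """A GTIN must be 8, 12, 13 or 14 digits and have a correct check digit."""
        if not code.isdigit() or len(code) not in (8, 12, 13, 14):
            return False
        return gtin_check_digit(code[:-1]) == int(code[-1])

    print(is_valid_gtin("4006381333931"))    # True  (a valid EAN-13)
    print(is_valid_gtin("123456789000562"))  # False (15 digits, as in the load log)
    ```

    If the EANs really are 15 digits long in R/3, they have to be corrected (or filtered out) there before the material load will accept them.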
    Some of the messages for BP:
    Messages for business partner 1331:
    No classification is assigned to business partner 1331
    For another,
         Partner 00001872(469206A60E5F61C6E10000009F70045E): the following errors occurred
         City Atlanta does not exist in country US
         Time zone EST_NA does not exist
         You are not allowed to enter a tax jurisdiction code for country US
         Validation error occurred: Module CRM_BUPA_MAIN_VAL, BDoc type BUPA_MAIN.
    Now, the time zone EST is assigned by default in R/3. Where do I change that? I do not want to change time zones, as this may have other impacts. Maybe I can change this in CRM; I am not sure about R/3. The city check has been deactivated in both R/3 and CRM, but the error still appears.
    Until these two issues are solved, I cannot move on to the sales order loads.
    Any ideas will be greatly appreciated.
    Thanks,
    -Pat

  • Initial Load of Customers as Persons?

    Hello,
    does anyone know whether it is possible to download some customers from R/3 to CRM as persons instead of organizations?
    The background is that we have debtors who are persons, not organizations.
    As R/3 does not distinguish between organizations and persons, but CRM does, I suppose there must be a solution for this common situation.
    Thanks in advance for your help. Points will be rewarded.
    Regards
    Birgit Selle

    Hello Naveen,
    thanks for that information, but unfortunately I don't know in which context I have to call that BAPI. Is there a user exit?
    Perhaps I did not post my question precisely enough, so I will try again.
    In PIDE I can assign a classification (one of five possibilities) to the relevant account group. I can choose between A: Consumer Organization, B: Customer, C: Sales Prospect, D: Competitors and E: Consumer Person. So I chose classification B: Customer. But that means the customers are automatically created as organizations in CRM. I would expect the system to offer me the choice between person and organization, as it does with classifications A and E for consumers.
    Thanks
    Birgit

  • Replicating data once again to CRM after initial load fails for few records

    My question (to put it simply):
    We performed an initial load for customers, and some records errored out in CRM due to invalid data in R/3. How do we get the data into CRM after fixing the errors in R/3?
    Detailed information:
    This is a follow up question to the one posted here.
    Can we turn off email validation during BP replication ?
    We are doing an initial load of customers from R/3 to CRM, and the customers with an invalid email address in R/3 error out and show up in SMW01 as having an invalid email address.
    If we fix the email address errors in R/3, these customers should then be replicated to CRM automatically, right (since the deltas for customers are already active)? The delta replication takes place, but then we get the error message "Business Partner with GUID 'XXXX...' does not exist".
    We ran the program ZREPAIR_CRMKUNNR provided by SAP to clear out any inconsistent data in the intermediate tables CRMKUNNR and CRM_BUT_CUSTNO, and then tried the delta load again. It still didn't go through.
    Any ideas how to resolve this issue?
    Thanks in advance.
    Max

    Subramaniyan/Frederic,
    We already performed an initial load of customers from R/3 to CRM. We had 30,330 records in R/3, and 30,300 of them came over to CRM in the initial load. The remaining 30 show BDoc errors due to invalid email addresses.
    I checked the delta load (R3AC4) and it is active for customers. Any changes I make to customers already in CRM come through successfully. When I make changes to a customer with an invalid email address, the delta gets triggered and the data comes through to CRM, but I get the BDoc error "BP with GUID XXX... does not exist".
    When I do a request load for that specific customer, it stays in the "Wait" state forever in "Monitor Requests".
    No, the DIMA did not help, Frederic. I followed the same steps you mentioned in the other thread, but it just doesn't seem to run. I am going to open an OSS message with SAP for it and will update the other thread.
    Thanks,
    Max

  • SYSFAIL in Initial Load of BUPA_MAIN

    Hi Colleagues,
    We are performing an initial load from an IS-M solution, so we use BUPA_MAIN instead of CUSTOMER_MAIN.
    Some BDocs have problems in this load, and this has been preventing the queue R3AI_BUPA_MAIN from being processed: it stops with SYSFAIL and the message "Error in Mapping: Details in trx. SMW01".
    We expected the initial load not to stop at any BDoc with an error; we expected to be able to just solve those BDocs with trx SMW01.
    And there is something else. I have seen in another project that when the Middleware detects an error in a BDoc block, it splits off the entries with errors and processes the correct ones. In our case, if one entry is wrong, all entries stay in the BDoc, including those that have no error at all.
    Is this normal behaviour, or is there a parameter that controls it? In simple words, what we need is:
    1) BDocs with errors are dealt with in SMW01 but do not stop the queue and block all the other BDocs behind them.
    2) A BDoc block is split when some of its entries have an error, allowing the other entries to be processed.
    Best regards,
    Renato

    Here is the best answer I got:
    The "Error in Mapping" issue happens at the very first moment of the data flow, when the data is just coming into CRM Middleware and no BDoc has been created yet. It often happens when there is a problem, e.g. with the table definition, in transaction R3AC1. With single queue handling, this problem only stops the queue related to this BP; with reduced queue names or mass queues, it will indeed stop those queues.
    The behaviour you describe (processing the good entries and keeping the bad ones) is available since CRM 4.0. I do not know of a single parameter to switch this behaviour on or off.
    Please have a look at the parameter MAX_QUEUE_NUMBER_INITIAL; hopefully you will find the corresponding note.

  • Error while Initial load

    Hi Forum,
    I am doing the Middleware setup for downloading the customer master from R/3 to CRM. I am trying to do the initial load of the customizing objects DNL_CUST_ACGRPB, DNL_CUST_ADDR, DNL_CUST_KTOKD, DNL_CUST_TVKN, DNL_CUST_TVLS, DNL_CUST_TVPV, ..., which have to be loaded before the initial load of CUSTOMER_MAIN.
    While doing the initial load of the customizing objects I get the errors below:
    <b>001 No generation performed. Call transaction GN_START.</b>
    <b>002 Due to system errors the Load is prohibited (check transaction MW_CHECK)!</b>
    <b>-</b> When I run GN_START, the message
    "A job is already scheduled periodically.
    Clicking on 'Continue' will create another job
    that starts immediately.
    Do you want to continue?"
    is displayed, and I have scheduled it.
    But in transaction SMWP, under
    <b>BDoc Types: Generation of other runtime objects</b>,
    I can see: Not generated / <b>generated with errors: 2 entries
    31.08.2006 05:33:50</b>
    and the objects with errors are
    <b>POT_LISTWRITE
    SPE_DDIC_WRITE</b>
    <b>-</b> In transaction MW_CHECK, the system displays the message <b>No generation performed. Call transaction GN_START.</b>
    When I regenerate these objects (generated with errors) from the context menu, I see no difference.
    I have also referred to <b>notes 637836 and 661067</b>, which suggest running a few reports and GN_START, but in spite of applying all the corrections from the notes I am still unable to get out of this situation.
    Please guide me.
    Thanks in Advance
    Shridhar.
    Message was edited by: Shridhar Deshpande

    Hi Rahul,
    Thanks for the reply. I checked transaction MW_CHECK and the system shows the message
    <b>No generation performed. Call transaction GN_START.</b>
    In the long text the following message is available:
    <b>No generation performed. Call transaction GN_START.
    Message no. SMW_GEN_KERNEL005
    Diagnosis
    An upgrade was performed.
    <b>System response</b>
    The Middleware is stopped because MW objects must be generated.
    <b>Procedure</b>
    Execute transaction GN_START.</b>
    If GN_START is executed, I don't see any change.
    I also checked <b>SMQ2</b> in CRM and found the status of the queue as below:
        CL       Queue Name      Entries  Status   Date
    <b>200 CSABUPA0000000042  5       SYSFAIL  31.08.2006 09:54:05 31.08.2006 09:54:11</b>
    Thanks
    Shridhar
    Message was edited by: Shridhar Deshpande

  • Initial load of small target tables via flashback query?

    A simple question.
    Scenario: I am currently building a near-real-time warehouse, streaming some basic fact and dimension tables between two databases. I am considering building a simple package to "reset" or reinitialize the dimensions as an all-round fix for a variety of problem scenarios (they are really small, about 15,000 rows each). The first time I loaded the target tables I used Data Pump with good success; however, since Streams transforms the data on the way, a complete reload is somewhat more complex.
    Considered solution: I'll just write a nice flashback query over a db-link, fetching the data as of a specific (recent) SCN, and then re-instantiate the table at that SCN in Streams.
    Is this a good idea? Or is there something obvious, like a green and yellow elephant in the gift shop, that I overlooked? The reason I'm worried at all is that the manuals do not mention this among the supported ways to do the initial load of a target table, and I'm wondering whether there is a reason for that.

    I have a series of streams with some transformations feeding rather small dimension tables. I want to make this solution easy to manage even when operations encounter difficult replication issues, so I am developing a PL/SQL package that will:
    1) Stop all streams
    2) Clear all errors
    3) Truncate the target tables
    4) Reload them, including the transformations (using a SELECT ... AS OF a recent SCN, run on the target against the source over the db-link)
    5) Re-instantiate the tables at that same SCN
    6) Start all streams
    As you see, Data Pump, even if it works, is rather difficult to use when you transform data on the way from A to B. With AS OF I not only get a consistent snapshot of the source, I also get the exact SCN for it.
    What do you think? Can I safely use SELECT ... AS OF SCN instead of Data Pump with an SCN and still get a consistent solution?
    For the bigger fact tables I am thinking about using the same SELECT ... AS OF SCN, but with only particular recent partitions as targets, and thus not having to reload the whole table.
    This package would let operations recover from any kind of disaster or incomplete recovery on both source and target databases and re-instantiate the warehouse within minutes.
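    Steps 3) to 5) above can be sketched as the statements the package would issue per table. This is only an illustration: the owner, table, db-link and source database names are invented, and the transformation logic in the SELECT is elided, while `AS OF SCN` and `DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN` are standard Oracle features. Python is used here just to build the strings so the sequence is easy to read:

    ```python
    def reinit_sql(owner: str, table: str, dblink: str, source_db: str, scn: int) -> list[str]:
        """Build the per-table reinit statements (all names are placeholders)."""
        qualified = f"{owner}.{table}"
        return [
            # 3) empty the target table
            f"TRUNCATE TABLE {qualified}",
            # 4) reload a snapshot consistent as of the chosen SCN, over the db-link
            #    (the real SELECT would also apply the Streams transformations)
            f"INSERT INTO {qualified} SELECT * FROM {qualified}@{dblink} AS OF SCN {scn}",
            # 5) re-instantiate: tell apply to skip changes committed at or below that SCN
            "BEGIN DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN("
            f"source_object_name => '{qualified}', "
            f"source_database_name => '{source_db}', "
            f"instantiation_scn => {scn}); END;",
        ]

    for stmt in reinit_sql("DWH", "DIM_CUSTOMER", "SRC_LINK", "SRC.EXAMPLE.COM", 4711):
        print(stmt)
    ```

    Because the snapshot and the instantiation use the same SCN, apply can resume from exactly that point; the SCN must of course still lie within the source's undo retention for the flashback query to succeed.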

  • Initial Load performs deletion TWICE!!

    Hi All,
    I face a very peculiar issue. I started an initial load of a condition object. In R/3 there are about 3 million records. The load starts:
    1) First it deletes all the records in CRM (the count becomes 0).
    2) Then it starts inserting the new records (the records get inserted and the count reaches 3 million).
    In R3AM1 the status of the adapter object (DNL_COND_A006) changes to "DONE".
    Now comes the problem:
    There are still some queue entries which again start deleting the entries from the condition table, and the count drops back to 0.
    Then it starts inserting again, and the entire load stops after inserting 1.9 million records. This is very strange; any pointers will be helpful.
    I also checked whether the mapping module is maintained twice in CRM, but that is not the case. Since the initial load takes more than a day, I checked whether any jobs are scheduled, but there are none.
    I am really confused as to why the deletion should happen twice. Any pointers will be highly appreciated.
    Thanks,
    Abishek

    Hi Abishek,
    This is really strange and I do not have any clue. What I can suggest is that before you start the load of DNL_COND_A006, you load the CNDALL & CND objects again. Sometimes CNDALL resolves this kind of issue.
    Good luck.
    Vikash.

  • Initial load methodology

    hello
    My project is to replicate, in real time, a JD Edwards database (Oracle 10.2.0.4, 1 TB) with GoldenGate (11.1.1.1.2 on AIX 6.1).
    I want to be sure to validate my initial load setup. What I want to do is:
    1 - start extract + pump, begin now (apply stopped)
    2 - start export (expdp from source + impdp on target), note the end time of the import process: date_end_import
    3 - alter replicat, begin (date_end_import + 10 min)
    4 - start replicat
    Is that OK?
    thank you

    I think I was wrong; it seems the right sequence is:
    1 - start extract + pump, begin now (apply stopped)
    2 - start export (expdp from source + impdp on target), note the start time of the export process: date_begin_export
    3 - AFTER IMPORT: alter replicat, begin (date_begin_export)
    4 - AFTER IMPORT: start replicat
    But reading the forum gives the following setup:
    1 - start extract + pump, begin now (apply stopped)
    2 - ON SOURCE: select dbms_flashback.get_system_change_number() from dual; (e.g. scn = 123)
    3 - ON SOURCE: expdp ... flashback_scn=123 ...
    4 - ON TARGET: impdp ...
    5 - AFTER IMPORT: start replicat AFTERCSN 123
    Thank you
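    The forum variant can be put together in one place. This is a sketch only: the schema, dump file and replicat names are invented, but FLASHBACK_SCN (Data Pump) and AFTERCSN (GGSCI) are the real options that make the load consistent to a single SCN:

    ```python
    # SCN obtained on the source while extract + pump are already capturing:
    #   SELECT dbms_flashback.get_system_change_number() FROM dual;
    scn = 123

    commands = [
        # export a snapshot of the source consistent as of that SCN
        f"expdp system/*** schemas=JDE dumpfile=jde.dmp flashback_scn={scn}",  # ON SOURCE
        # import it on the target (apply still stopped)
        "impdp system/*** schemas=JDE dumpfile=jde.dmp",                       # ON TARGET
        # start apply, skipping transactions already contained in the export
        f"START REPLICAT REP1, AFTERCSN {scn}",                                # in GGSCI
    ]
    print("\n".join(commands))
    ```

    The point of the SCN-based variant is that there is no guessing with timestamps: everything committed at or before the SCN arrives via the export, everything after it via the replicat.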

  • Middleware starts replicating before initial load

    Hi folks,
    currently I am setting up the Middleware to connect a live ERP with a fresh CRM. The initial load is not done yet, nor is a lot of other configuration. I noticed that at the moment of registering the queues, the Middleware already connects and lists the first BDocs in SMW01 - mostly R3AD delta(!) material BDocs (which currently fail with a mapping error), but also green ones like 'class message' or 'objcl_message', which seem to have dummy or real entries. This is a problem, as I have not yet configured e.g. the R3AC1 filters or the number ranges.
    I thought the delta load was only triggered after the initial load. How can I stop the Middleware until I have finished the configuration? If there is a 'switch off' (like deactivating a business adapter object in R3AC1), I guess I would have to switch it on again just before the initial load, again risking an uncontrollable connection before I can execute R3AS. If I deregistered the queues, I could finish my configuration, but I would have to register them again just before the initial load and would again have no control over the start of the replication. How do you usually handle this issue when setting up a new CRM?
    Thanks

    Hi,
    Read these topics:
    Re: Turning Bdoc On & Off, & its monitoring
    Re: Deactivation of data exchange between ERP and CRM
    Denis.

  • Message no. R1286 when starting initial load again

    When I start the initial load again (CUSTOMER_MAIN from ECC to CRM) after changing the PPOMA structure to make sure all data is there, I get message R1286 for all customers that have already been replicated in the past:
    Business partner XXXXXX already exists. (CRM 7)
    While this statement is of course correct - the customer does already exist - in the past such a customer was simply updated and the BDoc processing status was green. We have upgraded from SP4 to SP8, and now the error message is shown and the BDoc is not processed.
    Has something changed?
    How do I update all my customers to reflect changes in filters/PPOMA, or just update them to fix small replication problems?
    I get the same message when I start a request instead of an initial load.
    Thx !!
    Edited by: Olivier Priem on Nov 25, 2011 1:54 PM

    Any help is welcome ...

  • Agentry Sales Manager Initial Load problem

    Hello,
    We've implemented the Agentry Sales Manager solution. Everything works well in the development and test environments, but in production we have performance issues for specific users.
    We have a user with:
    5900 accounts
    21900 contact persons
    These are very large numbers, but the person responsible for our OSS question says this is feasible in the Agentry environment.
    The problem occurs when we perform the initial load/transmit for this user: the accounts are processed as they should be, but during the processing of the contact persons something goes wrong.
    I see that the function module /SYCLO/CRMMD_DOMYCONTACT_GET is started and completely processed (initially we had a dump with a timeout, but this has been solved).
    Then the Agentry server processes the results of that function module.
    In the log I notice these lines:
    getDocumentLinks::begin
    getDocumentLinks::getDocumentLinks
    Afterwards the server processes the results via the steplets, after which the data is processed on the device (iPad). Then the employeeFetch should be triggered.
    In our test with a user with less data this happens, but in this case we notice the following:
    2015/04/16 15:46:48.852:           + BackEnd=Java-1
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::endFetchRemoval::::begin
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::endFetchRemoval::::--------------------------------
    2015/04/16 15:46:48.852:           + BackEnd=Java-1
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::beginFetchObjectRead::::begin
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::beginFetchObjectRead::::--------------------------------
    2015/04/16 15:46:48.852:           + BackEnd=Java-1
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::endFetchObjectRead::::begin
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::endFetchObjectRead::::--------------------------------
    2015/04/16 15:49:21.108: + Thread=4172
    2015/04/16 15:49:21.108:   + Server=Agentry
    2015/04/16 15:49:21.108:     + BackEnd=Java-1
    2015/04/16 15:49:21.108:       Java Back End: current jvm memory usage is 1682243584 bytes
    2015/04/16 15:49:38.096:   + Server=Agentry
    2015/04/16 15:49:38.096:     + BackEnd=Java-1
    2015/04/16 15:49:38.096:       Java Back End: current jvm memory usage is 1682309120 bytes
    2015/04/16 15:49:55.100:   + Server=Agentry
    2015/04/16 15:49:55.100:     + BackEnd=Java-1
    2015/04/16 15:49:55.100:       Java Back End: current jvm memory usage is 1682374656 bytes
    After the last line nothing else happens.
    In the Agentry GUI I also see that the connection has disappeared, without any error or exception whatsoever.
    Does anybody have an idea what might cause this issue?
    We've set the timeout and keepalive parameters to 36000 seconds (10 hours) in the Agentry INI, so I don't think it is a timeout.
    Thanks in advance!
    Kind regards,
    Robin

    Hi Jason,
    Thanks for your answer. It is a standalone Agentry server (without SMP). It also looks to me like the amount of data being fetched is too big, but the customer wants it on the device, as the person on OSS said it should be possible.
    When I look at the Agentry GUI on the server during the fetch, I notice the following (see the screenshot below):
    The fetch is still busy but the connection is gone. At the time of the screenshot the fetch had been running for more than 3 hours (9:27 AM to 12:51 PM), but the connection for that user had been gone from the Agentry GUI since around 11:00 AM.
    Even stranger, no exception is thrown anywhere. The process on the server continues until the complete data set has been processed in the steplets (as seen in the log). Then the server tries to allocate more JVM heap space, but at some point the process just stops instead of continuing.
    The data is also not sent to the device at that point, so the problem seems to be somewhere on the Agentry server.
    The server has 8 GB of memory, and I've set the maxHeapSize variable in agentry.ini as follows:
    maxHeapSize=2048
    In the log I see that the server does not reach that cap.
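    For what it's worth, the heap figures in the log above are indeed well below that cap; a quick check with the last logged value:

    ```python
    heap_bytes = 1682374656      # last "current jvm memory usage" value in the log
    max_heap_mb = 2048           # maxHeapSize in agentry.ini (megabytes)

    heap_mb = heap_bytes / (1024 * 1024)
    print(f"{heap_mb:.0f} MB used of a {max_heap_mb} MB cap")  # about 1604 MB
    ```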
    We run on iPads; only iOS devices were in scope.
    Any ideas on what else we might change?
    Kind regards,
    Robin

  • Initial load for Object CUSTOMER_MAIN

    Hi,
    My client did their initial load more than a year ago. At that time some 900-odd customers were not copied from ECC to CRM because of various master data issues. They now have an entry for the business partner in table CRMM_BUT_CUSTNO, but none of those customers were actually downloaded from ECC.
    We have been given the requirement to download them again from ECC and create them in CRM. I tried to download them using CRMM_BUPA_MAP, but because those customers were part of the initial load, the system performs a delta load for them.
    Here I am facing the error Business partner with GUID xxx does not exist (this error comes because the system performs the delta load with update indicator 'U', whereas it should be 'I' for insert, since those customers have not been created yet).
    The main issue is that even in a synchronization or initial load, if we download customers that were part of the original initial load, the system takes only the delta values and not all the data from ECC.
    Does anyone have a suggestion how to download the data in full, as it would come in an initial load?
    I have already attempted the following options:
    1. Used the initial load again
    2. Synchronization with a request using transactions R3AR2/R3AR4/CRMM_BUPA_MAP
    Thanks,
    Alok Mehta

    Hi Sudhir,
    Thanks for the answer. I tried the same, but I am getting the following error in the BDoc analysis:
    Customer number xxx is already assigned to a business partner.
    This error comes because the customer was part of the initial load, but the BDocs went into error status and nobody reprocessed them after clearing the errors.
    I no longer have those BDocs, as they have been deleted from the system. I suspect I need to delete an entry in the table where the reference was stored during the earlier initial run.
    Please reply if you have any suggestions.
    Thanks,
    Alok

  • "sh-2.05a#" on crashing initial loading screen.

    Yesterday, I had a problem with my cd drive:
    http://discussions.apple.com/thread.jspa?threadID=1559020&tstart=0
    Trying to fix it, I followed these instructions:
    http://docs.info.apple.com/article.html?artnum=106345
    which involved updating the hostconfig file. I now wish I hadn't done this.
    I restarted my computer as instructed, and now it doesn't get past the initial grey loading screen; it shows a
    "sh-2.05a#"
    message in the top left corner of the screen. The loading wheel stops too.
    Please help!
    I seriously can't lose everything on my hard drive; that would be a major problem for me.

    At the sh-2.05a# prompt you can undo what you did by reversing step 6.
    You did do step 6, yes? If you didn't, then this won't apply. What step 6 did was copy the original and save it as a file called hostconfig.old.
    Now, following the steps below, replace the changed hostconfig with the original one that you saved:
    6. Type
    sudo mv /etc/hostconfig.old /etc/hostconfig
    7. Press Return.
    8. Type the admin user password and press Return. (That might not be necessary here, but do so if requested.)
    Now restart and see if it boots correctly.
    By the way, the article you followed applies to 10.0.1 through 10.0.4. Your profile says you are on Jaguar 10.2, so at first glance I would not have thought that solution applied to you.
    Try replacing the CD drive. Do you have a spare one around?

  • "sh-2.05a#" on crashing initial loading screen.  Is this an error?

    Yesterday, I had a problem with my cd drive:
    http://discussions.apple.com/thread.jspa?threadID=1559020&tstart=0
    Trying to fix it, I followed these instructions:
    http://docs.info.apple.com/article.html?artnum=106345
    which involved updating the hostconfig file. I now wish I hadn't done this.
    I restarted my computer as instructed, and now it doesn't get past the initial grey loading screen; it shows a
    "sh-2.05a#"
    message in the top left corner of the screen. The loading wheel stops too.
    Please help!
    I seriously can't lose everything on my hard drive; that would be a major problem for me.
    Message was edited by: tom_dutch69

    Hi Tom (and anyone else reading this message),
    First, a warning: Terminal can be dangerous, if not fatal, to a Mac. Unless you know exactly what you're doing, I STRONGLY encourage people not to fiddle with it. Unfortunately, you seem to have found that out the hard way.
    I would post this issue in the OS X folder and see if someone knowledgeable in these matters can help you undo what you did in Terminal.
    Beyond that, I'd try booting into FireWire Target Disk Mode. If that works, you can get your files off. If that's all you need, and you don't get a fix from the OS X people, then I would erase and zero the HD and remote-install a new OS on it via another Mac.
    Oh, and I almost forgot: apparently you didn't believe in backups before. I hope you see the need for frequent backups now; the more important the data, the more frequent the backups. I do it at least daily. For the very little time it takes, it sure is a cheap insurance policy. External HD: $100. FW cable: $5. My data: priceless.
