Initial Data Load

Hi,
I need to upload a large amount of data into Siebel UCM/OCH, and I'd like to run data quality (DQ) on it outside of UCM before uploading into UCM. I'm looking for some suggestions on how to do that - how are other customers doing it? Could IIR (Informatica Identity Resolution) be invoked directly, outside of UCM? Would appreciate the feedback.
Thanks in advance!

Hi Prawin,
Generally, if the parameter has a valid default value, the report runs automatically when we first view or preview it.
Based on your requirement, you can create a new dataset which includes all available values of the parameter, and then use a default value to run the report. To achieve this, please refer to the steps below:
1. Create a dataset named DataSet2 for the parameter using the following query string:
SELECT 0 AS Expr1
UNION ALL
SELECT Field FROM table
2. Edit your base dataset query.
   The original query:
      WHERE (Field = @Parameter)
   Change it to this:
      WHERE (Field = @Parameter) OR (@Parameter = '0')
3. On the Default Values page, select the "Get values from a query" option. Set the "Dataset" and "Value field" like this:
Dataset                        Value field
DataSet2                      Expr1
When you first view or preview the report, it will display the complete data. For more information, please refer to:
http://social.technet.microsoft.com/Forums/en-US/sqlreportingservices/thread/48de91f9-1844-40c1-9614-5ead0b4b69a5#P1Q15
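Putting the steps together, here is a minimal sketch. The table and field names (Sales, Region, Amount) are illustrative only, not from the original post, and it assumes @Parameter is a text parameter so that the comparison with '0' works:

DataSet2 (all available values plus a '0' row that stands for "all"):
   SELECT 0 AS Expr1
   UNION ALL
   SELECT Region FROM Sales

Base dataset:
   SELECT Region, Amount
   FROM Sales
   WHERE (Region = @Parameter) OR (@Parameter = '0')

Because the 0 row comes first in DataSet2, it becomes the default value, so the report initially runs with @Parameter = '0'; the second condition is then true for every row and the complete data is displayed.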
Regards,
Bin Long

Similar Messages

  • OIM Initial Data Load - Suggestion Please

    Hi,
    I have the following scenario :
    1. My client currently has 2 systems: 1. AD and 2. an HR application.
    2. The HR application has only employee information (it contains fields like first name, last name, employee number, etc.).
    3. AD has both employee and contractor information.
    4. The client wants my HR application to be the trusted source, but the IDM login ID of existing users should be the same as the AD sAMAccountName.
    I am using OIM 9.0 and the 9041 connector pack. What would be the best way to do the initial data load in this case? Thanks in advance.

    Hi,
    Can you tell me how you relate an employee in HR to the corresponding record in AD? Then I will be in a better position to explain how you can do it.
    But even without this information, the following approach will solve your requirement:
    1. Do the trusted recon from AD.
    2. samAccountName will be mapped to the user ID field of the OIM profile.
    3. Do the trusted recon with HR. The matching key should be the answer to my question above.
    4. For the trusted recon with HR, remove the action "Create User" on the No Match Found event.
    Hope this helps.
    Regards
    Nitesh

  • SAP ME 5.2 initial data load problem

    Hi
    I installed SAP ME 5.2 SP3 and now the initial data load script gives me a long error. Any ideas?
    Enter password for admin user:
    Confirm password for admin user:
    Error while processing XML message <?xml version="1.0" encoding="UTF-8"?>
    <DX_RESPONSE>
    <CONCLUSION success='no'/>
    <IMPORT_RESPONSE>
    <TOTALS failed='2' total='4' invalid='0' succeeded='2'/>
    <LOG>
            1 records skipped; update is not allowed
            (... 36 more identical "1 records skipped" lines ...)
            ALARM_TYPE_CONFIG_BO processed
            Database commit occurred
            1 records skipped; update is not allowed
            (... 29 more identical "1 records skipped" lines ...)
            ALARM_CLEAR_CONFIG_BO processed
            Database commit occurred
            ALARM_PROP_BO failed (SITE:*, PROP_ID:1)
            Exception occurred: (Failed in component: sap.com/me~ear) Exception raised from invocation of public void com.sap.me.frame.BasicBOBean.DXImportCreate(com.sap.me.frame.Data,com.sap.me.frame.Data,boolean) throws com.sap.me.frame.BasicBOBeanException method on bean instance com.sap.me.alarm.AlarmPropBOBean@143f363e for bean sap.com/me~ear*xml|me.alarm.ejb-5.2.3.1-Base.jar*xml|AlarmPropBO in application sap.com/me~ear.;
            nested exception is: javax.ejb.EJBException: (Failed in component: sap.com/me~ear) Exception raised from invocation of public void com.sap.me.dbsequence.DBSequenceBOBean.adjustSequence(java.lang.String,java.lang.String,long) throws com.sap.me.frame.BasicBOBeanException method on bean instance com.sap.me.dbsequence.DBSequenceBOBean@712676c3 for bean sap.com/me~ear*xml|me.dbsequence.ejb-5.2.3.1-Base.jar*xml|DBSequenceBO in application sap.com/me~ear.;
            nested exception is: javax.ejb.EJBException: nested exception is: com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'TABLE'.;
            nested exception is: javax.ejb.EJBException: (Failed in component: sap.com/me~ear) Exception raised from invocation of public void com.sap.me.frame.BasicBOBean.DXImportCreate(com.sap.me.frame.Data,com.sap.me.frame.Data,boolean) throws com.sap.me.frame.BasicBOBeanException method on bean instance com.sap.me.alarm.AlarmPropBOBean@143f363e for bean sap.com/me~ear*xml|me.alarm.ejb-5.2.3.1-Base.jar*xml|AlarmPropBO in application sap.com/me~ear.;
            nested exception is: javax.ejb.EJBException: (Failed in component: sap.com/me~ear) Exception raised from invocation of public void com.sap.me.dbsequence.DBSequenceBOBean.adjustSequence(java.lang.String,java.lang.String,long) throws com.sap.me.frame.BasicBOBeanException method on bean instance com.sap.me.dbsequence.DBSequenceBOBean@712676c3 for bean sap.com/me~ear*xml|me.dbsequence.ejb-5.2.3.1-Base.jar*xml|DBSequenceBO in application sap.com/me~ear.;
            nested exception is: javax.ejb.EJBException: nested exception is: com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'TABLE'.
            Database rollback occurred
            ALARM_FILTER_BO failed (SITE:*, FILTER_ID:11)
            Exception occurred: (Failed in component: sap.com/me~ear) Exception raised from invocation of public void com.sap.me.frame.BasicBOBean.DXImportCreate(com.sap.me.frame.Data,com.sap.me.frame.Data,boolean) throws com.sap.me.frame.BasicBOBeanException method on bean instance com.sap.me.alarm.AlarmFilterBOBean@4e101e8b for bean sap.com/me~ear*xml|me.alarm.ejb-5.2.3.1-Base.jar*xml|AlarmFilterBO in application sap.com/me~ear.;
            nested exception is: javax.ejb.EJBException: (Failed in component: sap.com/me~ear) Exception raised from invocation of public void com.sap.me.dbsequence.DBSequenceBOBean.adjustSequence(java.lang.String,java.lang.String,long) throws com.sap.me.frame.BasicBOBeanException method on bean instance com.sap.me.dbsequence.DBSequenceBOBean@75bb17af for bean sap.com/me~ear*xml|me.dbsequence.ejb-5.2.3.1-Base.jar*xml|DBSequenceBO in application sap.com/me~ear.;
            nested exception is: javax.ejb.EJBException: nested exception is: com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'TABLE'.;
            nested exception is: javax.ejb.EJBException: (Failed in component: sap.com/me~ear) Exception raised from invocation of public void com.sap.me.frame.BasicBOBean.DXImportCreate(com.sap.me.frame.Data,com.sap.me.frame.Data,boolean) throws com.sap.me.frame.BasicBOBeanException method on bean instance com.sap.me.alarm.AlarmFilterBOBean@4e101e8b for bean sap.com/me~ear*xml|me.alarm.ejb-5.2.3.1-Base.jar*xml|AlarmFilterBO in application sap.com/me~ear.;
            nested exception is: javax.ejb.EJBException: (Failed in component: sap.com/me~ear) Exception raised from invocation of public void com.sap.me.dbsequence.DBSequenceBOBean.adjustSequence(java.lang.String,java.lang.String,long) throws com.sap.me.frame.BasicBOBeanException method on bean instance com.sap.me.dbsequence.DBSequenceBOBean@75bb17af for bean sap.com/me~ear*xml|me.dbsequence.ejb-5.2.3.1-Base.jar*xml|DBSequenceBO in application sap.com/me~ear.;
            nested exception is: javax.ejb.EJBException: nested exception is: com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'TABLE'.
            Database rollback occurred
            Database commit occurred
    </LOG>
    </IMPORT_RESPONSE>
    </DX_RESPONSE> (Message 12943)
    Press any key to continue . . .
    Jennsen

    Hello,
    I think SAP ME is trying to talk to the DB using Oracle syntax while you have MS SQL Server. This can be caused by an improper -Ddb.vendor setting. Please refer to the Installation Guide for SAP ME 5.2, section 4.1, step 12 (or near there). There should be something like this:
    12. Expand the template node and choose the instance node located in the navigation tree. Choose the
    VM Parameters and Additional tabs containing JVM settings and add the following settings:
    -Dsce.home=set to the AppServer directory specified during installation
    -Ddb.vendor=ORACLE | SQLSERVER
    -Dvm.bkg.processor=true
    Check these settings in your installation; maybe that's the cause.
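    As a purely hypothetical illustration (these statements are assumed for illustration and are not taken from SAP ME's code): the DBSequenceBOBean in the stack trace maintains sequence numbers, and sequence handling is vendor-specific SQL, so statements generated for one vendor will not parse on the other. For example:
    -- Oracle dialect: sequences are first-class database objects
    CREATE SEQUENCE alarm_seq START WITH 100;
    SELECT alarm_seq.NEXTVAL FROM dual;
    -- SQL Server of that generation has no CREATE SEQUENCE; sequence values
    -- are typically emulated with a table that is locked and updated, which
    -- uses an entirely different syntax.
    If -Ddb.vendor selects the Oracle code path while the database is actually MS SQL, the generated SQL fails to parse, which would explain the "Incorrect syntax near the keyword 'TABLE'" in your log.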
    Anton

  • 2lis_03_um initial data load

    Dear Consultants,
    Our R/3 system is very busy, so I don't want to stop it to extract the stock data. Is there any way to do this? I mean, can I extract the initial data while the R/3 system is running? I am worried about losing data.
    Best regards

    Hi,
    In case of the V3 update or Queued Delta update methods, we have to lock the users only while we are filling the setup tables. But in case of the Direct Delta update mode, we need to lock the R/3 users until we complete both filling the setup tables and the delta init.
    For more information on all these methods, take a look at note 505700:
    1. Why should I no longer use the "Serialized V3 update" as of PI 2002.1 or PI-A 2002.1?
    The following restrictions and problems with the "serialized V3 update" resulted in alternative update methods being provided with the new plug-in and the "serialized V3 update" update method no longer being provided with an offset of 2 plug-in releases.
    a) If, in an R/3 System, several users who are logged on in different languages enter documents in an application, the V3 collective run only ever processes the update entries for one language during a process call. Subsequently, a process call is automatically started for the update entries from the documents that were entered in the next language. During the serialized V3 update, only update entries that were generated in direct chronological order and with the same logon language can therefore be processed. If the language in the sequence of the update entries changes, the V3 collective update process is terminated and then restarted with the new language.
    For every restart, the VBHDR update table is read sequentially on the database. If the update tables contain a very high number of entries, it may require so much time to process the update data that more new update records are simultaneously generated than the number of records being processed.
    b) The serialized V3 update can only guarantee the correct sequence of extraction data in a document if the document has not been changed twice in one second.
    c) Furthermore, the serialized V3 update can only ensure that the extraction data of a document is in the correct sequence if the times have been synchronized exactly on all system instances, since the time of the update record (which is determined using the locale time of the application server) is used in sorting the update data.
    d) In addition, the serialized V3 update can only ensure that the extraction data of a document is in the correct sequence if an error did not occur beforehand in the U2 update, since the V3 update only processes update data, for which the U2 update is successfully processed.
    2. New "direct delta" update method:
    With this update mode, extraction data is transferred directly to the BW delta queues every time a document is posted. In this way, each document posted with delta extraction is converted to exactly one LUW in the related BW delta queues.
    If you are using this method, there is no need to schedule a job at regular intervals to transfer the data to the BW delta queues. On the other hand, the number of LUWs per DataSource increases significantly in the BW delta queues because the deltas of many documents are not summarized into one LUW in the BW delta queues as was previously the case for the V3 update.
    If you are using this update mode, note that you cannot post any documents during delta initialization in an application from the start of the recompilation run in the OLTP until all delta init requests have been successfully updated in BW. Otherwise, data from documents posted in the meantime is irretrievably lost.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) A maximum of 10,000 document changes (creating, changing or deleting documents) are accrued between two delta extractions for the application in question. A (considerably) larger number of LUWs in the BW delta queue can result in terminations during extraction.
    b) With a future delta initialization, you can ensure that no documents are posted from the start of the recompilation run in R/3 until all delta-init requests have been successfully posted. This applies particularly if, for example, you want to include more organizational units such as another plant or sales organization in the extraction. Stopping the posting of documents always applies to the entire client.
    3. The new "queued delta" update method:
    With this update mode, the extraction data for the affected application is compiled in an extraction queue (instead of in the update data) and can be transferred to the BW delta queues by an update collective run, as previously executed during the V3 update.
    Up to 10,000 delta extractions of documents to an LUW in the BW delta queues are cumulated in this way per DataSource, depending on the application.
    If you use this method, it is also necessary to schedule a job to regularly transfer the data to the BW delta queues ("update collective run"). However, you should note that reports delivered using the logistics extract structures Customizing cockpit are used during this scheduling. There is no point in scheduling with the RSM13005 report for this update method since this report only processes V3 update entries. The simplest way to perform scheduling is via the "Job control" function in the logistics extract structures Customizing Cockpit. We recommend that you schedule the job hourly during normal operation - that is, after successful delta initialization.
    In the case of a delta initialization, the document postings of the affected application can be included again after successful execution of the recompilation run in the OLTP, provided that you make sure that the update collective run is not started before all delta Init requests have been successfully updated in the BW.
    In the posting-free phase during the recompilation run in OLTP, you should execute the update collective run once (as before) to make sure that there are no old delta extraction data remaining in the extraction queues when you resume posting of documents.
    If you want to use the functions of the logistics extract structures Customizing cockpit to make changes to the extract structures of an application (for which you selected this update method), you should make absolutely sure that there is no data in the extraction queue before executing these changes in the affected systems. This applies in particular to the transfer of changes to a production system. You can perform a check when the V3 update is already in use in the respective target system using the RMCSBWCC check report.
    In the following cases, the extraction queues should never contain any data:
    - Importing an R/3 Support Package
    - Performing an R/3 upgrade
    - Importing a plug-in Support Packages
    - Executing a plug-in upgrade
    For an overview of the data of all extraction queues of the logistics extract structures Customizing Cockpit, use transaction LBWQ. You may also obtain this overview via the "Log queue overview" function in the logistics extract structures Customizing cockpit. Only the extraction queues that currently contain extraction data are displayed in this case.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) More than 10,000 document changes (creating, changing or deleting documents) are performed each day for the application in question.
    b) In future delta initializations, you must reduce the posting-free phase to executing the recompilation run in R/3. The document postings should be included again when the delta Init requests are posted in BW. Of course, the conditions described above for the update collective run must be taken into account.
    4. The new "unserialized V3 update" update method:
    With this update mode, the extraction data of the application in question continues to be written to the update tables using a V3 update module and is retained there until the data is read and processed by a collective update run.
    However, unlike the current default values (serialized V3 update), the data is read in the update collective run (without taking the sequence from the update tables into account) and then transferred to the BW delta queues.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method since serialized data transfer is never the aim of this update method. However, you should note the following limitation of this update method:
    The extraction data of a document posting, where update terminations occurred in the V2 update, can only be processed by the V3 update when the V2 update has been successfully posted.
    This update method is recommended for the following general criteria:
    a) Due to the design of the data targets in BW and for the particular application in question, it is irrelevant whether or not the extraction data is transferred to BW in exactly the same sequence in which the data was generated in R/3.
    5. Other essential points to consider:
    a) If you want to select a new update method in application 02 (Purchasing), it is IMPERATIVE that you implement note 500736. Otherwise, even if you have selected another update method, the data will still be written to the V3 update. The update data can then no longer be processed using the RMBV302 report.
    b) If you want to select a new update method in application 03 (Inventory Management), it is IMPERATIVE that you implement note 486784. Otherwise, even if you have selected another update method, the data will still be written to the V3 update. The update data can then no longer be processed using the RMBWV303 report.
    c) If you want to select a new update method in application 04 (Production Planning and Control), it is IMPERATIVE that you implement note 491382. Otherwise, even if you have selected another update method, the data will still be written to the V3 update. The update data can then no longer be processed using the RMBWV304 report.
    d) If you want to select a new update method in application 45 (Agency Business), it is IMPERATIVE that you implement note 507357. Otherwise, even if you have selected another update method, the data will still be written to the V3 update. The update data can then no longer be processed using the RMBWV345 report.
    e) If you want to change the update method of an application to "queued delta", we urgently recommended that you install the latest qRFC version. In this case, you must refer to note 438015.
    f) If you use the new selection function in the logistics extract structures Customizing Cockpit in an application to change from the "Serialized V3" update method to the "direct delta" or "queued delta", you must make sure that there are no pending V3 updates for the applications concerned before importing the relevant correction in all affected systems. To check this, use transaction SA38 to run the RMCSBWCC report with one of the following extract structures in the relevant systems:
        Application 02:   MC02M_0HDR
        Application 03:   MC03BF0
        Application 04:   MC04PE0ARB
        Application 05:   MC05Q00ACT
        Application 08:   MC08TR0FKP
        Application 11:   MC11VA0HDR
        Application 12:   MC12VC0HDR
        Application 13:   MC13VD0HDR
        Application 17:   MC17I00ACT
        Application 18:   MC18I00ACT
        Application 40:   MC40RP0REV
        Application 43:   MC43RK0CAS
        Application 44:   MC44RB0REC
        Application 45:   MC45W_0HDR.
    You can switch the update method only if this report does not return any information on open V3 updates. Of course, you must not post any documents in the affected application after checking with the report and until you import the relevant Customizing request. This procedure applies in particular to importing the relevant Customizing request into a production system.
    Otherwise, the pending V3 updates are no longer processed. This processing is still feasible even after you import the Customizing request using the RSM13005 report. However, in this case, you can be sure that the serialization of data in the BW delta queues has not been preserved.
    g) Early delta initialization in the logistics extraction:
    As of PI 2002.1 and BW Release 3.0B, you can use the early delta initialization to perform the delta initialization for selected DataSources.
    Only the DataSources of applications 11, 12 and 13 support this procedure in the logistics extraction for PI 2002.1.
    The early delta initialization is used to admit document postings in the OLTP system as early as possible during the initialization procedure. If an early delta initialization InfoPackage was started in BW, data may be written immediately to the delta queue of the central delta management.
    When you use the "direct delta" update method in the logistics extraction, you do not have to wait for the successful conclusion of all delta Init requests in order to readmit document postings in the OLTP if you are using an early delta initialization.
    When you use the "queued delta" update method, early delta initialization is essentially of no advantage because here, as with conventional initialization procedures, you can readmit document postings after successfully filling the setup tables, provided that you ensure that no data is transferred from the extraction queue to the delta queue, that is, an update collective run is not triggered until all of the delta init requests have been successfully posted.
    Regardless of the initialization procedure or update method selected, it is nevertheless necessary to stop any document postings for the affected application during a recompilation run in the OLTP (filling the setup tables).
    With rgds,
    Anil Kumar Sharma .P

  • Best practices for initial data loads to MDM

    Hi,
    We need to load more than 300000 vendors from SAP into the MDM production repository. The import server might take days to load that much data if no errors occur.
    Are there any best practices available for initial loads to MDM? What considerations must be made while doing the initial loads?
    Harsha

    Hello Harsha
    With SP05 Patch 1 there is a file aggregation functionality in the import port. It is supposed to optimize import performance.
    BTW, give me your mail address and I will send you an IDoc packaging paper for MDM.
    Regards,
    Goekhan

  • Need to help in initial data loading from ISU into CRM

    One of our clients has the following requirement:
    All ISU data applicable to CRM (BP, BA, the appropriate technical data, contracts, products, product configuration and their corresponding price keys and price amounts) should be loaded into CRM as part of the initial load.
    There is the report ECRM_GENERATE_EVERH in ISU, but its documentation is not available.
    Is there any provision such as a report, RFC or function module in SAP?
    I would appreciate a quick reply from you all with a positive and appropriate solution. mailto [email protected]

    Got my answer. We can clear the data using MDX.

  • Handling Unit - Initial data load/Stock transfer (cutover activity)

    Hello,
    I am working on an SAP WM project with HU management. I need help with inventory transfer for HU-managed material. I want to transfer all my HUs from SAP (old client) to SAP (new client). What are the steps I have to follow to do that?
    I am doing an HU inventory load for the first time. Can anyone help?
    Smit.

    Follow the steps from OSS note 368460, which describes how to use the RHUWM_INITIAL_LOAD report to perform the initial load on an HU-managed warehouse. This report does everything: the 561 movement in MM, creating an inbound delivery/TO for putaway, and it can confirm it.
    Inside that note there is also a report that creates a sample load file, so you can get an idea of how to fill it.

  • Supplier / Vendor Initial Data Load

    Hello All,
    1) Has anyone used the following approach for initial supplier/vendor data in the MDG hub?
    Installing, Configuring, and Using the Excel Upload for Business Partner
    Please let us know the pros and cons of this approach.
    Thanks in Advance.

    Hi Joseph,
    I have used that approach using the DIF framework and the data was successfully uploaded into MDG.
    There are a lot of advantages to this approach: data uploading is fast with a governance process, you can use parallel processing, and you can even schedule your job.
    But the one problem which I faced in doing this is:
    1. If you are using an internal number range and you want to create a vendor with multiple company codes, multiple purchase orgs, or anything multiple for a single vendor, you cannot do that; first you have to create the vendor with a single company code and then, after activation, extend it to another company code.
    2. If you are using an external number range then the above issue will not occur and you will be able to create a vendor with multiple company codes in one go.
    Regards,
    Sudhir Wakodikar

  • Preparing the initial data load

    Hello again,
    what is the best approach to delete all the test data in our SoD environment? Batch delete or via web services? Any best practices, experiences and suggestions are welcome...
    Thanks in advance
    Michael

    Hi Michael,
    Batch delete is good enough, but note that it is not available for custom objects
    -- Venky CRMIT

  • Can't Delete Data Load Request

    Hi All,
    During the initial data load there was an error, and the data load in Manage ODS is shown as RED; also, 12 records are shown as uploaded, which can be seen in the New Data tab. This request is also shown as available for reporting, but with a RED light. Now we want to delete this request but we can't, and the reason is beyond our knowledge. Can someone help with how to delete this request, so that we can load the data again? Below is what we could get:
    Message type__________ E (Error)
    Message class_________ RSMPC (Messages for Process Chains of Generic Processes)
    We didn't use a process chain to upload the initial data.
    Thanks & Regards
    Vikas

    Hi,
    Go to the info package
    select the "Scheduler" menu
    Delete the initial upload data and try the initial load again
    regards
    Happy Tony

  • XEM - Unable to load the initial data or the variances(delta) data into sys

    I am installing xEM 2.0 SP 10 (SAP xApp Emissions Management) in a Windows environment with SQL 5000. I installed xEM on NW 2004, usage types AS Java and EP 6.
    I am attempting to load the initial data or the variances (delta) data into the system. The instructions are on page 15 of the install guide.
    I am supposed to enter the following in the command line:
    java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=[JDBC Driver];[JDBCUrl];[User];[Password]
    Example command for import into SQL Server:
    java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver; jdbc:datadirect:sqlserver://vma03:1433;SAPC11DB;password
    The customer I am with is running the xEM database on a different instance. This is where I run into a problem: I am not sure how to specify the instance in the script. This is what I have attempted so far:
    C:\>cd temp\load
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43:SQL3:1534;SAPPEMDB;password
    java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver://PRODSQL43:SQL3:1534 as user SAPPEMDB (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.): [DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.
            at com.sap.sdm.util.dbaccess.DBTask.dbImport(DBTask.java:356)
            at com.sap.sdm.util.dbaccess.SapTransTask.perform_import(SapTransTask.java:293)
            at com.sap.sdm.util.dbaccess.SapTransTask.execute(SapTransTask.java:51)
            at com.sap.sdm.util.dbaccess.SapTrans.main(SapTrans.java:21)
    import aborted with java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver://PRODSQL43:SQL3:1534 as user SAPPEMDB (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.): [DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43;SQL3:1534;SAPPEMDB;password
    java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver://PRODSQL43 as user SQL3:1534 (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Error establishing socket. Connection refused: connect): [DataDirect][SQLServer JDBC Driver]Error establishing socket. Connection refused: connect
            at com.sap.sdm.util.dbaccess.DBTask.dbImport(DBTask.java:356)
            at com.sap.sdm.util.dbaccess.SapTransTask.perform_import(SapTransTask.java:293)
            at com.sap.sdm.util.dbaccess.SapTransTask.execute(SapTransTask.java:51)
            at com.sap.sdm.util.dbaccess.SapTrans.main(SapTrans.java:21)
    import aborted with java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver://PRODSQL43 as user SQL3:1534 (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Error establishing socket. Connection refused: connect): [DataDirect][SQLServer JDBC Driver]Error establishing socket. Connection refused: connect
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43:1534;SQL3;SAPPEMDB;password
    java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver://PRODSQL43:1534 as user SQL3 (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver][SQLServer]Login failed for user 'SQL3'.): [DataDirect][SQLServer JDBC Driver][SQLServer]Login failed for user 'SQL3'.
            at com.sap.sdm.util.dbaccess.DBTask.dbImport(DBTask.java:356)
            at com.sap.sdm.util.dbaccess.SapTransTask.perform_import(SapTransTask.java:293)
            at com.sap.sdm.util.dbaccess.SapTransTask.execute(SapTransTask.java:51)
            at com.sap.sdm.util.dbaccess.SapTrans.main(SapTrans.java:21)
    import aborted with java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver://PRODSQL43:1534 as user SQL3 (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver][SQLServer]Login failed for user 'SQL3'.): [DataDirect][SQLServer JDBC Driver][SQLServer]Login failed for user 'SQL3'.
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43:1534:SQL3;SAPPEMDB;password
    java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver://PRODSQL43:1534:SQL3 as user SAPPEMDB (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.): [DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.
            at com.sap.sdm.util.dbaccess.DBTask.dbImport(DBTask.java:356)
            at com.sap.sdm.util.dbaccess.SapTransTask.perform_import(SapTransTask.java:293)
            at com.sap.sdm.util.dbaccess.SapTransTask.execute(SapTransTask.java:51)
            at com.sap.sdm.util.dbaccess.SapTrans.main(SapTrans.java:21)
    import aborted with java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver://PRODSQL43:1534:SQL3 as user SAPPEMDB (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.): [DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.
    C:\Temp\load>
    My attempts used commands with various combinations of colons and semicolons, with the results shown above. The closest (there was a significant delay before the error or failure) appears to have been //PRODSQL43;SQL3:1534; (second attempt). The error listed for this attempt is "Error establishing socket. Connection refused: connect".
    I also checked the default database that user SAPPEMDB has in place, and it is assigned the correct database.
    Please help.
    Mike Smayrabunya

    Hey,
    It looks like one of the following:
    1. The DB is down.
    2. The user SAPPEMDB does not have the right authorization.
    3. The password of the user SAPPEMDB is not "password".
    4. The syntax is incorrect.
    In order to find out what the problem is, please:
    1. Log in to the DB PRODSQL43:1534 with the user "SAPPEMDB" and the password "password";
    this will eliminate option 1 (DB down), option 2 (SAPPEMDB does not have authorization) and option 3 (the password of the user SAPPEMDB is not "password").
    2. If the login fails, then please run an SQL trace with security elements (on the client there is a tool called "SQL Profiler").
    3. If the login is correct, then check the syntax of the command:
    "java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver; jdbc:datadirect:sqlserver://vma03:1433;SAPC11DB;password"
    According to the error message "Error establishing socket. Connection refused",
    it looks like the DB is down or the syntax is incorrect.
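    One more hedged suggestion, based only on the errors above (the exact DataDirect URL grammar is not verified here, so treat it as an assumption): the connectstring is evidently parsed as driver;url;user;password, which is why the third attempt read "SQL3" as the user name. If 1534 is really the TCP port on which the SQL3 named instance listens, the instance name may not need to appear in the URL at all:
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43:1534;SAPPEMDB;password
    If that still fails with "Connection refused", verify on the server which port the SQL3 instance actually listens on and that TCP/IP connections are enabled for it.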

  • 0distr_chan initial master data load, text missing

    Hey: I am trying to do an initial master data load for 0distr_chan text. The load is not successful. In the monitor's detailed error message, it shows a red alert with a "transfer rules missing" message and an "update missing" message.
    The 0distr_chan attribute load is OK.
    I am also having problems when loading 0salesorg text. It fails as well.
    Can anyone give me a clue?
    Thanks!

    Hi, thank you for your answer. I am new to this system, so please give me more guidance.
    To check the DataSource on R/3, I use transaction RSA6; underneath the SD-IO master data, I found 0DISTR_CHAN_TEXT with a green text icon on the right. Is this a valid way to verify that the DataSource for 0DISTR_CHAN_TEXT has been activated OK on R/3? I am also not sure if there is any way for me to check whether the transfer rules for the DataSource have been activated OK on R/3.
    thanks!

  • Change language after load initial data (ME 5.2)

    Hello,
    we have an ME system installed with German language settings in the NetWeaver Config Tool. We would like to change this to English. I found the following remark in the installation guide:
    The -Duser.country and -Duser.language values must be set to the single, supported language for the SAP ME installation and cannot be modified after initial data has been loaded in SAP ME in the Loading Initial Data section below. For more information, see your SAP ME consultant.
    Can you tell the tables which need to be changed?
    Regards,
    Kai

    Hi,
    You can change the language at the NetWeaver level; the SAP Basis people can do that setting.
    Thanks,
    Ramesh

  • Initial full load of Master data using process chain

    Hi All,
    Could you please help me with the initial master data load to characteristics with attributes and texts? I need to load master data to 23 InfoObjects; by using a process chain, can I do a full load of master data to all InfoObjects at once? And one more doubt: as far as I know, we can't maintain more than one variant in an InfoPackage. Is that right, or can we?
    I mean: Start Variant -> InfoPackage (0Customer_Text, 0Customer_Attr, 0BILL_TYPE_TEXT, BILL_CAT_TEXT) -> DTP (for the same objects) -> ACR.
    Your Help will be appreciated.
    Thanks & Regards
    Sunil

    Hi,
    "I need to load master data to 23 info objects, by using process chain can I do full load of master data to all info objects at a time."
    if there is no dependency between the attributes, then you can create process chains and trigger them at the same time. No issues.
    we can't maintain more than one variant in an info package, is that right ? or we can ?
    With one InfoPackage you can't load data to all 23 PSAs, because each DataSource has its own PSA. You need to use 23 InfoPackages.
    In general: start variant --> InfoPackage --> DTP (assuming your BW is 7.x) --> attribute change run.
    Like that, you would need to create 23 chains,
    or create two big chains:
    one for the attributes and another for the texts.
    In the attribute chain:
    start variant --> InfoPackage (InfoObject 1) --> DTP (InfoObject 1) --> InfoPackage (InfoObject 2) --> DTP (InfoObject 2).
    In that way you can create series and parallel chains to load attribute data into the InfoObjects. At the end, add a change run for 6 InfoObjects each. The same can be done for the text loads.
    Thanks

  • How to load initial data into MI7.1?

    I got a requirement from my boss to load the initial data into MI 7.1.
    I have no idea how to do it.
    Could anybody kindly give some links or documents on this?
    I will appreciate it with points. Thanks!

    Hi,
    Initial load is actually for data objects (DOs) which are under a software component version (SWCV).
    You can find all the software component versions available in your system by executing transaction SDOE_WB and doing a search based on *.
    Then find out the correct SWCV under which your actual data objects exist.
    Now, to execute the initial load, execute transaction SDOE_LOAD in your system,
    then first choose the proper SWCV, then a data object under it, and then execute it.
    You can monitor the initial load in transaction SMQ2, where there will be a queue entry running for every data object, which fetches data from the backend to the DOE middleware.
    Hope it helps.
    Best Regards,
    Siva.
