Loading from XI to BW

Hi,
Can anyone guide me on how to load data from XI to BW? I just need to bring in data from a table in XI that has only two fields. Is the data extraction similar to SAP R/3 extraction, or is it totally different?
Thanks,
Kal

Hi,
Specific to BW, there is a how-to guide in the Service Marketplace; you should take a look at it.
For general knowledge about integration with XI, you should look at the videos in the e-learning area. There is also some information on the XI home page.
https://www.sdn.sap.com/irj/sdn/developerareas/xi
https://www.sdn.sap.com/irj/sdn/developerareas/xi?rid=/webcontent/uuid/a680445e-0501-0010-1c94-a8c4a60619f8 [original link is broken]
Regards,
DST

Similar Messages

  • Chinese characters scrambled when loading from DS to BW

    Hi, I've been pulling my hair out with this issue.
    I have a flat file containing Chinese text. When I load this in BW using 'FLATFILE' as a source system, it works fine. BW shows the correct Chinese characters.
    When I do the same load using BODI, I get funny characters.
    When I use BODI to load from one flat file into another flat file, the Chinese characters remain correct.
    What do I need to do to make sure I get the right Chinese characters in BW when loading from BODI?
    BODI is installed on Unix, with Oracle 10 as the database.
    I run the jobs as batch processes.
    The dsconfig.txt has got:
    AL_Engine=<default>_<default>.<default>
    There are no locale settings in al_env.sh
    BW target is UTF-8 codepage.
    File codepage is BIG5-HKSCS
    BODI is set up as a Unicode system in SAP BW.
    When loading flat file to flat file, I get a message:
    DATAFLOW: The specified locale <eng_gb.iso-8859-1> has been coerced to <Unicode (UTF-16)>
    because the datastore <TWIN_FF_CUSTOMER_LOCAL> obtains data in <BIG5-HKSCS> codepage.
    JOB: Initializing transcoder for datastore <TWIN_FF_CUSTOMER_LOCAL> to transcode between
    engine codepage<Unicode (UTF-16)>  and datastore codepage <BIG5-HKSCS>
    When loading to BW the messages are almost the same, but now the last step is UTF-16 to UTF-8.
    I read the wiki post which definitely helped me to understand the rationale behind code page, but now I ran out of ideas what else to check ( http://wiki.sdn.sap.com/wiki/display/BOBJ/Multiple+Codepages )
    Any help would be greatly appreciated.
    Jan.

    Hi all. Thanks for the Inputs. This is what I got when I clicked on the Details Tab of the Monitor....
    Error when transferring data; communication error when analyzing
    Diagnosis
    Data packages or InfoPackages are missing in BI but there were no apparent processing errors in the source system. It is therefore probable that there was an error in the data transfer.
    The analysis tried to read the ALE outbox of the source system. This led to an error.
    It is possible that there is no connection to the source system.
    Procedure
    Check the TRFC overview in the source system.
    Check the connection to the source system for errors and check the authorizations and profiles of the remote user in both the BI and source systems.
    Check the ALE outbox of the source system for IDocs that have not been updated.

  • Full load from a DSO to a cube processes less records than available in DSO

    We have a scenario where every Sunday I have to make a full load from a DSO with on-hand stock information to a cube, where I register a counter at material and store level if there is stock available.
    The DTP has no filters at all and has a semantic group on 0MATERIAL and 0PLANT.
    The key in the DSO is:
    0MATERIAL
    0PLANT
    0STOCKTYPE
    0STOR_LOC
    0BOM
    of which only 0MATERIAL, 0PLANT and 0STOR_LOC are later used in the transformation.
    As we had a growing number of records, we decided to delete in the start routine all records where the inventory is not greater than zero, thus eliminating zero and negative inventory records.
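    In a BW 7.x transformation this filter is typically a single statement in the start routine. A minimal sketch, assuming a hypothetical key-figure field /BIC/ZSTOCK holds the inventory quantity:
    * Start routine sketch: drop all records without positive inventory.
    * /BIC/ZSTOCK is a placeholder for the actual inventory key figure.
    DELETE SOURCE_PACKAGE WHERE /bic/zstock <= 0.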
    Now comes the funny part of the story:
    Prior to these changes I would [in a test system, just copied from PROD] read some 33 million records and write out the same amount. Of course, after the change we expected to write out less. To my total surprise, I was now reading 45 million records with the same unchanged DTP, and writing out the expected smaller number.
    When checking the number of records in the DSO I found the 45 million, but I cannot explain why the earlier loads retrieved only some 33 million from the same unchanged set of records.
    When checking in PROD - same result: we have some 45 million records in the DSO, but when we do the full load from the DSO to the cube the DTP only processes some 33 million.
    What am I missing - is there compression going on? Why would the number of records in a DSO differ from the number of records processed in the DataPackages when I am making a FULL load without any filter restrictions and only a semantic grouping in place for parts of the DSO key?
    ANY idea, thought is appreciated.

    Thanks Gaurav.
    I did check whether any more loads were done in between - there were none in the test system. As I mentioned, it was a new copy from PROD to TEST, and I compared the number of entries in the DSO; that seems to be a match between TEST and PROD - OK, a few more in PROD, but they can be accounted for. In TEST I loaded the day before the changes were imported to have a comparison, and between that load and the one after the changes were imported nothing in the DSO was changed.
    Both DTPs in TEST and PW2 load from the activated DSO [without archive]. The DTPs have not been changed in quite a while - so I ruled that one out. Same with activation of data in the DSO - this DSO gets loaded and activated in PROD daily via process chain, and we load daily deltas into the cube in question. Only on Sundays, for the beginning of the new week/fiscal period, do we need to make a full load to capture all materials per site with inventory. The deltas loaded during the week are less than 1 million records, but the difference between the number of records in the DSO and the amount processed in the DataPackages is more than 10 million per full load, even in PROD.
    I really appreciate the knowledgeable answer; I just wish you had pointed out something that I missed.

  • Delta records are not loading from DSO to info cube

    My query is about delta loading from DSO to info cube (filter used in selection).
    Delta records are not loading from DSO to info cube. I have tried all options available in the DTP but no luck.
    Selected "Change Log" and "Get one request only" and ran the DTP, but 0 records got updated in the info cube.
    Selected "Change Log" and "Get all new data request by request", but again 0 records got updated.
    Selected "Change Log" and "Only get the delta once"; in that case all delta records loaded to the info cube as they were in the DSO, and it gave the error message "Lock Table Overflow".
    When I run a full load using the same filter, data loads from DSO to info cube.
    Can anyone please help me get delta records from DSO to info cube?
    Thanks,
    Shamma

    Data loads in the case of a full load with the same filter, so I don't think the filter is an issue.
    When I follow the sequence below, I get the lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same setting, if I run an init load, the final status remains yellow, and when I change the status to green manually, it gives the lock table overflow error.
    When I change the settings of the DTP to an init run:
    1. Select change log and get only one request, and run the init; it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me to resolve this issue.

  • Data load from DSO to cube

    Hi gurus
    We have a typical problem: we have to combine 2 records into one when they reach the DSO.
    One source is a flat file and the other is R/3, so I get a few fields from the flat file and a few from R/3. They create one record when they are loaded to the DSO. Now I get one record in the active data table, but when I load the data from that DSO to the cube (data goes from the change log to the cube), I get 2 separate records in the cube (one loaded from the flat file and one loaded from R/3), which I don't want. I want only one record in the cube, just like I have in my active data table in the DSO.
    I can't get the data from the active data table because I need a delta load.
    Would you please advise what I can do to get that one record in the cube?
    Please help.

    Ravi
    I am sending the data through DTP only, but is there any solution to get one record? Because in another scenario I am getting data from 2 different ERP sources and getting one record in the DSO and the cube as well.
    But that is not happening for this second scenario, where I am getting data from the flat file and ERP and trying to create one record.

  • Data load from DSO to cube fails

    Hi Gurus,
    The data loads failed last night, and when I dig inside the process chains I find that the cube which should be loaded from the DSO is not getting loaded.
    The DSO has been loaded without errors from ECC.
    The error message says "The unit/currency 'source currency 0CURRENCY' with the value 'space' is assigned to the key figure 'source key figure ZARAMT'".
    I looked in the PSA; it has about 50,000 records,
    all data packages have a green light, and all amounts have 0CURRENCY assigned.
    I went into the DTP and looked at the error stack; it had nothing in it. Then I changed the error handling option from 'no update, no reporting' to 'valid records update, no reporting (request red)' and executed; the error stack then showed 101 records.
    The ZARAMT field has 0CURRENCY blank for all these records.
    I tried to assign USD to them and the changes were saved, and I tried to execute again, but then the message says that the request ID should be repaired or deleted before the execution. I tried to repair it - it says it cannot be repaired - so I deleted it and executed. It fails, and the error stack still shows 101 records. When I look at the records, the changes I made no longer exist.
    If I delete the request ID before changing them and try to save the changes, then they don't get saved.
    What should I do to resolve the issue?
    thanks
    Prasad

    Hi Prasad....
    The error stack is request specific. Once you delete the request from the target, the data in the error stack will also get deleted.
    Actually, in this case what you are supposed to do is:
    1) Change the error handling option to 'valid records update, no reporting (request red)' (as you have already done) and execute the DTP; all the erroneous records will get accumulated in the error stack.
    2) Then correct the erroneous records in the error stack.
    3) Then in the DTP, on the "Update" tab, you will get the option "Error DTP". If it is not already created, you will get the option "Create Error DTP". Click there and execute the error DTP. The error DTP will fetch the records from the error stack and will create a new request in the target.
    4) Then manually change the status of the original request to green.
    But did you check why the value of this field is blank? If these records come again as part of a delta or full load, your load will fail again. Check in the source system and fix it, for a permanent solution.
    Regards,
    Debjani......

  • Data loading from DSO to Cube

    Hi,
    I have a question,
    In book TBW10 i read about the data load from DSO to InfoCube
    " We feed the change log data to the InfoCube, 10, -10, and 30 add to the correct 30 value"
    My question is: the cube already has the value 10; if we are sending 10, -10 and 30 (delta), shouldn't the total be 40 instead of 30?
    Can someone please explain this to me?
    Thanks

    No, it will not be 40.
    It will be 30 only.
    Since the cube already has 10, the before image will nullify it by sending -10, and then the correct value in the after image will be added as 30.
    So it will be: 10 - 10 + 30 = 30.
    Thank-You.
    Regards,
    Vinod

  • Data load from DSO to Cube in BI7?

    Hi All,
    We just migrated a dataflow from 3.5 to 7 in development and moved it to production. So until now in production, the data loads happened using the InfoPackages:
    a. InfoPackage1 from DataSource to ODS and
    b. InfoPackage2 from ODS to the cube.
    Now, after we transported the migrated dataflow to production, to load the same InfoProviders I use:
    1. InfoPackage to load the PSA.
    2. DTP1 to load from PSA to DSO.
    3. DTP2 to load from DSO to cube.
    Step 1 and step 2 work fine, but when I run DTP2 it gets terminated. However, when I tried step b (above), it loads the cube fine using the InfoPackage. So I am unable to understand why the DTP failed and why the InfoPackage load is successful. In order to use the DTP, do we need to do any cleanup when using it for the first time? Please let me know if you have any suggestions.
    Please note that the DSO already has data loaded using an InfoPackage. (Is this causing the problem?)
    Thanks,
    Sirish.

    Hi Naveen,
    Thanks for the Reply. The creation of DTP is not possible without a transformation.
    The transformation has been moved to production successfully.

  • Invoking SQL*Loader from a stored procedure

    I am trying to invoke SQL*Loader from within a database package by using an external C procedure (the procedure calls the system() C function), but the loader generates the following error in its log file:
    SQL*Loader-523: error -2 writing to file (STDERR)
    and no data is loaded.
    I have tried to use system() from within database procedures to execute OS commands and it works. Does anyone know what the problem is with using system() to execute "sqlldr <parameters>"? Is there some other way to call the loader from within a stored PL/SQL procedure?
    Thank you very much for your help.
    Aneta Valova

    Hi
    What is your task, and why are you trying to invoke SQL*Loader from a stored procedure or package? Maybe redirecting stderr will resolve your problem (the SQL*Loader-523 message indicates the loader cannot write to its standard error stream, which is typically not attached when the process is spawned from an external procedure), but think about whether this is the best way to do your job.
    I am not sure that invoking other executables from the Oracle instance is a good idea.
    Regards

  • Partial data loading from dso to cube

    Dear All,
    I am loading data from a DSO to a cube through a DTP. The problem is that some records (around 50 crore) get added to the cube through data packet 1, while through data packet 2 no records get added.
    It is a full load from DSO to cube.
    I have tried deleting the request and executing the DTP again, but the same number of records gets added to the cube through data packet 1, and after that no records are added through data packet 2; the request remains in yellow state only.
    Please suggest.

    Nidhuk,
    Data loads transfer package by package. Your story sounds like it got stuck in the second package or something. I suggest you check the package size and try increasing it to a higher number to see if anything changes. 50 records per package is kind of low; your load should not be spread out into too many packages.
    Regards. Jen

  • Problem during  Data Warehouse Loading (from staging table to Cube)

    Hi All,
    I have created a staging module in OWB to load my flat files into my staging tables. I have created a warehouse module to load my staging tables into the dimension and cube that I have created.
    My scenario:
    I have a temp_table_transaction which has been loaded from my flat files. This table was loaded with 168,271,269 records through the flat file.
    I have created a mapping in OWB which takes temp_table_transaction, joins it with other tables, applies some expressions and convert functions, and fills the results into a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this configuration:
    Default operating mode in the runtime parameters of the mapping configuration = set based.
    My dimensions filled correctly, but I have two problems when I want to transfer my staging table to my cube:
    #1 Problem:
    I have created a cube called transaction_cube with OWB, and it generated and deployed correctly.
    I have created a map to fill my cube with the 168,271,268 records in the staging table stg_tbl_transaction and deployed it to the server (my cube map's operating mode is set based),
    but this map did not complete after 9 hours and I was forced to cancel it by killing its sessions. I want to know whether this amount of time for loading this volume of data is acceptable, or whether we should expect to spend more time. Please let me know if anybody has any insight.
    #2 Problem
    To test my map, I created a map configured set based in operating mode, selected stg_tbl_transaction (168,271,268 records) as the source, and created another table to transfer and load my data into. I wanted to test the time we should expect for this simple map, but after 5 hours my data had not loaded into the new table. I want to know where my problem is. Should I have set something in the map configuration, or something else? Please guide me on these problems.
    CONFIGURATION OF MY SERVER:
    I run OWB on a two-socket Xeon 5500 series server with 192 GB RAM and disks in a RAID 10 array.
    Regards,
    Sahar

    For all of you:
    it is possible to load from an InfoSet to a cube - we did it, and it was OK.
    Data really are loaded from the InfoSet (cube + master data) to the cube.
    When you create a transformation under a cube, the InfoSet is proposed, and it works fine.
    Now the process is no longer operational, and I don't understand why.
    Loading from an InfoSet to a cube is possible; I can send you screenshots if you want.
    Christophe

  • Jasper report on HTML when one image loaded from database and for the other

    How do I generate a Jasper report in HTML when one image is loaded from the database and for the other we give an image path?
    My code
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    exporter = new JRHtmlExporter();
    // Report to export and the stream that receives the HTML output.
    exporter.setParameter(JRExporterParameter.JASPER_PRINT, print);
    exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, baos);
    // Where image files are written and how they are referenced in the HTML.
    exporter.setParameter(JRHtmlExporterParameter.IMAGES_URI, strImageInputDirectory);
    exporter.setParameter(JRHtmlExporterParameter.IMAGES_DIR_NAME, strImageOutputPath == null ? "." : strImageOutputPath);
    exporter.setParameter(JRHtmlExporterParameter.IS_OUTPUT_IMAGES_TO_DIR, Boolean.TRUE);
    exporter.setParameter(JRHtmlExporterParameter.IS_USING_IMAGES_TO_ALIGN, Boolean.FALSE);
    exporter.setParameter(JRHtmlExporterParameter.IS_WHITE_PAGE_BACKGROUND, Boolean.FALSE);
    exporter.exportReport();
    byte[] bdata = baos.toByteArray(); // cast to ByteArrayOutputStream was redundant
    Can anyone help, please?

    Hey, sorry for posting it in this forum.
    But do you have sample code for making it work? I am able to do it in PDF format (image from the database and another stored on the web server) using:
    // Pull the image bytes from the result map and pass them as a report parameter.
    byte[] image = (byte[]) outData.get("image");
    ByteArrayInputStream img = new ByteArrayInputStream(image);
    hmimg.put("P_PARAMV3", img);
    print = JasperFillManager.fillReport(reportFileName, hmimg, jrxmlds);
    bdata = JasperExportManager.exportReportToPdf(print);

  • My LV program can't find the iak file when the program is loaded from a different PC.

    Hello all,
    My LV VI with FieldPoint I/O and its IAK file are stored on the company network. If I load and run the LV VI from the PC it was developed on, everything is fine. If I load and run on a different PC (that has an identical install of LV), the LV VI can't find the IAK file (all of the I/O pointers to the file are grayed out).
    Also, if I make an EXE out of the VI on the machine that can find the IAK, it won't find it if it's run from the other machine.
    It is not related to whether one machine has different "rights", as they are identical.  It's not related to different installs of LV, as they are identical.   
    I have found that this is related to the network login name/password, and to the machine's network name.  If I login on the second machine and try to run the program, it won't find the IAK.  If I point it to the IAK it will run.  If I save it and log out, and then login as a different user with the same rights on the second machine, it won't find the IAK again, until I re-point it there when I re-run it.  This is true for both the vi and the EXE.
    This problem makes deploying the EXE difficult to different machines, as the first time it's run, the EXE has to be pointed to the IAK.  Pointing the EXE to the IAK is difficult as these installations have no keyboard or mouse.
    Does anyone have any ideas on how to fix this so the LV VI or its EXE finds the IAK no matter where it's loaded from?
    Thanks,
    Mike

    Laura,
    I figured I'd continue this thread because it's along the same line.
    I have two Fieldpoint systems. They are identical, except for one thing...one has an AI-110 in position 7...the other has nothing. Both have the same IP address, etc and I swap the ethernet cable between them when I want to select which one I use.  I use a dedicated ethernet card to the FP processor...and it's not the company net.
    There are two IAK files...one for the system with the AI-110, one for the system without the AI-110. The two IAKs are in different paths.
    Before I had the system and the IAK without the AI-110, I made an EXE of the code that worked with the AI-110, and it worked fine.
    But now if I run the EXE after I've used MAX and opened the IAK without the AI-110, the EXE won't read the AI-110!  I have to go back into MAX and open the IAK that has the AI-110 in it, close MAX and restart the EXE for the EXE to see the AI-110.  (Max is NOT SET to "Download Items on Save")
    It looks like the EXE is looking at MAX to get the IAK or a pointer to the IAK.
    Did I build my EXE incorrectly?  Shouldn't an EXE completely ignore MAX once it's made?
    Thanks,
    Mike

  • Error in delta loading from a planning cube

    hi all,
    I am using a delta load from one planning cube to another. When the changed records are of a magnitude of 100, the load is successful, but when the records are of a magnitude of 100,000, it stays yellow with no data records sent and eventually fails. The packet size is 20,000.
    any help is appreciated.

    Hello Ajay,
    Have you checked that your request is closed in the planning cube? The function module RSAPO_CLOSE_TRANS_REQUEST will do it.
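    A minimal sketch of that call, assuming the importing parameter is I_INFOCUBE (check the function module's interface in your release) and using a placeholder cube name:
    * Sketch: close the open (yellow) request of a transactional planning cube.
    CALL FUNCTION 'RSAPO_CLOSE_TRANS_REQUEST'
      EXPORTING
        i_infocube = 'ZPLANCUBE'.   " placeholder planning cube name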
    It is a customizing issue to tell the InfoPackage to end green when it has no data.
    Transaction SPRO ==> SAP NetWeaver ==> SAP Business Information Warehouse ==> Automated Processes ==> Monitor Settings ==> Set Traffic Light Color. There you have to set the traffic light to green if no data is available in the system.
    Hope that helps
    best regards
    Yannick

  • Data load from Legacy system to BW Server through BAPI

    Requirements: We have different kinds of legacy systems and an SAP BW server. We want to load all legacy system data into the SAP BW server using BAPIs. Before loading, we have to validate all data. If data is bad or missing, we have to let the legacy system user/operator know to fix the data in their system, with a detailed explanation. When it is fixed, we have to load the data again.
    Load scenario: We have two options to load data from legacy systems to the BW server.
    1. Load data directly from the legacy system to the BW server using a BAPI program.
    2. Legacy system data would be on workstations or a flash drive as a .txt (one line, comma-separated) or .csv file. Load from the .txt/.csv file to the BW server using a BAPI program.
    What we want in the BAPI program code?
    It will read data from the text/CSV file and put it into an internal table. The internal table structure would be based on the BAPI InfoObject structure.
    Then call the BAPI InfoObject function module BAPI_IOBJ_CREATE to create the InfoObject, include all necessary/default components, do the error check, load the data and return the status.
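    For the first step, a minimal sketch of reading a comma-separated file into an internal table; the path and the two-field structure are placeholders, not the actual InfoObject-based structure:
    * Sketch: read a comma-separated legacy file into an internal table.
    TYPES: BEGIN OF ty_rec,
             field1 TYPE string,
             field2 TYPE string,
           END OF ty_rec.
    DATA: lt_data TYPE STANDARD TABLE OF ty_rec,
          ls_data TYPE ty_rec,
          lv_line TYPE string.
    OPEN DATASET '/tmp/legacy.csv' FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    DO.
      READ DATASET '/tmp/legacy.csv' INTO lv_line.
      IF sy-subrc <> 0.
        EXIT.   " end of file reached
      ENDIF.
      SPLIT lv_line AT ',' INTO ls_data-field1 ls_data-field2.
      APPEND ls_data TO lt_data.
    ENDDO.
    CLOSE DATASET '/tmp/legacy.csv'.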
    Could someone help me with the sample code, please? I am new to ABAP/BAPI coding.
    Is there any other, better idea to load data from a legacy system to the BW server? BTW, we are using BW 3.10. Is there any better option with BI 7.0 to resolve the issue? I appreciate your help.

    My answers:
    1. This is a scenario for a data push into SAP BW. You can only use SOAP-based transfer of data.
    http://help.sap.com/saphelp_nw04/helpdata/en/fd/8012403dbedd5fe10000000a155106/frameset.htm
    (here for BW 3.5, but you'll find something similar for 7.0)
    In this scenario you'll have an RFC dynamically created for every InfoSource you need to transfer data to.
    2. You can make a process chain for each data load, and call the RFC-enabled function module RSPC_API_CHAIN_START to start the chain from outside the system.
    The second solution is simpler and available in every release.
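    A minimal sketch of that call, assuming the caller is another ABAP system (a non-ABAP legacy system would make the same RFC call through the RFC SDK or JCo); the destination and chain name are placeholders:
    * Sketch: start a BW process chain from outside via RFC.
    DATA lv_logid(25) TYPE c.   " chain run log ID (RSPC_LOGID in BW)
    CALL FUNCTION 'RSPC_API_CHAIN_START'
      DESTINATION 'BW_SYSTEM'   " placeholder RFC destination
      EXPORTING
        i_chain = 'ZLEGACY_LOAD'   " placeholder chain ID
      IMPORTING
        e_logid = lv_logid.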
    Regards,
    Sergio

  • Can we use 0INFOPROV as a selection in Load from Data Stream

    Hi,
    We have implemented BW-SEM BPS and BCS (SEM-BW - 602 and BI 7 ) in our company.
    We have two BPS cubes for Cost Center and Revenue Planning and we have Actuals Data staging cube, we use 0SEM_BCS_10 to load actuals.
    We created a MultiProvider on BPS cubes and Staging cube as a Source Data Basis for BCS.
    Issue:
    When loading plan data or actuals data into the BCS cube (0BCS_C11) using the Load from Data Stream method, we have a performance issue. We automated the load process in a process chain. Sometimes it takes about 20 hours just for the plan data load for 3 group currencies, followed by the elimination tasks.
    What I noticed is that, for example, when loading plan data the system also reads the actuals cube, which is not required, and there is no selection available in the mapping or selection tab where I can restrict the data load to a particular cube.
    I tried to add 0INFOPROV to the data basis, but then it doesn't show up as a selection option in the data collection tasks.
    Is there a way to restrict the data load into BCS using this load option, so that I can restrict which cube I read data from?
    I know that there is a filter BAdI available, but I am not sure how it works.
    Thanks !!
    Naveen Rao Kattela

    Thanks Eugene,
    We do have other characteristics, like Value Type (10 = Actual and 20 = Plan) and Version (100 = USD Actual and 200 = USD Plan), but when I load data into BCS using the Load from Data Stream method, the request goes to all the underlying cubes, which in my case are the planning cubes and the actuals cube. I don't want the request to go to the actuals cube when I am running only the plan load; I think it is causing a performance issue.
    For this reason I am thinking of using 0INFOPROV, as we use it in BEx queries to filter the InfoProvider, so that the data load performance will improve.
    I was able to bring 0INFOPROV into the data basis by adding 0INFOPROV to the characteristics folder used by the data basis.
    I can see this InfoObject on the Data Stream Fields tab. I check-marked it for use in the selection and regenerated the data basis.
    I was expecting this field to now be available for selection in the data collection method, but it is not.
    So if it's confirmed that there is no way to use 0INFOPROV as a selection, then I will suggest to my client a redesign of the data basis itself.
    Thanks,
    Naveen Rao Kattela
