Extract DataSource Question

Hi All!
I would LOVE to be able to have an export DataSource from a master data InfoObject (0MAT_PLANT) feed MULTIPLE InfoSources. Is that possible? Is there any top-secret, back-door way to accomplish this, or am I restricted to the hub-and-spoke method of creating multiple flat files from it to hook up to the InfoSources?
Thanks!!!
Mark Ulrich

hi,
you can have a master data InfoObject as a DataSource:
RSA1 -> InfoProvider -> right-click the relevant InfoArea and choose 'Insert Characteristic as InfoProvider', then type in 0MAT_PLANT. Right-click that 0MAT_PLANT and choose 'Generate Export DataSource'. Then right-click the source system (the BW "myself" system) and choose 'Replicate DataSources'; the generated DataSources will be named something like 8MAT_PLANT_T and 8MAT_PLANT_M. Assign the DataSource to an InfoSource and let that InfoSource update your data targets.
hope this helps.

Similar Messages

  • Problem when extracting DataSource 0PAYSCALELV_ATTR from R/3 ECC6 to BW 3.5

    We have recently begun upgrading from 4.7 to ECC 6.0. We are still using BW 3.5. After upgrading our sandbox to ECC 6.0, we found a problem when extracting pay scale level attributes using DataSource 0PAYSCALELV_ATTR. For some reason the table values are being multiplied by 26 by the extractor, which was not happening in 4.7. As a result, these employees' salaries are calculated incorrectly in our BW system, because the hourly rate coming from R/3 is already wrong. Please help!
    Kim Huskey
    Tarrant County IT
    CCC - SAP Support

    Hi Surendhar,
    Did you try checking the connection between your source system and the target system?
    You can do it by executing T-code RSA1 -> select Source Systems -> right-click your source system -> Check.
    Also, please check ST22 for any short dumps and SM21 for the system logs.
    Hope it helps!
    Amit
    Message was edited by: Amit

  • KODO DataSource questions

    Our setup:
    Tomcat 5.5, KODO 3.4.1, single transaction per request on the application (application is sessionless).
    Observations:
    1. When we specify javax.jdo.option.ConnectionURL, javax.jdo.option.ConnectionUserName, javax.jdo.option.ConnectionPassword in the properties file, it uses the KODO DataSource specifically com.solarmetric.jdbc.PoolingDataSource.
    2. If we want to use JNDI, we specify javax.jdo.option.ConnectionFactoryName=java:comp/env/jdbc/MyDB in the properties file and then in our tomcat server.xml context we add something like the following:
       <Resource
       name="jdbc/MyDB"
       auth="Container"
       type="javax.sql.DataSource"
       driverClassName="oracle.jdbc.driver.OracleDriver"
       url="jdbc:oracle:thin:@_host_:_port_:_SID_"
       username="_username_"
       password="_password_"
       maxActive="10"
       maxIdle="5"
       maxWait="-1"/>
    3. When specifying with JNDI in tomcat, it appears to use the tomcat DataSource (org.apache.tomcat.dbcp.dbcp.BasicDataSource).
    4. Using the tomcat BasicDataSource means that we get no prepared statement caching - this is a documented issue with using third party DataSources.
    5. Using the tomcat BasicDataSource also seems to do a database rollback after every persistence manager 'close', which in our setup equates to each request.
    Questions:
    1. How do we force KODO to use its own DataSource when using JNDI in tomcat?
    2. How would we force KODO to use a different DataSource in the properties file (like the tomcat BasicDataSource)?
    3. Is there any way to stop the database rollback after each persistenceManager close when using the tomcat BasicDataSource?
    4. Is it recommended to use the KODO DataSource? Are there any reasons to not use it?
    5. What are the main differences between the KODO DataSource and the tomcat BasicDataSource?
    Many thanks in advance.
    Nick

    Can anyone shed some light on how we get statement caching working whilst using JNDI to specify the db connection details? Is this possible???
    Any info/help would be very much appreciated as statement caching is something we would now like to enable.
    Many thanks,
    Nick

  • Datasource question

    Hi. I have a number of jsp pages which contain code as follows:
    <sql:setDataSource dataSource="jdbc/dbtest"/>
    <sql:query var="items" sql="SELECT * FROM homepagesubscriptions"></sql:query>
    Now my question is: will the performance of the app be affected by referring to the datasource multiple times across these various JSPs? I am just trying to get my head around what a datasource is. I have configured the datasource in my Tomcat context file. Am I right in thinking that when the web server starts up, the datasource is initialised once, and that every time I reference the datasource from my app it refers to that same datasource - in which case performance is not negatively affected?
    thanks

    i am using connection pooling. Here is what I have in my tomcat context file:
    <Context reloadable="true" docBase="C:\struts2\website" >
         <Resource name="jdbc/dbtest"
              auth="Container"
              type="javax.sql.DataSource"
              factory="org.apache.commons.dbcp.BasicDataSourceFactory"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/website"
              username="root"
              password="xxxx"
              maxActive="20"
              maxIdle="10"
              maxWait="10000"
              removeAbandoned="true" />
         <Realm className="org.apache.catalina.realm.DataSourceRealm" debug="99"
              dataSourceName="jdbc/dbtest"
              digest="MD5"
                    localDataSource="true"
              userTable="users" userNameCol="user_name" userCredCol="user_pass"
                    userRoleTable="user_roles" roleNameCol="role_name"/>
    </Context>
    thanks, darted
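To picture why repeated references to the datasource are cheap: the container builds the pooled DataSource once at startup, each JNDI lookup hands back that same object, and getConnection() borrows an already-open connection from the pool instead of opening a new database link. Here is a minimal, hypothetical sketch of that borrowing behavior (TinyPool and its connection strings are illustrative stand-ins, not Tomcat's actual DBCP classes):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of a connection pool: connections are created lazily up
// to maxActive, and released connections are reused rather than re-created.
class TinyPool {
    private final Deque<String> idle = new ArrayDeque<>();
    private int created = 0;
    private final int maxActive;

    TinyPool(int maxActive) { this.maxActive = maxActive; }

    synchronized String borrow() {
        if (!idle.isEmpty()) return idle.pop();          // reuse an idle connection
        if (created < maxActive) { created++; return "conn-" + created; }
        throw new IllegalStateException("pool exhausted");
    }

    synchronized void release(String conn) { idle.push(conn); }

    synchronized int physicalConnections() { return created; }
}
```

However many JSPs reference the pool, the number of physical connections is bounded by maxActive, and a borrow-release-borrow cycle touches the same connection.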

  • Generic DataSource Question

    Hi Friends,
    We have a generic extractor based on a function module.
    It submits the report program and returns some values.
    This function module has a parameter S_S_IF-Maxsize = I-Maxsize.
    My counter is based on this:
    Counter GE S_S_IF-Maxsize.
    Now my question is: what value of S_S_IF-Maxsize is determined by the system, and how?
    When I check in RSA3, it is based on the selection screen parameters of RSA3 (by default: 100).
    When I test the function module in SE37, it takes 000000 by default; I can change it there to test and debug. But I want to know what value S_S_IF-Maxsize takes when the extractor is called by BW.
    If somebody knows the procedure on how to debug extractor when called by BW, please let me know.
    Please suggest.
    Thanks,
    John.

    Hi,
    A simple way to check how BW calls the extractor and what values are passed is to use RSA3 itself in the source system.
    Execute an infopackage in BW and note down the request number or you can use the request number of an existing one.
    Go to RSA3 in the source system, enter your datasource details there.  There will be a button next to the execute button at the top called "Import from BW".  Click that and enter the request number and the BW system logical name and hit enter.
    The parameters from the request will be imported.  You can view them and then debug using this data.
    Cheers
    Neeraj
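For intuition on what the MAXSIZE check does: the extractor returns rows in packets, and the "Counter GE S_S_IF-Maxsize" condition closes the current packet once it holds MAXSIZE rows. A rough, non-SAP sketch of that packetizing loop (class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: split extracted rows into packets of at most maxsize rows,
// the way the extractor's counter check bounds each data packet sent to BW.
class Packetizer {
    static List<List<String>> split(List<String> rows, int maxsize) {
        List<List<String>> packets = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (String row : rows) {
            current.add(row);
            if (current.size() >= maxsize) {  // counter reached MAXSIZE: close the packet
                packets.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) packets.add(current);  // final, possibly partial packet
        return packets;
    }
}
```

With 5 rows and maxsize 2 you would get packets of 2, 2 and 1 rows, which is why the value shown in RSA3 (default 100) changes how many packets you see, not how much data is extracted overall.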

  • Datasource question on ecc selection fields.

    I accidentally clicked on Maintenance for the 2LIS_02_ITM datasource in LBWE and it went into inactive mode.
    1) How do I make it active without altering the fields in the selected DS? Do I have to replicate it again after I reactivate it?
    2) If I add selection criteria in the field selection so that those fields are available in the InfoPackage selection, do I have to replicate the DS in BI? (in RSA6)
    3) I need to add one more field from right to left, and now it is available in the extract structure. To get that field to BW, should I just replicate the DS? And what about creating the InfoObject for that field and replicating, or...?
    4) There are a lot of other flows in the purchasing flow, so will all of those get deactivated when the structure of the DS changes? Do I need to reactivate all the transformations, DTPs etc.?

    Hi Daniel,
    1) How do I make it active without altering the fields in the selected DS? Do I have to replicate it again after I reactivate it?
    Just activate the data source without generating it (provided you have not enhanced any of the fields).
    2) If I add selection criteria in the field selection so that those fields are available in the InfoPackage selection, do I have to replicate the DS in BI? (in RSA6)
    If you have enabled the "Selection" option for a particular field, first activate the data source in RSA6 and then replicate it on the BW side, so that the change takes effect there and the field becomes available for selection when loading.
    3) I need to add one more field from right to left, and now it is available in the extract structure. To get that field to BW, should I just replicate the DS? And what about creating the InfoObject for that field and replicating, or...?
    If the field is on the right-hand side, move it to the left-hand side so it becomes part of the extract structure, then generate the data source and activate it in LBWE.
    Replication has to be done on the BW side, in order to copy the extract structure of the source.
    Yes, you need to assign the field to an InfoObject (search the BI Content first, as most objects are available there; if not, create your own InfoObject).
    4) There are a lot of other flows in the purchasing flow, so will all of those get deactivated when the structure of the DS changes? Do I need to reactivate all the transformations, DTPs etc.?
    If you change a particular data source's extract structure, the changes affect only that single data source.
    If the option HIDE is selected:
    Hide Field: Activating this indicator effectively removes the field from the data-transfer process. The field will no longer be available in the transfer structure and therefore will not be pulled into BW. This indicator should not be active unless you know that the field contains data not relevant to the DataSource.

  • Extracter/datasource that used last time to load the data

    All,
    Can someone help me find the table that stores the log/details of a datasource or extract structure (in BW), i.e. when it was last used to load data? Help would be highly appreciated.
    Thanks,
    Hari

    Hi Hari, Welcome to SDN.
    On the BW side, select the datasource and open the monitor. Widen the monitor selection (e.g. the date range) to see the details of earlier load requests.
    Hope it Helps
    Srini

  • 7.9.6.1 Financial Analytics - JDE - Extract Dates question

    Hi all,
    Implementation of 7.9.6.1 Financial Analytics + Procurement and Spend Analytics
    OLTP: JDE E9
    OLAP DB: Oracle 11g
    We were trying to adjust the # of prune days for an incremental load, when we discovered the change was having no effect.
    Our situation - JDE
    Looking at the parameters in DAC, we found that the incremental condition in the SDEs is:
    date >= $$LAST_EXTRACT_JDEDATE
    In DAC, this parameter expands to:
    TO_NUMBER(TO_CHAR(TO_DATE('@DAC_$$TGT_REFRESH_DATE'), 'DDD')) + (TO_NUMBER(TO_CHAR(TO_DATE('@DAC_$$TGT_REFRESH_DATE'), 'YYYY')) - 1900) * 1000
    If one keeps digging:
    $$TGT_REFRESH_DATE = @DAC_TARGET_REFRESH_TIMESTAMP in custom format, MM/DD/YYYY
    Compared to EBS
    Now, if I look at the $$LAST_EXTRACT_DATE parameter (used in the EBS SDEs), it expands to:
    @DAC_SOURCE_PRUNE_REFRESH_TIMESTAMP in custom format, MM/DD/YYYY
    Conclusion and question
    Obviously the Julian date conversion is required in $$LAST_EXTRACT_JDEDATE, but apparently the prune days are taken into account in $$LAST_EXTRACT_DATE and not in $$LAST_EXTRACT_JDEDATE.
    An obvious fix is to use @DAC_SOURCE_PRUNE_REFRESH_TIMESTAMP in $$LAST_EXTRACT_JDEDATE, but I don't know if that would have any side effects. I'll test it.
    I'll raise a SR with Oracle, but wanted to check if you guys had seen this before.
    Thanks, regards.-
    Alex.-
    Edited by: Alejandro Rosales on Feb 22, 2011 5:57 PM

    Hi Alejandro,
    Maybe this has been updated/corrected in 7.9.6.2. Later today I will be in the office and can check in a VM image.
    I'll update the thread as soon as I have checked this out.
    Regards,
    Marco Siliakus
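For reference, the number that the expanded $$LAST_EXTRACT_JDEDATE expression produces is the classic JDE-style Julian date: (year - 1900) * 1000 + day-of-year, exactly what the TO_NUMBER/TO_CHAR expression above computes in SQL. A small sketch of the same arithmetic (the class name is illustrative):

```java
import java.time.LocalDate;

// Sketch of the JDE-style Julian date built by the DAC parameter:
// (year - 1900) * 1000 + day-of-year, mirroring the Oracle expression
// TO_NUMBER(TO_CHAR(date,'DDD')) + (TO_NUMBER(TO_CHAR(date,'YYYY')) - 1900) * 1000.
class JdeJulian {
    static int fromDate(LocalDate d) {
        return (d.getYear() - 1900) * 1000 + d.getDayOfYear();
    }
}
```

For example, Feb 22, 2011 is day 53 of the year, so it becomes (2011 - 1900) * 1000 + 53 = 111053, which is why a plain timestamp (with or without prune days) cannot be compared directly against a JDE date column.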

  • Extract Content Question?

    Sometimes I can extract content correctly, but sometimes I can't.
    When PDEObjectGetType returns kPDEContainer, I extract garbled content, and I don't know why.
    My code is:
    #include "CDocument.h"
    #include "PIHeaders.h"
    #ifdef _DEBUG
    #undef THIS_FILE
    static char THIS_FILE[] = __FILE__;
    #define new DEBUG_NEW
    #endif
    // char* gTextBuf = NULL;
    void getText(PDEText pdeText, ASInt32 i, ASInt32 numElems)
    {
        ASUns8 *textBuf = NULL;
        ASInt32 textLen, textLenNext;
        ASInt32 numRun = PDETextGetNumRuns(pdeText);
        textLen = PDETextGetText(pdeText, kPDETextRun, i, textBuf);
        // textLenNext = PDETextGetText(pdeText, kPDETextRun, (i + 1), textBuf);
        if (textLen >= 0)
        {
            for (ASInt32 index = 0; index < numRun; index++)
            {
                textBuf = (ASUns8 *)ASmalloc(textLen + 1);
                memset(textBuf, 0, textLen + 1);
                textLen = PDETextGetText(pdeText, kPDETextRun, index, textBuf);
                AVAlertNote((char*)textBuf);
                if (textBuf)
                    ASfree(textBuf);
            }
        }
    }
    void ContentRecursive(PDEContent pdeContent)
    {
        ASInt32 numElems = PDEContentGetNumElems(pdeContent);
        PDEElement pdeElement;
        PDEText textObject;
        for (ASInt32 i = 0; i < numElems; i++)
        {
            pdeElement = PDEContentGetElem(pdeContent, i);
            ASInt32 objType = PDEObjectGetType(reinterpret_cast<PDEObject>(pdeElement));
            if (objType == kPDEText)
            {
                textObject = (PDEText)pdeElement;
                getText(textObject, i, numElems);
            }
            if (objType == kPDEContainer)
                ContentRecursive(PDEContainerGetContent(reinterpret_cast<PDEContainer>(pdeElement)));
            if (objType == kPDEGroup)
                ContentRecursive(PDEGroupGetContent(reinterpret_cast<PDEGroup>(pdeElement)));
        }
    }
    void ExtractContentInit()
    {
        PDPage pdPage = NULL;
        PDEContent pdeContent = NULL;
        PDEElement pdeElement = NULL;
        PDEText textObject = NULL;
        CDocument cDocument;
        pdPage = (PDPage)cDocument;
        if (pdPage == NULL)
            return;
        DURING
            pdeContent = PDPageAcquirePDEContent(pdPage, gExtensionID);
            ContentRecursive(pdeContent);
        HANDLER
            if (pdeContent != NULL)
            {
                PDERelease(reinterpret_cast<PDEObject>(pdeContent));
                pdeContent = NULL;
            }
            if (pdeElement != NULL)
            {
                PDERelease(reinterpret_cast<PDEObject>(pdeElement));
                pdeElement = NULL;
            }
            if (textObject != NULL)
            {
                PDERelease(reinterpret_cast<PDEObject>(textObject));
                textObject = NULL;
            }
        END_HANDLER
    }
    Please point out my mistakes!

    PDETextGetText gives you back the raw IDs – it doesn’t do anything to convert that information to an encoding that may be useful to you.  Please read the documentation for more info.

  • Missing dimension members - - initial extract date question

    We are configuring and implementing the out-of-the-box analytics. In the DAC, we set the initial extract date to be Jan 1 2006. It appears that when we run our initial full load ETL, it only loads suppliers that have been modified after Jan 1 2006. Suppliers prior to that are missing.
    Considering that there are facts that can be associated to suppliers before that initial Jan 1 2006 date, we are noticing a problem. Is my understanding correct and how do we resolve this?
    Thanks in advance.

    Hi Ed,
    Yes, I have set this $$INITIAL_EXTRACT_DATE parameter as you mentioned, but this is working fine in case 1 and NOT working in case 2.
    I am using the same versions of DAC, Informatica and BI Apps in both the cases. Only the source system is different.
    In case 1, its not an initial full load.
    in case 2, its an initial load.
    Thanks,
    Harish.

  • Incremental extract design question

    Hello Gurus,
    We have built a custom LKM that extracts data from a Java source (we have to use Java APIs to pull from an LDAP source). It all works for the initial full-load data extract, and the stage/target for this is a set of Oracle database tables.
    Now, since the Java APIs cannot identify incremental data (updates, deletes, inserts since the last pull) at the source, what would be the best way to design my ODI project?
    Should I have a separate staging area from the target? Should I use an IKM that compares incremental changes (between stage and target) since the last execution? Or is there any other way to do this optimally (e.g. customizing my LKM itself, outside of the Java APIs, to include steps that find incremental data - is that possible)?
    Inputs are greatly appreciated
    Happy holidays

    The LKM reads the source data and loads it into the C$ table in the staging area.
    The IKM then reads the C$ table.
    If you choose an incremental-update IKM, all data from the C$ table is loaded into the I$ table,
    but only new, updated, or deleted records are applied to the target.
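The comparison an incremental-update IKM performs can be pictured as a key-based diff between the staged (C$) rows and the target rows, so that only the differences are applied. This is a simplified, hypothetical sketch of the idea; a real IKM does this with set-based SQL against the I$ table, not in Java:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative key-based diff: classify staged rows against the target as
// INSERT (new key) or UPDATE (key present, value changed); target keys
// missing from staging are DELETEs. Unchanged rows produce no action.
class IncrementalDiff {
    static Map<String, String> diff(Map<String, String> staging, Map<String, String> target) {
        Map<String, String> actions = new HashMap<>();
        for (Map.Entry<String, String> e : staging.entrySet()) {
            String old = target.get(e.getKey());
            if (old == null) actions.put(e.getKey(), "INSERT");
            else if (!old.equals(e.getValue())) actions.put(e.getKey(), "UPDATE");
        }
        for (String key : target.keySet()) {
            if (!staging.containsKey(key)) actions.put(key, "DELETE");
        }
        return actions;
    }
}
```

Note that detecting DELETEs this way requires a full extract into staging on every run, which is exactly the trade-off you face when the source API cannot report changes itself.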

  • Errors occurred during the extraction datasource of CRM System in Check extractor

    Dear All,
    Please see the error below, which I get while extracting data from the DataSource in CRM:
    RM_QUOTA_ORDER_I: Table SMOXHEAD contains no BDoc name.
    Errors occurred during the extraction in the extractor check.
    Please let me know solution to resolve the issue.
    Thanks
    Regards,
    Sai

    Hi,
    Where are you getting this error?
    During the InfoPackage load, or when checking in RSA3 on the CRM system?
    If it occurs when running your InfoPackage, please also run the DataSource in RSA3.
    It seems your source table may not contain any BDocs.
    Where are you running these loads - prod or dev?
    Thanks

  • Datasource questions

    All,
    I'm confused by the relationship between XA datasources and enlisted
    datasources. If I understand the Kodo manual correctly, all XA datasources
    MUST be enlisted. However, I infer that it is possible to enlist a non-XA
    datasource. That is, it is possible to set up a datasource that does NOT
    use an XA driver, and then set datasourcemode to 'enlisted'. Is this
    correct?
    In the same vein, I'm puzzled about when a second connection is needed.
    The Kodo manual clearly states that:
    "Kodo needs to have access to a data source that will not be enlisted
    in an XA transaction for things like obtaining database sequence numbers
    for datastore identity, which should not be part of Kodo's transaction."
    The manual then describes the need to set up a second, non-XA,
    datasource if you have an enlisted XA datasource. But, if my enlisted
    datasource is not an XA datasource, would I need the second datasource?
    Finally, what is the effect of setting transactionmode to 'managed', but
    leaving datasourcemode as 'local', rather than 'enlisted'?
    Thanks,
    Mike

    Unfortunately, the answer varies from appserver to appserver and
    configuration to configuration.
    Basically, when we refer to XA datasources, that covers all datasources
    that are transactional based on the global J2EE transaction. Most
    application servers treat these as an XA resource regardless of driver and DB.
    While this could be clarified in the documentation, you should treat any
    datasource enlisted in a global transaction as an indication to
    use 'enlisted' as the DataSourceMode, and to configure a second
    ConnectionFactory DataSource for things that should not be held by the
    global transaction, such as sequence generation.
    Probably worth noting is WebSphere, in which all datasources are
    transactional by default.
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • Export Datasource Question

    Task: to load the actuals into a planning cube.
    After generating an export datasource on a cube, the InfoSource 8CUBE is created.
    When I create an InfoPackage, how do I find the planning cube in the Data Targets tab?
    What am I missing here?

    Hi,
    Assuming you have created update rules from 8CUBE to the planning cube: right-click the planning cube > Switch Transactional Cube > choose 'Can be loaded'.
    This can also be done using a function module (so that you can switch between the two states in a process chain and automate the process).
    Hope this helps...

  • Extracted Audio Question

    PLEASE PLEASE HELP ME, I'M SO FRUSTRATED!!!!
    I extracted ALL of the audio in my movie and sent it to the Trash. I then exported the silent movie and burned it to DVD using iDVD.
    NOW, it turns out I need the audio that was attached to many of the clips in the movie. I have many hours and many Hi-8 tapes of footage, and do not have time to sort through it all to import the original footage again.
    I opened the project, copied the audio from the Trash, and pasted it into my iMovie project. My problem is that the audio now plays fine, BUT the video clips will not play at all. The clips are still there, the timeline for video and audio is activated, but the clips do not move. There is just a still image of the frame as the audio plays. If I manually move the cursor to different spots on the clips, the still frame displayed changes. (So I believe the clips are still intact.) My computer is now running VERY slowly, and I cannot simply drag the cursor over the movie to play it manually.
    Any ideas on how to get the movie to play properly?

    Hi Mochastix - my comment here may not help solve your problem, but for any future editing of the kind you did, consider this:
    When you 'extract' audio, the original audio in the video clips is not removed - it is simply muted. Thus it can be 'un-muted' again. If you want to create a soundless DVD, either select all of the clips and lower their audio volume to zero, OR more simply, uncheck the little box at the right-hand end of the video track. So, there was no need to trash the 'extracted' audio, nor was there a need to drag it back from the trash.
    As Sue suggests, it does seem that you are very low on storage space. If all else fails, trash the audio track again, and raise the audio level within the video clips. To do that, select all of the clips, and use the slider. See if that helps.
