LM03 skip source data

Hi all,
  I have defined a verification profile to scan the destination bin and assigned this profile to goods movement 101 for receiving. I have also checked the "skip source data" field.
But when I use LM03, it still displays the source screen. I am not sure if I am missing any setting.
Any suggestions would be appreciated.
Thanks

Hi,
Can you provide more information on where you assigned the profile and in which configuration path you changed the "skip source data" check?
Per my understanding we cannot skip the initial screen, and you have to enter / scan the TO number in LM03.
Thx
MJ

Similar Messages

  • Creation of DME medium FZ205 There is no source data found

    We are executing payment runs using F110 and then creating a data medium - a file to send to the bank.
    In the variant for the program I am putting C:\. However, when several users execute payment runs at the same time, the data medium is not created and I get the error message that the source data cannot be found.
    Can anyone help me with this issue - should I leave the file name blank?
    Thanks
    Liz

    Hello,
    In order to avoid FZ205 please review your selection parameters and F1 help for the print program when creating the file:
    1. If you are taking the output to the file system:
    If required, the file can be written to the file system. The created file can be copied to a PC using data medium exchange management. You should be looking for downloaded files here, since the data carrier is not managed within the SAP system, but is already stored in the file system by the payment medium program. The file name should be defined by the user. You should make sure that existing files with the same name have already been processed, because they will be overwritten.
    Note: If a file cannot be found using data medium exchange management, the reason could be that the directory that was written to at the start of the payment medium program (in background processing, for example) cannot be read online.
    You should then select a directory which can be recorded and read by several different computers. Due to the problems described above and the resulting lack of data security, we advise against writing to the file system. This method is only beneficial if the data carrier file is taken from the file system by an external program, to be transferred to the bank.
    2. If you are taking the output into TemSe:
    If required, the file created can be stored within the SAP system (stored in TemSe and not in the file system), thus protecting it from unauthorized external access. You can download the file into the user's file system via the DME manager. The name of the file to be created during the download can be determined when running the payment medium program: the contents of the
    file name parameter are stored in the management data and defaulted when running the download.
    Please check the corresponding files in the DME administration and check whether the output medium 'File-System' has been chosen, that is, output medium '0'. In order to use TemSe you have to use output medium '1'. Furthermore, see whether PC file paths, like c:\filename.DAT, were used instead of application server file names; FDTA has difficulties finding these files, especially when 2 application servers are used.
    To avoid problems with the files, SAP recommends using TemSe with output medium '1', or the file system with output medium '0'. TemSe is always the better option.
    I hope this helps.
    Best regards,
    Suresh Jayanthi.

  • [APEX 3] Requested source data of the report has been modified

    Hello APEX-Friends,
    I have a common problem but the situation is a bit different here. Many of you might know the "invalid set of rows requested, the source data of the report has been modified" problem. Often it occurs on submit. That means you have a report, you select rows, you do things, you submit the page and everything blows up.
    This is because you enter some values into fields the report depends on and so you modify your report parameters and the source data changes.
    But:
    In my case I have a dynamically created report that blows up before any submits occur or values change.
    My query is a union of two selects. Both query different views. Those views use a date field as parameter and some compare functions.
    I read the field with a V-function I wrapped around the APEX V function - declared as deterministic. My date compare function is also declared deterministic (I doubt this makes any difference, as it might only be important for the optimizer, but as long as I don't know exactly what APEX evaluates, I play it safe).
    I ensured that the date field is set by default to the current date (and that works, because my interactive report initially displays correct data for the current date).
    So everything is deterministic and the query must return the same results on subsequent calls, but APEX still throws this "source data has changed" error, and I am 99.99% sure that this cannot be true.
    And now the awesome thing about this:
    If I change the value of the date field, a JavaScript performs a submit. The page is reloaded (without resetting pagination!) and everything works fine. I can leave the page, re-enter, do things - everything works well.
    But if I log into the application, move directly to the corrupted report and try to use the pagination without editing fields or submitting the page, the error occurs.
    Do you have any idea what's happening there? I could try to work around this by submitting the page the first time it's entered, to trigger this "mystery submit" that gets everything working. But I would like to understand this issue and have a clean solution.
    Thanks in advance,
    Mike aka UniversE

    Okay, I found a solution, but I do not understand it - it might be a design flaw in APEX.
    I mentioned the date field that is used in the query. I also mentioned that it is set with the current date by default. I did not mention how.
    There are some possibilities in APEX to do so.
    1. Default-Setting in the element properties
    2. Static assignment if no value is in session cache
    3. Computation before header
    I did the first and second.
    BUT:
    An interactive report seems to work as follows. A query is executed to get all rows of the report. Then a second query is executed to get the rows that shall be displayed. And the order is screwed up, I think.
    1. The first report query to get all rows
    2. The elements are loaded and set to default values
    3. The second report query to get the display rows
    And that's the reason why nothing worked. Since I added a computation before header, the date field is set before the report queries are executed, and everything works fine now.
    But I think it's a design flaw. Either both queries should be executed before the regions are rendered or both afterwards, but not split, as field values might change when elements are loaded.
    Greetings,
    UniversE

  • User View is not reflecting the source data - Transparent Partition

    We have transparent partition cubes. We recently added new fiscal year details to the cube (the user view as well as the source data cube). We loaded the data to the source data cube. When we try to retrieve data from the user view, it shows 0's, but the data is available in the source data cube. Could anyone please advise what the issue might be?
    Thanks!

    Hi-
    If you haven't added the new member in the partition area, then Madhvaneni's advice is what you should follow, because if you haven't added the member, the target can't read the source.
    If you have already added the new member in the partition area and the data still won't show up, it's sometimes worth re-saving the partition and seeing what the outcome is.
    -Will

  • How to deal with such Unicode source data in BI 7.0?

    I encountered an error when activating DSO data. It turned out that the source data is Unicode in the HTML representation style. For example, the source character string is:
    ABCDEFG& #65288;XYZ  (I added a space between & and # so that it won't be interpreted as Unicode by the web browser on SDN)
    After some analysis, I see it's actually the Unicode string
    ABCDEFG（XYZ
    Please notice the wide left parenthesis. It's the actual character from the HTML &#xxx style above. To compare, here is the Unicode full-width parenthesis '（' and here is the ASCII one '(' . You see they are different.
    My question is: as I have trouble loading the &#... string, I think I should translate the string to the actual Unicode character (like '（' in this case). But how can I achieve this?
    Thanks!
    Message was edited by:
            Tom Jerry

    I found this is called a "numeric character reference", or NCR, in HTML terms. So the question is how to convert a string in NCR form back to Unicode. Thanks.
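    As an aside (not from the original thread): outside BW the decoding itself is trivial. A minimal Python sketch, assuming the affected strings could be pre-processed as plain text before loading; inside BW the equivalent logic would have to live in a transfer/transformation routine instead:
    # Decode HTML numeric character references (NCRs) such as &#65288;
    # into the actual Unicode characters.
    import html

    def decode_ncr(text):
        # html.unescape handles decimal (&#65288;) and hex (&#xFF08;) NCRs,
        # as well as named entities like &amp;
        return html.unescape(text)

    print(decode_ncr("ABCDEFG&#65288;XYZ"))  # -> ABCDEFG（XYZ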

  • There is no source data for this data record, Message FZ205

    Hi Experts,
    I am facing a problem with the DME file download. This problem started all of a sudden in our production system last month and never happened before. Our system landscape has not changed, except that, as per our Basis consultant, two or three new application servers were added to the production client. We do not have this problem in our testing clients.
    Please note that we have been using output medium '1' from day one, and thus the system has been generating the DME in the 'File System', which we download to the desktop and upload to the bank online. After running the payment run, when we try to download the DME file, the system gives the error "There is no source data for this data record, Message FZ205".
    I have tried to fix this issue in many ways but have not been able to. Can you please let me know the reason for this error and a solution to fix it?
    With best regards,
    BABA

    Hi Shailesh,
    Please share how you solved this problem.
    Many Thanks,
    Lakshmi

  • Error during data load due to special characters in source data

    Hi Experts,
    We are trying to load Billing data into the BW using the billing item datasource. It seems that there are some special characters in the source data. When the record with these characters is encountered, the request turns red and the package is not loaded even into the PSA. The error we get in the monitor is something like
    'RECORD 5028: Contents from field ****  cannot be converted into type CURR',
    where the field **** is a key figure of type currency. We managed to identify the said record in RSA3 on the source system and found that one of the fields contains some invalid (special) characters that show up as squares in RSA3. The data in the rest of the fields, including the fields mentioned in the error, looks correct.
    Our source system is a non-Unicode system whereas the BW system is Unicode enabled. I figure that the data in the rest of the fields is getting misaligned due to the presence of the invalid characters in the above field. This was confirmed when we unassigned the field with the special characters from the transfer rules and removed the source field from the transfer structure. After doing this, the data was loaded successfully and the request turned green.
    Can anyone suggest a way to either filter out such invalid characters from the source data or make some setting in the BW system such that the special characters are no longer invalid there? We cannot write code in the transfer rules because the data package does not even come into the PSA. Is there any other method to solve this problem?
    Regards,
    Ted
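    (Side note, not part of the original post: purely to illustrate the kind of pre-filtering asked about above, here is a minimal Python sketch that strips control characters from a tab-delimited extract before it reaches BW. The file names are made up, and in a real landscape the cleanup would more likely sit on the source-system side or in the extractor, since the post notes that the package never even reaches the PSA.)
    import unicodedata

    def clean_field(value):
        # Drop control characters (Unicode category 'Cc'), i.e. the kind of
        # bytes that show up as squares in RSA3.
        return "".join(ch for ch in value if unicodedata.category(ch) != "Cc")

    def clean_line(line):
        return "\t".join(clean_field(f) for f in line.rstrip("\n").split("\t"))

    # Hypothetical file names for the raw and cleaned extracts.
    with open("billing_extract.txt", encoding="latin-1") as src, \
         open("billing_extract_clean.txt", "w", encoding="latin-1") as dst:
        for line in src:
            dst.write(clean_line(line) + "\n")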

    Hi Thad,
    I was wondering - whether the system is Unicode or non-Unicode should not matter for the amount and currency fields, as currencies are defined by SAP and the currency code is plain English, at least the 3-character code part.
    Could this be because of some inconsistency in the data?
    I would like to know which currency had the special characters in that particular record.
    Hope that helps.
    Regards
    Mr Kapadia

  • Using sqlldr when source data column is 4000 chars

    I'm trying to load some data using sqlldr.
    The table looks like this:
    col1 number(10) primary key
    col2 varchar2(100)
    col3 varchar2(4000)
    col4 varchar2(10)
    col5 varchar2(1)
    ... and some more columns ...
    For current purposes, I only need to load columns col1 through col3. The other columns will be NULL.
    The source text data looks like this (tab-delimited) ...
    col1-text<<<TAB>>>col2-text<<<TAB>>>col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    END-OF-RECORD
    There's nothing special about the source data for col1 and col2.
    But the data for col3 is (usually) much longer than 4000 chars, so I just need to truncate it to fit varchar2(4000), right?
    The control file looks like this ...
    LOAD DATA
    INFILE 'load.dat' "str 'END-OF-RECORD'"
    TRUNCATE
    INTO TABLE my_table
    FIELDS TERMINATED BY "\t"
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (col1 "trim(:col1)",
    col2 "trim(:col2)",
    col3 char(10000) "substr(:col3,1,4000)")
    I made the column 3 specification char(10000) to allow sqlldr to read text longer than 4000 chars.
    And the subsequent directive is meant to truncate it to 4000 chars (to fit in the table column).
    But I get this error ...
    Record 1: Rejected - Error on table COL3.
    ORA-01461: can bind a LONG value only for insert into a LONG column
    The only solution I found was ugly.
    I changed the control file to this ...
    col3 char(4000) "substr(:col3,1,4000)"
    And then I hand-edited (truncated) the source data for column 3 to be shorter than 4000 chars.
    Painful and tedious!
    Is there a way around this difficulty?
    Note: I cannot use a CLOB for col3. There's no option to change the app, so col3 must remain varchar2(4000).

    You can load the data into a staging table with a clob column, then insert into your target table using substr, as demonstrated below. I have truncated the data display to save space.
    -- load.dat:
    1     col2-text     col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    END-OF-RECORD
    -- test.ctl:
    LOAD DATA
    INFILE 'load.dat' "str 'END-OF-RECORD'"
    TRUNCATE
    INTO TABLE staging
    FIELDS TERMINATED BY X'09'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (col1 "trim(:col1)",
    col2 "trim(:col2)",
    col3 char(10000))
    SCOTT@orcl_11gR2> create table staging
      2    (col1 varchar2(10),
      3       col2 varchar2(100),
      4       col3 clob)
      5  /
    Table created.
    SCOTT@orcl_11gR2> host sqlldr scott/tiger control=test.ctl log=test.log
    SCOTT@orcl_11gR2> select * from staging
      2  /
    COL1
    COL2
    COL3
    1
    col2-text
    col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    1 row selected.
    SCOTT@orcl_11gR2> create table my_table
      2    (col1 varchar2(10) primary key,
      3       col2 varchar2(100),
      4       col3 varchar2(4000),
      5       col4 varchar2(10),
      6       col5 varchar2(1))
      7  /
    Table created.
    SCOTT@orcl_11gR2> insert into my_table (col1, col2, col3)
      2  select col1, col2, substr (col3, 1, 4000) from staging
      3  /
    1 row created.
    SCOTT@orcl_11gR2> select * from my_table
      2  /
    COL1
    COL2
    COL3
    COL4       C
    1
    col2-text
    col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    1 row selected.

  • Upgrade ST-A/PI and ST-PI plug-in in R3 source data

    Hi gurus:
    I am planning an upgrade of these tools in an R3 4.6C system (source data system):
    addon: ST-A/PI from 01H_R3_46C to 01J_R3_46C
    and
    addon: ST-PI from 2005_1_46C SP level 4 to 2005_1_46C SP level 7
    I have an R3 4.6C system and a BW 3.5 system (the BW system extracts data from R3 by RFC).
    Will this upgrade (of the plug-in tools in R3) generate any errors in the BW-R3 interaction?
    Does BW use these tools to extract data?
    The BW system has these tool levels:
    ST-A/PI 01J_BCO640
    ST-PI 2005_1_640 SP level 6
    Are there any considerations I must take into account when making this upgrade in R3?
    Best regards,

    Hi Raghu,
    you can run into the problems I mentioned if you don't implement that. Also, many support packages have dependencies on ST-PI and ST-A/PI, and seriously, I haven't heard of people implementing this on production but not on dev and QA. I have worked with many big clients but never heard of this.
    Even if you ignore what I have said, it is still recommended that you never apply anything directly on production. It is always better to first implement on dev and QA and see whether you face any issues while implementing, so always implement on dev and then QA first.
    It doesn't take too much time, so that should not be an issue after all.
    Always follow SAP best practices.
    Rohit

  • How to handle duplicate Primary Key entries in the Source data

    This is my first experience with ODI.
    I receive source data from the customer that includes a one-letter designation, ACTION_CODE, in each record indicating the disposition of the record:
    ‘R’ represents Re-issue in which case I’m to modify the corresponding Target record based on the Primary Key.
    ‘N’ represents an Insert in which case I’m to insert a new record into the Target.
    ‘D’ represents a delete in which case I’m to delete the record with the corresponding Primary Key from the Target.
    The Source data comes in an XML file and the Target is an Oracle DB.
    I have chosen the IKM Oracle Incremental Update (MERGE) Knowledge Module.
    I filter ACTION_CODE to collect only records that are 'N' or 'R', and I exclude ACTION_CODE from the mapping, but since the same source set may contain an 'N' and an 'R' with the same primary key, I receive primary key errors.
    Should I alter the CKM so that it does not check for duplicates in the source?
    Is there a better way?
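    (Side note, not an ODI-specific answer: one way to reason about the 'N' plus 'R' clash is to collapse the source to a single row per primary key before the merge, since an insert followed by a re-issue amounts to one upsert. A rough Python sketch of that logic, with made-up field names; in ODI this would typically be an extra staging/deduplication step rather than a script.)
    # Collapse action-coded rows to one row per primary key.
    def collapse(rows):
        latest = {}
        for row in rows:
            if row["action_code"] in ("N", "R"):
                latest[row["pk"]] = row        # last 'N'/'R' wins -> one upsert row
            elif row["action_code"] == "D":
                latest.pop(row["pk"], None)    # a delete supersedes earlier rows
        return list(latest.values())

    rows = [
        {"pk": 1, "action_code": "N", "val": "a"},
        {"pk": 1, "action_code": "R", "val": "b"},   # same key: only this row survives
        {"pk": 2, "action_code": "N", "val": "x"},
    ]
    print(collapse(rows))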

    Ganesh,
    Identifying duplicates is a logical activity. More or less it needs manual intervention to judge whether both records mean the same thing. A few unique parameters like telephone, pincode, SSN, passport number etc. can be used as filters for searching the records. Currently there is no automatic method to identify the duplicates. In MDM 5.5 SP04, which is the next release, there will be an auto de-duplication facility based on thresholds and matching criteria that you set up.
    I hope I have answered your query clearly. If you have any further queries you can reply here.
    Regards
    Veera

  • ODI 11.1.1.6.4 - Interface failed at SnpsSqlUnload - Unload source data Step

    Hi Experts -
    We are getting an error during interface execution in ODI 11.1.1.6.4. We are unable to find the root cause.
    The interface is using a customized LKM (LKM SQL to Oracle (SQL*Loader)). This LKM basically connects to SQL Server, fetches the required rows from the table and creates an LDR file. After this, the data from the LDR file is populated into the staging table (Oracle DB).
    We are getting the error while connecting to SQL Server at the step 'Unload Source Data' (it uses the SnpsSqlUnload command).
    This step connects to SQL Server, and the LDR file is also getting generated.
    Any suggestions?
    Regards,
    Andy
    Error Message in ODI-
    ================================
    java.sql.SQLNonTransientConnectionException: [FMWGEN][SQLServer JDBC Driver]The DBMS returned an unspecified error.  The command code was 224.
    at weblogic.jdbc.sqlserverbase.ddb_.b(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddb_.a(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddb9.b(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddb9.a(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddr.c(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddq.a(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
    at weblogic.jdbc.sqlserver.tds.ddr.a(Unknown Source)
    at weblogic.jdbc.sqlserver.ddh.a(Unknown Source)
    at weblogic.jdbc.sqlserverbase.ddcq.k(Unknown Source)
    at weblogic.jdbc.sqlserverbase.dddm.next(Unknown Source)
    at com.sunopsis.dwg.tools.SqlUnload.unloadDataDe(SqlUnload.java:711)
    at com.sunopsis.dwg.tools.SqlUnload.actionExecute(SqlUnload.java:357)
    at com.sunopsis.dwg.function.SnpsFunctionBase.execute(SnpsFunctionBase.java:276)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execIntegratedFunction(SnpSessTaskSql.java:3437)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.executeOdiCommand(SnpSessTaskSql.java:1509)
    at oracle.odi.runtime.agent.execution.cmd.OdiCommandExecutor.execute(OdiCommandExecutor.java:44)
    at oracle.odi.runtime.agent.execution.cmd.OdiCommandExecutor.execute(OdiCommandExecutor.java:1)
    at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:558)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:464)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1889)
    at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor$2.doAction(StartScenRequestProcessor.java:580)
    at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
    at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor.doProcessStartScenTask(StartScenRequestProcessor.java:513)
    at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor$StartScenTask.doExecute(StartScenRequestProcessor.java:1073)
    at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$1.run(DefaultAgentTaskExecutor.java:49)
    at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:49)
    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor.executeAgentTask(DefaultAgentTaskExecutor.java:41)
    at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.doExecuteAgentTask(TaskExecutorAgentRequestProcessor.java:92)
    at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.process(TaskExecutorAgentRequestProcessor.java:83)
    at oracle.odi.runtime.agent.support.DefaultRuntimeAgent.execute(DefaultRuntimeAgent.java:68)
    at oracle.odi.runtime.agent.servlet.AgentServlet.processRequest(AgentServlet.java:512)
    at oracle.odi.runtime.agent.servlet.AgentServlet.doPost(AgentServlet.java:442)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:389)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
    at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:879)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:747)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:520)

    So in the Operator section, on step 6 'Load data into Planning', I get the below. Fields 4 and 5 aren't even in the select statement; only the mapped fields are showing up here. My view has two fields that, when you sort by them in Excel, do exactly what I need. We just can't figure out how to get those fields to/from the staging area. They can't be input into Planning as they would have nowhere to go. Maybe they can be ignored during the dimension build?
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    from java.lang import Class
    from java.lang import Boolean
    from java.sql import *
    from java.util import HashMap
    # Get the select statement on the staging area:
    sql= """select   C1_ACCOUNT    "Account",C2_PARENT    "Parent",C3_ALIAS__DEFAULT    "Alias: Default",C4_DATA_STORAGE    "Data Storage",C5_UDA    "UDA",C6_DATA_TYPE    "Data Type",C7_ACCOUNT_TYPE    "Account Type",C8_TIME_BALANCE    "Time Balance",C9_VARIANCE_REPORTING    "Variance Reporting",C10_AGGREGATION__PROJECTS_    "Aggregation (Projects)" from "C$_0Account"  where     (1=1)     """
    srcCx = odiRef.getJDBCConnection("SRC")
    stmt = srcCx.createStatement()
    srcFetchSize=30
    stmt.setFetchSize(srcFetchSize)
    rs = stmt.executeQuery(sql)
    #load the data
    stats = pWriter.loadData(rs)
    #close the database result set, connection
    rs.close()
    stmt.close()

  • FDM validation report with source data gives 0

    Hi All,
    I'm building a validation report to report on the source data (so using ~ instead of |).
    I've created a logic group to sum up the amounts to a member TOTAL.
    When I use the validation editor lookup screen I can browse for Entity, Account (TOTAL), UD1, UD2 and UD3.
    The Category, Period and Year are left blank.
    When trying to test, I have to select a test entity (from the target entity dimension). However, these entities are not among my source entities (since they are mapped).
    So the lookup gives a value of 0 while a value should be there (since I've selected all dimensions).
    Any suggestions? I remember that in the past I just selected a random test entity and it did work.
    We're using version 11.1.1.3.
    Thanks!

    Hi Tony,
    What do you mean by target information?
    I only retrieve source information, but I have to select an entity in the test window (where only the target entities are shown).
    So all fields in the import screen (except those in the POV) are defined and match the import screen members exactly.
    Furthermore, when I click browse for the years I get an error as well.

  • MRP Wizard  - Source data selection

    Currently the MRP Wizard Source Data selection does not allow the selection of a single (or selected) Sales Order (SO).
    Enabling this would allow easy MRP planning around a specific order (or orders) received. (For instance, an order is received for a task that is actually a complete project. Buying (and production orders) is done dedicated to this project. And even stored separately, by project.)
    All sorts of workarounds exist - creating forecasts from the sales order, etc. - which, by the way, are not very elegantly supported by the objects in the SDK either. So in the end, the missing link is the ability to select the MRP Wizard source by SO.
    By extension: a Sales Order can easily be moved to a Delivery Note via a Pick List (i.e. the item is available off the shelf).
    However, a Sales Order cannot equally be moved to a Delivery Note via a Production Order (i.e. the item has to be made), either via the MRP Wizard or manually. An easy "Copy To" (Production Order) extension would facilitate this process greatly.
    Where multiple Production Orders are generated (say via the MRP Wizard), all from one Sales Order, can the (currently for-information-only) Sales Order field on the Production Order also be updated when the Production Order is generated by the MRP Wizard (or the SDK, for that matter)?
    Can this be considered?

    Hi Sam,
    Please check this thread to find some clues:
    /message/5718460#5718460 [original link is broken]
    Thanks,
    Gordon

  • FDM drill-through to source data residing in SQL Server

    Hi-
    I am trying to create an FDM app allowing for drill-through down to the source residing in SQL Server tables. Is it currently possible with a custom integration script which pulls data from the source, plus some out-of-the-box functionality which traces back to the source data? The drill-through stops at the FDM work table, but I need to be able to see live data in the source. What do I need to do or install to achieve that? Thank you in advance.

    Well, that's news to me. I know there are some custom solutions offered by partners which can drill back to any ODBC source, but I've not seen an Oracle offering. Marketing literature is not always the most reliable source of information :)

  • How to skip the date in breakdown maint order

    Dear PM Guru's,
    Please suggest how to skip a date in between two working days. Please check the scenario below: based on the factory calendar (production), Saturday was a holiday. A breakdown maintenance order was opened on Friday afternoon and closed on Sunday afternoon. Now the user wants only the breakdown hours for the days/hours on which production was running. Production ran until early Saturday morning and resumed Sunday morning, whereas Saturday production was not planned and nothing was produced, so the user wants to skip that time from the breakdown duration calculation. Please advise whether this is the right practice, and if it is, how do we do it in SAP?
    Thanks in advance to all the PM gurus.
    regards
    Jalu

    Hi,
        check this link: Equipment Downtime
    I believe you may have to recalculate the breakdown hours using some FM in user exit QQMA0014 (QM/PM/SM: Checks before saving a notification).
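    (Illustration only, not an SAP function module: the recalculation hinted at above boils down to summing only the overlap of the breakdown interval with the working periods of the factory calendar. A rough Python sketch of that arithmetic, with hypothetical shift times:)
    from datetime import datetime

    def overlap_hours(start, end, working_periods):
        # Sum the hours of [start, end] that fall inside the working periods.
        total = 0.0
        for p_start, p_end in working_periods:
            lo, hi = max(start, p_start), min(end, p_end)
            if lo < hi:
                total += (hi - lo).total_seconds() / 3600.0
        return total

    # Breakdown from Friday 14:00 to Sunday 14:00; Saturday is a holiday.
    breakdown_start = datetime(2024, 5, 10, 14, 0)
    breakdown_end   = datetime(2024, 5, 12, 14, 0)
    working_periods = [
        (datetime(2024, 5, 10, 6, 0), datetime(2024, 5, 11, 2, 0)),   # Friday shift(s)
        (datetime(2024, 5, 12, 6, 0), datetime(2024, 5, 12, 22, 0)),  # Sunday shift(s)
    ]
    print(overlap_hours(breakdown_start, breakdown_end, working_periods))  # 20.0 hours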
