Open PO Concept

I want to know whether anybody has invested time in customizing an "open PO" concept. It is essentially blanket PO functionality, except that instead of managing against amount and price, we need to manage against quantity and price and create releases for the PO. I know this is a unique requirement.
Thx.

Hi,
I don't think this feature is possible in the standard.
But I would suggest a workaround: why don't you use a database trigger to restrict further releases once the required quantity is reached?

Similar Messages

  • Reg Open Cursor Concept

    Friends,
    Please kindly help me analyse this dump.
    In a BI end routine, a SELECT * query has been written to fetch values from an active DSO, using a non-primary-key field in the WHERE condition.
    More than 2 crore (20 million) records are available in that DSO for that condition, and executing this query leads to a dump.
    If I move ahead with OPEN CURSOR ... WITH HOLD, will this query work, and can I fetch the 20 million records that way?
    The dump says to check the parameters ztta/roll_area, ztta/roll_extension and abap/heap_area_total. I checked those parameters in RZ11 and the current values are sufficient.
    Please advise on this dump, and on whether OPEN CURSOR ... WITH HOLD will avoid the dump for 20 million records.
    Thanks
    Edited by: Suhas Saha on Sep 29, 2011 1:06 PM

    I am not completely convinced: the difference depends on the task that has to be done with the records of a package.
    If the records are processed and the result must be written to another database table, then it is necessary to COMMIT the changes in between; otherwise the 20,000,000 records will cause an overflow in the redo logs of the database.
    => Only OPEN CURSOR WITH HOLD will survive the DB commit.
    SELECT ... PACKAGE SIZE is simpler and can be used if the result is further processed and reduced in size, which makes it possible to keep all data in memory (no intermediate DB commits are necessary).
    I would assume that case is the less frequent one.
    Siegfried
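    To make the distinction concrete, here is a minimal sketch of the OPEN CURSOR WITH HOLD pattern; the table name ZSOURCE and the package size are illustrative placeholders, not from the thread:

    ```abap
    * Sketch: process a huge table in packages while committing in between.
    DATA: lt_package TYPE STANDARD TABLE OF zsource,
          l_cursor   TYPE cursor.

    OPEN CURSOR WITH HOLD l_cursor FOR
      SELECT * FROM zsource.

    DO.
      FETCH NEXT CURSOR l_cursor
        INTO TABLE lt_package
        PACKAGE SIZE 10000.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
    * ... process lt_package and write results to another table ...
      COMMIT WORK.    " WITH HOLD keeps the cursor open across the commit
    ENDDO.

    CLOSE CURSOR l_cursor.
    ```

    Without WITH HOLD, the COMMIT WORK would invalidate the cursor. With SELECT ... PACKAGE SIZE the loop looks similar, but no database commit may occur inside it.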

  • Open PO concept in SRM

    Hi,
    How can we create an open PO in SRM? Our requirement is to create multiple invoices against one PO.
    E.g. a PO has a quantity of 1 with a value of 10,000 Rs. We want to create 10 invoices based on the value, not on the quantity. How can we achieve this?
    Regards
    Nishant Bansal

    Hi Nishant,
    There is standard SAP-delivered functionality in SRM called a limit PO, which you can use like the blanket PO of ECC: you create a PO for a value and a time limit, and within that value and time limit you can post invoices against the PO.
    Limit items
      You can process the value limit (the maximum purchase order value that goods or services must not exceed) and the expected value (the purchaser's estimate, either below or equal to the value limit). The expected value serves to update the commitment in Controlling, and the budget is reduced by this amount.
      You can spread the value limit over partial limits (contract limits and remaining limits) or establish an undefined value limit. The sum of the contract limits created may be greater than the value limit. The system checks and ensures that the value of the ordered items is not greater than the value limit.
      You can assign the relevant product category and define the period in which the goods should be delivered or the service performed.
    Thanks
    Sharad

  • Open hub service - Cube as source system

    Dear gurus,
    I am new to the open hub concept.
    I need to transfer values from my query to a custom table in R/3.
    I'm using BI 7.
    Questions:
    1) I guess open hub is my solution. Is that correct?
    2) Do I have to transfer values from my cube, or can I send values from my query too?
    3) I have created an open hub service under the Modelling tab and selected target type TAB. It says database table /BIC/OHZTABLO.
    Is that correct?
    4) Then I am trying to create a transformation from open hub ZTABLO to /BIC/OHZTABLO.
    In that case it shows that the transformation target is DEST-ZTABLO. Does that mean the open hub service is my destination? Then what is the source? I am confused on that point.
    5) Thanks all!

    1) I guess open hub is my solution. Is that correct?
    -> If you want to transfer data from your query, then APD (Analysis Process Designer) is the best bet. Create an APD and schedule it in a process chain.
    2) Do I have to transfer values from my cube, or can I send values from my query too?
    -> Using OH (open hub) you have to use the cube as the source; you can't use a query.
    3) I have created an open hub service under the Modelling tab and selected target type TAB. It says database table /BIC/OHZTABLO. Is that correct?
    -> Yes, correct. There are two ways to do it: either you send the cube data to an open hub table (as in your case), or you send it to the application server / a flat file.
    4) Then I am trying to create a transformation from open hub ZTABLO to /BIC/OHZTABLO. It shows that the transformation target is DEST-ZTABLO. Is the open hub service my destination? Then what is the source?
    -> Your source is the InfoCube and the target is the open hub table /BIC/OHZTABLO.
    Thank you.

  • Why can't I open a DITA topic file?

    FrameMaker 11
    DITA 1.2
    Yes, this is a repeat of an earlier query, but I'm hoping the new title will help. I'm really stumped, and this seems like it should be simple.
    When I originally set up my EDDs early this year, I did not want anyone using the generic topic; I wanted everyone to use the task, reference, and concept topics only, until they got more experienced. Now, however, I want to have the Topic topic available so I can create a collection topic for conrefs. However, when I try to open a topic from the FrameMaker DITA menu > New DITA File > New <topic>…, it opens a concept rather than a topic.
    I can’t remember what I did to restrict the use of topic, therefore I’m at a loss as to how to make it available now. It seems that it should be easy.
    BTW, topic is at the top of my structapps file, FWIW, but that structapps file was not in my roaming directory. I moved it there but that made no difference.
    Please help.
    Marsha

    Hi Marsha...
    The items available for selection on the New DITA File menu are defined by the current set of structure applications (in the application mapping dialog) specified in the DITA Options dialog. I believe that any element in any of these apps that is identified as "valid at top level" will show up in the list of New DITA file types. (I may not be totally correct here, but this is the general idea.)
    I have no idea why you'd get a "concept" topic when you select "topic" from the list .. but possibly your "topic" application has a "concept" element in it?
    You may need to try to compare the default structure application definitions and default EDDs with your custom versions. I assume that you cloned the default apps and created your own versions so it's easy to switch back and forth?
    You should not be moving or copying the structapps files around. There are only two in use at any time .. one in the "default" location (FMDIR\Structure\XML .. this is the "global" app definition file), and one in the "user" location (which will likely be at "%appdata%\Adobe\FrameMaker\11"). If you open these from the StructureTools menu, they will open the right files. The "global" file has the default apps, and the "user" file has your custom apps.
    Sorry I can't be of more help .. it's hard to know what was done so can't say how to fix it.
    Cheers,
    ...scott

  • Email report as attachment with link to webi version

    I want to schedule a webi report such that the report is sent as an excel attachment.
    In addition I want to provide a URL to the report that the user can click to view the report in Webi in the browser with live/current data if required. The actual report will also have options to slice and dice the data so user can interact with the data and derive any other information the he/she needs.
    I've got the email and attachment part taken care of.
    How do I provide a URL to the original report (not the instance of the report)? I am using SAP BO 4.1.

    Hi Vivek,
    try this:
    Use the OpenDocument concept here and include the OpenDocument URL in the email body as a hyperlink.
    Ex: http://<Servername>:<PortNo>/BOE/OpenDocument/opendoc/openDocument.jsp?iDocID=
    Thanks,
    Venkat

  • Problem in XML to ITAB conversion - Application server files

    Hi Friends
    Earlier we used the FTP concept, so I received the XML file and its length from FTP function modules, but now instead of FTP I am going to use the following function modules:
    SCMS_BINARY_TO_XSTRING
    SMUM_XML_PARSE
    For that I need to pass the length of the file, but I don't know how to calculate the file length when receiving the files using the OPEN DATASET concept.
    Kindly provide your inputs.
    Thanks
    Gowrishankar

    Could you please tell us how you solved it?
    Greetings,
    Blag.
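    For reference, one way to get the length when reading the file with OPEN DATASET is to read it in binary mode into an xstring and take XSTRLEN; this is a sketch, and the file path is a placeholder:

    ```abap
    * Sketch: read a server file in binary mode and determine its length.
    DATA: l_file   TYPE string VALUE '/tmp/input.xml',  " placeholder path
          l_xdata  TYPE xstring,
          l_length TYPE i.

    OPEN DATASET l_file FOR INPUT IN BINARY MODE.
    IF sy-subrc = 0.
      READ DATASET l_file INTO l_xdata.   " xstring target reads the whole file
      CLOSE DATASET l_file.
      l_length = xstrlen( l_xdata ).      " length to pass on, e.g. to SMUM_XML_PARSE
    ENDIF.
    ```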

  • Dashboard Performance Issue

    HI Ingo,
    Thanks a lot for the wonderful postings in SDN and for your blogs on SAP BI/BO Solution architecture.
    I am looking for a few clarifications on SAP BO Xcelsius dashboards.
    Though I know the limitations on component count and data volumes that can badly affect dashboard performance, we do have a requirement to handle huge data volumes and multiple components, for drill-down and complete analysis by the users. Our source data lies in an SAP BI system, and we are using BICS connectivity / Webi with Live Office for updating data in Xcelsius.
    Our requirement is complex: we should be in a position to meet user expectations for complete multidimensional analysis by different criteria.
    We have scenarios like delivery performance, where we need case fill rate, line fill rate, OTIF (On Time In Full), etc., along with alerts, and based on that we should provide short-term, medium-term and long-term analysis. (We also have scenarios from Sales, Inventory and Supply Chain which are interlinked.)
    Here are my questions,
    1.     Is there any way to provide complete functionality using large data sets to the users with the current architecture without any performance issues?
    2.     Are there any third party tools which can be used with Xcelsius for the performance improvement and handling huge volumes?
    3.     Do you suggest any alternate solution for complete functionality?
    Awaiting your response.
    Thanks & Regards,
    Paramesh Kumar Bada

    hi,
    Generally, it is better to display summarized information in Xcelsius.
    So at the database/source level, maintain aggregated information which will serve as input to Xcelsius.
    You can use Crystal Reports and Xcelsius together, because Crystal Reports can embed Xcelsius components within a report.
    So charts can be built in Xcelsius and embedded in Crystal, whereas table-related content can be developed directly in the Crystal report. Also, try to provide URLs (OpenDocument concept) from Xcelsius to Crystal Reports to drill down from summary data to detailed data.
    Regards,
    Vamsee

  • JDeveloper IDE Tool - Thank you Oracle/Sun and Java community

    I am sure it has been done many times over within the JDeveloper community discussion but I felt compelled
    to stop, think and acknowledge the efforts here.
    To Oracle, Sun and others that have contributed to this software and resources a huge THANK YOU!
    I started in this business as a COBOL programmer on the mainframe, where the only option for you to learn
    was to use a school or company mainframe to code. If you didn't have either, your only
    option was reviewing code in textbooks.
    These tools are amazing for how much power they give those of us interested in furthering our skills but
    limited in financial resources. The effort is staggering to me. Just looking at the release notes and bug
    fixes for the versions of this tool I am absolutely floored.
    Your efforts are greatly appreciated. I hope that the open source concept continues. I applaud and urge
    you to continue your efforts. You are all doing a wonderful thing really. In a time where the trend is to
    charge charge charge for anything we do on our Droids and Iphones it is good to see a humane effort
    flourish.
    Brian Quinn
    IT Application Developer
    Edited by: 814126 on Nov 21, 2010 8:40 AM

    Always bear in mind that there is no such thing in life, called a free lunch.
    I quote:
    "Even if something appears to be free, there is always a cost to the person or to society as a whole even though that cost may be hidden or distributed.
    For example, as Heinlein has one of his characters point out, a bar offering a free lunch will likely charge more for its drinks."
    http://en.wikipedia.org/wiki/There_ain%27t_no_such_thing_as_a_free_lunch
    NA
    http://nickaiva.blogspot.com

  • Using Solaris 11 Express for a multiprocessor CAD/CAM x86-64 system.

    I'm considering using Solaris 11 Express as the OS for a multiprocessor Computer Aided Design/Computer Aided Manufacturing custom built x86-64 system for my machine shop business. I would like clarification of the license in this situation.
    Would I be allowed to use Solaris 11 for free under the open-source license, or would I have to purchase a commercial license? Would I have to pay for program updates? Does only the Oracle Solaris Premier Subscription for Solaris 11 require payment, rather than the actual software as well? Would I be required to purchase an Oracle Solaris Premier Subscription to use Solaris 11 in my situation, or can I purchase one at a later date?
    I have a lot of respect for the open-source concept, and I would only use Solaris while keeping its license intact. So the answers to these questions are very important factors in my decision on whether or not my company will start to use Solaris.

    I'm afraid you'll have to contact Oracle directly to clarify what applies in your situation.

  • Uploading from File Server

    hi,
        I have a BDC program to write; however, the file is on a file server. What should I do in order to access the file on the file server? For example, the path is:
    127.1.1.10\SharedFolder\20070704.txt
    I have authorization set on it, and I have been given a user ID and password to access the folder. What should I do to read the file from the given location?

    Hi
    Use transaction CG3Y to bring the file from the application server to the local PC and write ordinary BDC code.
    Otherwise, use the OPEN DATASET concept:
    OPEN DATASET
    READ DATASET
    TRANSFER DATASET
    CLOSE DATASET
    See the sample code:
    * Declarations added so the snippet is self-contained (names follow the original).
    TYPES: BEGIN OF ty_upload,
             name1 TYPE c LENGTH 40,
             name2 TYPE c LENGTH 40,
             age   TYPE c LENGTH 3,
           END OF ty_upload.
    DATA: ld_file      TYPE string,
          wa_string    TYPE string,
          wa_uploadtxt TYPE ty_upload,
          wa_upload    TYPE ty_upload,
          it_record    TYPE STANDARD TABLE OF ty_upload.
    CONSTANTS: con_tab TYPE c VALUE cl_abap_char_utilities=>horizontal_tab.

    START-OF-SELECTION.
    ld_file = p_infile.
    OPEN DATASET ld_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc NE 0.
    * File could not be opened.
    ELSE.
      DO.
        CLEAR: wa_string, wa_uploadtxt.
        READ DATASET ld_file INTO wa_string.
        IF sy-subrc NE 0.
          EXIT.
        ELSE.
          SPLIT wa_string AT con_tab INTO wa_uploadtxt-name1
                                          wa_uploadtxt-name2
                                          wa_uploadtxt-age.
          MOVE-CORRESPONDING wa_uploadtxt TO wa_upload.
          APPEND wa_upload TO it_record.
        ENDIF.
      ENDDO.
      CLOSE DATASET ld_file.
    ENDIF.
    Regards
    Anji

  • How to write read dataset statement in unicode

    Hi All,
    I am writing a program using the open dataset concept.
    I am using the following code:
        PERFORM FILE_OPEN_INPUT USING P_P_IFIL.
        READ DATASET P_P_IFIL INTO V_WA.
        IF SY-SUBRC <> 0.
          V_ABORT = C_X.
          WRITE: / TEXT-108.
          PERFORM CLOSE_FILE USING P_P_IFIL.
        ELSE.
          V_HEADER_CT = V_HEADER_CT + 1.
        ENDIF.
    READ DATASET works in normal (non-Unicode) code,
    but in a Unicode system it dumps.
    Can you please tell me how to write READ DATASET so that it works in Unicode?
    This is very urgent.
    Regards
    Venu

    Hi Venu,
    This example deals with the opening and closing of files.
    Before Unicode conversion
    data:
      begin of STRUC,
        F1 type c,
        F2 type p,
      end of STRUC,
      DSN(30) type c value 'TEMPFILE'.
    STRUC-F1 = 'X'.
    STRUC-F2 = 42.
    * Write data to file
    open dataset DSN in text mode.    " <- Unicode error
    transfer STRUC to DSN.
    close dataset DSN.
    * Read data from file
    clear STRUC.
    open dataset DSN in text mode.    " <- Unicode error
    read dataset DSN into STRUC.
    close dataset DSN.
    write: / STRUC-F1, STRUC-F2.
    This example program cannot be executed in Unicode for two reasons. Firstly, in Unicode programs, the file format must be specified more precisely for OPEN DATASET and, secondly, only purely character-type structures can still be written to text files.
    Depending on whether the old file format still has to be read or whether it is possible to store the data in a new format, there are various possible conversion variants, two of which are introduced here.
    After Unicode conversion
    Case 1: New textual storage in UTF-8 format
    data:
      begin of STRUC2,
        F1 type c,
        F2(20) type c,
      end of STRUC2.
    * Put data into text format
    move-corresponding STRUC to STRUC2.
    * Write data to file
    open dataset DSN in text mode for output encoding utf-8.
    transfer STRUC2 to DSN.
    close dataset DSN.
    * Read data from file
    clear STRUC.
    open dataset DSN in text mode for input encoding utf-8.
    read dataset DSN into STRUC2.
    close dataset DSN.
    move-corresponding STRUC2 to STRUC.
    write: / STRUC-F1, STRUC-F2.
    The textual storage in UTF-8 format ensures that the created files are platform-independent.
    After Unicode conversion
    Case 2: Old non-Unicode format must be retained
    * Write data to file
    open dataset DSN in legacy text mode for output.
    transfer STRUC to DSN.
    close dataset DSN.
    * Read data from file
    clear STRUC.
    open dataset DSN in legacy text mode for input.
    read dataset DSN into STRUC.
    close dataset DSN.
    write: / STRUC-F1, STRUC-F2.
    Using the LEGACY TEXT MODE ensures that the data is stored and read in the old non-Unicode format. In this mode, it is also possible to read or write non-character-type structures. However, be aware that data loss and conversion errors can occur in Unicode systems if there are characters in the structure that cannot be represented in the non-Unicode codepage.
    Regards
    Sathish

  • Mail is frozen... Help!

    Well, I received an email and it's a concept...
    but when I open it, Mail freezes.
    If I force it to stop, it closes,
    but when I open Mail again, it opens that concept again and freezes again...
    Please, I need help!
    tnx

    Is it an attachment within Mail? If so, download it to the desktop and then open it, rather than trying to open it within Mail. Or have you already tried that?

  • Hi one qury

    1. Can we do background scheduling for GUI_UPLOAD before creating the session? I mean, can we upload the data from a flat file to an internal table in a background process?
    Is there any possible way to do this?

    Hi
    No.
    We can't run programs that use the GUI_UPLOAD or GUI_DOWNLOAD function modules in the background.
    To run in the background, we have to keep the files on the application server and use the OPEN DATASET concept.
    Regards
    Anji
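    A minimal sketch of the background-safe approach; the path is a placeholder and the lines are collected unparsed:

    ```abap
    * Sketch: read an application-server file in a background job.
    DATA: l_file   TYPE string VALUE '/usr/sap/trans/data/upload.txt',  " placeholder
          l_line   TYPE string,
          lt_lines TYPE STANDARD TABLE OF string.

    OPEN DATASET l_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc = 0.
      DO.
        READ DATASET l_file INTO l_line.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        APPEND l_line TO lt_lines.   " parse/split each line as needed
      ENDDO.
      CLOSE DATASET l_file.
    ENDIF.
    ```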

  • Counting rows for db tables with 500 million+ entries

    Hi
    Transaction SE16 times out in the foreground when showing the number of entries for DB tables with 500 million+ entries. In the background this takes too long.
    I am writing a custom report to get the number of records of a table, using the OPEN CURSOR concept to determine the number of records.
    Is there any other efficient way to read the number of records from such huge tables?
        OPEN CURSOR l_cursor FOR
          SELECT COUNT(*)
            FROM (u_str_param-p_tabn) WHERE (l_tab_cond).
    *   u_str_param-p_tabn is the table name in input and l_tab_cond is a
    *   dynamic where condition
        DO.
          FETCH NEXT CURSOR l_cursor INTO l_new_count.
    *     Note: COUNT(*) returns a single row, so no PACKAGE SIZE is needed here.
          IF sy-subrc NE 0.
            EXIT.
          ELSE.
            l_tot_cnt = l_tot_cnt + l_new_count. " number of records at end of loop
            CLEAR l_new_count.
          ENDIF.
        ENDDO.
        CLOSE CURSOR l_cursor.

    Hello,
    For sure it is a huge number of entries!
    Are there any key fields?
    Use a variable as a counter over a key field, plus a low and a high variable, and increase the selection range by e.g. 200,000 entries on each pass of a DO loop:
    do.
      low_row  = row_table.
      low_high = row_table + 200000.
    * select into an internal table based on the key field;
    * for example, if the key field is DOCNR:
    * select ... into table itab where docnr > low_row and docnr < low_high.
    * count the lines of itab, add them to a total, then free itab.
    enddo.
    Good luck.
    Antonis
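    The key-range idea can be sketched as follows; ZHUGE, DOCNR, the range size and the upper bound are illustrative placeholders that depend on the real table and key:

    ```abap
    * Sketch: count entries of a huge table in key ranges to avoid timeouts.
    DATA: l_low   TYPE i VALUE 0,
          l_high  TYPE i,
          l_cnt   TYPE i,
          l_total TYPE i.

    DO.
      l_high = l_low + 200000.
      SELECT COUNT(*) FROM zhuge INTO l_cnt
        WHERE docnr > l_low AND docnr <= l_high.   " one small count per range
      l_total = l_total + l_cnt.
      l_low = l_high.
      IF l_high >= 500000000.   " stop once the known key range is covered
        EXIT.
      ENDIF.
    ENDDO.
    ```

    Each SELECT then touches only a slice of the index, so no single statement runs long enough to time out.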
