EBP PO: Unable to duplicate/copy item, GR_NON_VAL issue

Hello,
I am using SRM 5.0.
In Process PO, when I create a PO with more than one line item, the following issue occurs:
I enter one line item and check it, and everything is OK. But when I click the <b>Duplicate Selected Item</b> or <b>Copy</b> push button and then check again, the following errors appear:
<b><i>Flag 'Automatic Settlement' at item level is different; Change not possible
Flag 'Invoice Expected' at item level is different; Change not possible</i></b>
Thanks ,
Sachin

Hello,
I have debugged the whole program and found that with a single line item everything is fine. When I click <b>Duplicate Selected Item</b>, the GR_NON_VAL indicator is set on the first line item while the second item's indicator remains blank (with a single line item the indicator was blank).
When I copy the line item, it works OK.
Because of this mismatch between the items, the following routine raises the error message:
PERFORM downward_inheritance USING    p_hgp_ecom
                                      p_hgp_icom
                                      p_guid
                                      p_object_type
                                      p_itm_icom
                                      ls_igp_icom
                                      p_changed
                             CHANGING ls_header.
Does anyone have an idea why the system behaves like this?
Thanks,
Sachin

Similar Messages

  • Prohibitory Sign and unable to copy items

    I restarted my Mac tonight and was presented with the Prohibitory Sign. I was able to reboot using a backup drive I have and also able to see the contents of my Mac.
    Before I do an Archive and Install, which unfortunately looks like the only option available, I wanted to back up some stuff I've been working on since I last backed up (around 5 days ago).
    The thing is, when I try to copy items from my Mac to the backup drive, it starts to copy and then I get the error: The Finder cannot complete the operation because some data could not be read or written. (Error code -36)
    I seem to be able to copy items from the Backup drive to my Mac though.
    Any ideas and help would be much appreciated. It's getting late now and my mind is starting to go so I think I'll leave it until the morning now in case I do something stupid!

    I had to take my MBP to the Apple Repair Centre who diagnosed the fault as a blown HD. They replaced it and all works well now.

  • I set up iPhoto v9.6 to copy items to library. Now I would like to change it to not copy. Do I have to start completely over? I have 106GB of photos and tags in my library. Will I lose all that work?

    I set up iPhoto v9.6 to copy items to library. Now I would like to change it to not copy. Do I have to start completely over? I have 106GB of photos and tags in my library. Will I lose all that work? Originally I thought by copying items to library I would be making an exact copy of my originals and that would serve as a duplicate or backup copy. My originals are on my EHD#1 and my iPhoto library is on my EHD#2. But now I realize that I cannot view my iPhoto library unless my EHD#2 is connected to my computer (duh!!) So now I would like to NOT copy items to library and I want to have the library in the Pictures folder on my laptop HD. So I guess my question is do I have to start completely over? Is there some way to salvage all the work I've done so far?

    Yes you would have to start over from scratch.
    But you should really think this through very carefully. Experienced users do not recommend using iPhoto in this mode, and with very good reasons. Among them:
    1. It's more work for exactly no gain
    2. It's unreliable - especially - when the library is on one volume and the photos are on another. One change to the path between the files and the library and you'll be reconnecting every photo in the Library back to the database, one by one.
    For more on iPhoto and file management see this User Tip:
    https://discussions.apple.com/docs/DOC-6361

  • Unable to cancel copy process

    I inserted a DVD into my laptop so that I would be able to transfer the music to my laptop. I dragged the file from the DVD to the desktop, and it began to transfer/copy the files. It then said one of the files was unable to be copied, and it almost froze. I have tried hitting the x to get it to cancel. But the box is still there saying Copying 185 items to "Desktop". It has not moved since I started it and told it to cancel. And it will not allow me to eject the disc since it says it is in use. I don't know how to fix this and eject the dvd. Help would be greatly appreciated.

    Try this:
    Click the Apple icon in the upper left corner.
    Choose *Force Quit*.
    Click Finder and then click the Relaunch button. Finder should close & reopen. You can check this by going to the dock and looking for the small blue circle underneath the Finder icon.
    If the small blue circle does not appear, save your work in your applications, close the applications & restart the computer.
    ~Lyssa

  • Move/Duplicate/Copy

    All,
    Ok, so essentially I'm trying to write a script so that I can batch process a whole bunch of .SVG files.
    The .SVG files come from a variety of sources... some open very cleanly with nice layer structure, etc... others open with completely wacked clipping masks grouped dozens of times, etc.... So, I need to try and "normalize" the .SVG with a batch process.
    Basically the "normalizing" process I'm trying to use involves:
    1. "Flattening" the layer structure by ungrouping any nested layers.
    1b. Moving all art objects into a single GroupItem.
    2. Move/Duplicate/Copy the GroupItem to a new Document with a predefined Artboard size.
    3. Re-sizing and positioning the GroupItem on the new Artboard based on some criteria.
    Anyway, I'm kind of mystified as to the best way to accomplish this, so I'm trying all kinds of things...
    I thought the simplest way to get all the artwork to the new Document would be to use the move() or duplicate() methods.
    However, I keep getting 'MRAP' errors for this line:
    item.move(newDoc.activeLayer, ElementPlacement.INSIDE);
    In the CS5 JavaScript Reference, the object parameter is referred to as a "relativeObject".
    Does this mean that you can only move() or duplicate() an item within the SAME document?
    Are the app.copy() and app.paste() methods the only way to move art from one Document to another?
    Thanks!

    Well, I think I found the cause of the 'MRAP' errors... I had my looping logic screwed up, so I essentially was trying to either:
    1. Move something that didn't exist.
    2. Move something that does exist into something that doesn't exist.
    3. Move something that does exist into itself.
    Also, fixing that allowed me to use the move() method to move art items (or a GroupItem) from one document to another.
    I didn't test it, but I imagine duplicate() would work as well. Both of these seem more desirable to me than the app.copy() method.
    Also, a few notes for anyone trying to do something similar, for posterity's sake:
    - Moving artwork within a document will reset the PageItem indexes, e.g. doc.pageItems[index]
    - If you remove all the artwork from a GroupItem, the GroupItem will remain in doc.pageItems unless you call app.redraw()
    - If you're trying to "flatten" some random amount of artwork, CompoundPathItems (as well as GroupItems) and their respective PathItems actually take up separate indexes of the doc.pageItems array. For example:
    doc.pageItems[5] // myCompoundPathItem
    doc.pageItems[6] // PathItem contained in myCompoundPathItem // Also: doc.pageItems[5].pathItems[0]
    doc.pageItems[7] // another PathItem contained in myCompoundPathItem // Also: doc.pageItems[5].pathItems[1]
    doc.pageItems[8] // a PathItem NOT contained in myCompoundPathItem
    So, generally you want to avoid moving a PathItem that has a CompoundPathItem as a parent, because it will destroy the shape and screw with your indexes to no end.

  • Duplicate calendar items

    All of a sudden (probably after this latest release), I am getting duplicate calendar items on my iPhone for certain events. From what I can tell, if I create an event in iCal and sync the iPhone, the event is put on the phone correctly. If I then edit the event in iCal and re-sync, it puts a duplicate event with the new information on the iPhone, instead of just updating the event. It's highly annoying and I don't remember it ever being like this. Anyone know how to fix this??

    I had this problem months ago on my iPod Touch (roughly when this dormant thread was started, before the 2.0 update, I think). With that update, it went away, but has recently resurfaced, perhaps with the 2.1 update or perhaps since then. I didn't notice the resurfacing of the problem until this morning, but that doesn't necessarily mean anything.
    The symptoms: When I create a calendar event in iCal, it syncs properly to my iPod. When I create a calendar event on the iPod, it does not sync to iCal. When I modify or delete an iCal-created event on my iPod, the event is not modified in iCal, and the original event is copied from iCal to the iPod, so I wind up with duplicate events (or the deleted event reappears).
    This morning, I overwrote all calendar data on my iPod, to no avail. When I first had the problem months ago, I reset the sync history and then reset the whole iPod, also to no avail; I haven't tried those more drastic steps recently.
    Is anyone else still/again having similar problems? Any solutions besides telling Apple and waiting for a fix?

  • Unable to duplicate DVD created on iDVD

    I have a DVD that I created using iDVD on my MacBook Pro.
    I brought this DVD to a duplication service and they were unable to duplicate it because they said the chapters were not "closed out". The DVD plays fine in several different DVD players and on both Mac and PC computers. However, the duplication company is not able to provide the duplication service with the DVD the way it is. I was unable to find any way to "close out" the chapters in iDVD.
    I used the "Transparent Blue" theme from "Old Themes" and used iDVD 7.0.4.
    Anyone have a clue how to fix this problem?
    Use a different duplication company?
    Upgrade to DVD Studio Pro?

    Welcome to the forums!
    That is strange! DVDs made with iDVD are in the standard format of mpeg2 and should present no such problems.
    How many copies do you need? As long as you allow your Mac to rest and cool down every 3 copies or so, you can do this yourself.
    How to copy previously-burned DVD-R video discs
    http://support.apple.com/kb/HT2059

  • [AS][CS3] duplicate page items in a specific position on another doc

    Hello,
    I need to copy or duplicate page items of the active page of doc 1 in a specific position of the active page of doc 2.
    I can duplicate the page items of doc 1 into the doc 2 (with the code below) but can't find a way to duplicate them in a specific position: they are always duplicated in the same position on the active page of doc 2.
    The current working code (simplified) is:
    tell application "Adobe InDesign CS3"
    tell document 1
    tell layout window 1
    set pageitemlist to every page item of active page
    end tell
    end tell
    duplicate pageitemlist to the active page of layout window 1 of document 2
    end tell
    Thanks for your help!

    -- This works in CS4:
    tell application "Adobe InDesign CS4"
        tell document 1
            tell page 2
                set d to duplicate page item "textframe1" -- the text frame labeled "textframe1"
                -- FYI, d has a class of list:
                -- "text frame id 328 of page id 237 of spread id 232 of document "Untitled-2" of application "Adobe InDesign CS4""
            end tell
            move d to page 4 -- moves to 0,0 on pg4
        end tell
    end tell

  • Using "PARAMETER.param_name  as a "Copy item from  value" reference

    I am using Forms 9i and I am having a problem using a parameter as a reference item. The parameter is there and I am spelling it correctly; I have even cut and pasted the parameter name into the Copy item value. No matter what I do I get the same result.
    FRM-30047: Cannot resolve item reference PARAMETER.PROJECT_UID.
    Item: PROJECT_UID
    Block: GRANT_SUMMARY
    Form: GRANTS
    FRM-30085: Unable to adjust form for output.
    I have even added a colon to the word PARAMETER but this too fails.
    Suggestions please

    You have to assign it programmatically, as 'Copy item from Value' is mainly there to maintain master-detail relationships. It actually expects a block item rather than a parameter.
    Antony.

  • Download Helper, even with paid converter upgrade, gives "Invalid Capture File" errors and will not record audio, with "File Creation Error - Unable to rename/copy audio file" Error.

    Download Helper Screen Capture worked to capture video if the default "no audio" option is active. But, no audio. The "speakers" or "microphone" audio options are confusing....the audio to be captured is from the video, so what do you choose? With either "speakers" or "microphone" selected, the captured file has poor audio and no video. Re-capture efforts (speakers) get "Invalid capture file error" and "File Creation error- Unable to rename/copy audio file"
    The paid upgrade of "Converter" doesn't work.
    Instructive documentation - not very good.
    Suggestions - Need time delay between initiation of "Record" and starting the video to be recorded.
    Could use timer tracking of the record process.
    Are there operating system limitations? (Have Windows XP Pro)

    That is an issue for the developer of that Download Helper.

  • BTREE and duplicate data items : over 300 people read this,nobody answers?

    I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if, in my case, it would be more efficient to have a b-tree with the combined (4-byte integer, 8-byte integer) as the key and a zero-length or 1-byte dummy data item (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
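    Roughly, the prefix scan I have in mind would look something like this (just a sketch against the C cursor API, with a made-up helper name, and assuming the 4-byte key plus 8-byte value are packed MSB-first into one 12-byte composite key with zero-length data):
          #include <string.h>
          #include <db.h>

          /* Sketch: visit every 8-byte value stored under a given 4-byte key
           * when both are packed into a single 12-byte composite key. */
          int scan_prefix(DB *dbp, const unsigned char key4[4])
          {
               DBC *cursor;
               DBT key, data;
               unsigned char buf[12];
               int ret;

               memset(buf, 0, sizeof(buf));
               memcpy(buf, key4, 4);          /* prefix + smallest possible suffix */

               memset(&key, 0, sizeof(key));
               memset(&data, 0, sizeof(data));
               key.data = buf;
               key.size = sizeof(buf);

               if ((ret = dbp->cursor(dbp, NULL, &cursor, 0)) != 0)
                    return (ret);

               /* Position at the first composite key >= the prefix... */
               ret = cursor->c_get(cursor, &key, &data, DB_SET_RANGE);
               while (ret == 0) {
                    /* ...and stop as soon as the 4-byte prefix changes. */
                    if (key.size < 4 || memcmp(key.data, key4, 4) != 0)
                         break;
                    /* bytes 4..11 of key.data hold the associated 8-byte value */
                    ret = cursor->c_get(cursor, &key, &data, DB_NEXT);
               }

               cursor->c_close(cursor);
               return (ret == DB_NOTFOUND ? 0 : ret);
          }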
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
         while (i < hcp->dup_tlen) {
              memcpy(&len, data, sizeof(db_indx_t));
              data += sizeof(db_indx_t);
              DB_SET_DBT(cur, data, len);
              * If we find an exact match, we're done. If in a sorted
              * duplicate set and the item is larger than our test item,
              * we're done. In the latter case, if permitting partial
              * matches, it's not a failure.
              *cmpp = func(dbp, dbt, &cur);
              if (*cmpp == 0)
                   break;
              if (*cmpp < 0 && dbp->dup_compare != NULL) {
                   if (flags == DB_GET_BOTH_RANGE)
                        *cmpp = 0;
                   break;
    What's the expert opinion on this subject?
    Vincent

    Hi,
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages! In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items. So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application. However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity (which implies that the search time depends on the number of keys stored in the underlying db tree). When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate. Thus, given the fact that for each key there is (in most cases) a large number of associated data items (up to thousands or tens of thousands), an impressive number of pages have to be brought into the cache to check against the duplicate criteria.
    Of course, the problem of sizing the cache and the database's pages arises here. These settings should tend toward large values, so that the cache can accommodate large pages (in which hundreds of records can be hosted).
    Setting the cache and the page size to their ideal values is a process of experimenting.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
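    For example, the setup might look along these lines (only a rough sketch; the environment path, file name and sizes are made up and would need tuning for your workload):
          #include <db.h>

          /* Sketch: environment with a large cache plus a btree with large
           * pages and sorted duplicates (error handling omitted). */
          int open_db(DB_ENV **envp, DB **dbpp)
          {
               DB_ENV *dbenv;
               DB *dbp;

               db_env_create(&dbenv, 0);
               /* 256MB cache in one region; adjust after experimenting. */
               dbenv->set_cachesize(dbenv, 0, 256 * 1024 * 1024, 1);
               dbenv->open(dbenv, "/path/to/env", DB_CREATE | DB_INIT_MPOOL, 0);

               db_create(&dbp, dbenv, 0);
               dbp->set_pagesize(dbp, 8192);
               dbp->set_flags(dbp, DB_DUPSORT);
               dbp->open(dbp, NULL, "data.db", NULL, DB_BTREE, DB_CREATE, 0644);

               *envp = dbenv;
               *dbpp = dbp;
               return (0);
          }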
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items. I wonder if, in my case, it would be more efficient to have a b-tree with the combined (4-byte integer, 8-byte integer) as the key and a zero-length or 1-byte dummy data item (in case zero-length is not an option).
    Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback. You can have records with a zero-length data portion.
    Also, you could provide more information on whether or not you're using an environment and, if so, how you configured it. Have you thought of using multiple threads to load the data?
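    Just to illustrate the zero-length data portion (again only a sketch; put_pair is a hypothetical helper that assumes the 12-byte composite key layout discussed above):
          #include <string.h>
          #include <db.h>

          /* Sketch: store one (4-byte key, 8-byte value) pair as a 12-byte
           * composite key with a zero-length data portion. */
          int put_pair(DB *dbp, const unsigned char key4[4], const unsigned char val8[8])
          {
               DBT key, data;
               unsigned char buf[12];

               memcpy(buf, key4, 4);         /* MSB-first key prefix */
               memcpy(buf + 4, val8, 8);     /* MSB-first value suffix */

               memset(&key, 0, sizeof(key));
               memset(&data, 0, sizeof(data));
               key.data = buf;
               key.size = sizeof(buf);
               /* data.data stays NULL and data.size stays 0: a zero-length record */

               return (dbp->put(dbp, NULL, &key, &data, DB_NOOVERWRITE));
          }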
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages".
    This is a terrible approach, since bringing an overflow page into the cache is more time consuming than bringing in a regular page, so a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
    The Hash access method does, as you observed, use a linear search within a duplicate set (the hash lookup itself is O(1), but the duplicate scan is proportional to the number of items in the bucket). Combined with the fact that you don't want duplicate data, using the Hash access method may not improve performance.
    This is a performance/tuning problem and it requires a lot of resources on our part to investigate. If you have a support contract with Oracle, please don't hesitate to put up your issue on Metalink, or indicate that you want this issue to be taken private and we will create an SR for you.
    Regards,
    Andrei

  • Syncing iPad2: PLEASE HELP ME! My iPad2 has been stuck on step 8 of 8 for a day now - waiting to copy items. What can I do to fix this? Nothing is actually happening! I'm running ios5 on an iPad 2.

    I recently upgraded to iOS 5 on my iMac and iPad. Now when I sync my iPad it gets stuck on step 8 with a prompt that says "waiting to copy items", and it stays like that for hours. When I eventually disconnect my iPad it says the sync is not complete. Now some of my album artwork is not copied onto my iPad, but in iTunes on my iMac desktop everything is still there. Pulling my hair out with this one. Please help!

    my iPad gets stuck on step 8 with a prompt that says "waiting to copy items", and it stays like that for hours
    Reset your iPad.
    Hold the On/Off Sleep/Wake button and the Home button down at the same time for at least ten seconds, until the Apple logo appears.
    I recently upgraded to iOS 5 on my iMac and iPad
    You don't need to manually sync using iOS 5.
    Help here >  Apple - iCloud - Learn how to set up iCloud on all your devices.

  • Duplicate Payment item

    Hi,
    When posting an electronic bank statement in FEBA, duplicate payment items are generated for the same reference. Can anyone tell me the root cause?
    Regards
    MRS

    Check the input file and also check the posting rules assigned to that external transaction in the EBS customization.
    Regards,
    SDNer

  • Unable to remove this item from GRIR.  It was a consignment PO receipt done

    Hi Guys,
    The user says: "unable to remove this item from GRIR. It was a consignment PO receipt done improperly, so it needs to be deleted."
    Because of this, the material document shows $6725.43- (negative) in the GR/IR account. When I checked, this transaction originated through MI10 (physical inventory differences posting). Maybe the user did it wrongly.
    Now the problem is this:
    -          How did this transaction originate?
    -          Why was it posted to the GR/IR account?
    -          How do we correct the open item in the GR/IR account?
    -          What needs to happen to rectify the mistake?
    Can anyone tell me how we can do this? I would be grateful.
    Thanks & Regards,
    Babu,
    09930154536

    Hi Jurgen,
    When I check the material document and the accounting document, they show the said amount with a negative sign, and the transaction was done via MI10, which means the differences were posted without reference to a document.
    It means the user may have entered it wrongly, I think.
    Was there any GR/IR account for MI10?
    Does MI10 not have a reversal or cancel option?
    So how do we resolve it?
    In my opinion it can be done by an FI posting.
    Please help me resolve this.
    Thanks in advance.
    Regards,
    Babu
    09930154536

  • Unable to get the item value in cursor

    I have a function which returns the organization_id for each item selected in the Sales Order window.
    From this function I populate the warehouse value on the Shipping tab whenever I tab out of the ordered item in the Sales Order form.
    But my cursor is unable to get the item id value (e.g., FOR cur_rec IN cus_l (l_item_id)). It goes directly to the last RETURN statement and displays the default value.
    Please help me out.
    FUNCTION custom_default_rule (
       p_database_object_name IN VARCHAR2,
       p_attribute_code       IN VARCHAR2)
       RETURN NUMBER
    AS
       l_line_type_rec   oe_order_cache.line_type_rec_type;
       l_item_id         NUMBER;
       p_org_id          NUMBER;

       CURSOR cus_l (p_item_id IN NUMBER)
       IS
          SELECT a.organization_id, b.element_name, b.element_value
            FROM mtl_parameters a,
                 mtl_descr_element_values b,
                 mtl_system_items_b c
           WHERE b.inventory_item_id = c.inventory_item_id
             AND a.organization_id = c.organization_id
             AND a.organization_id = c.organization_id
             AND c.inventory_item_id = p_item_id
             AND a.organization_id <> a.master_organization_id
           ORDER BY a.organization_id;

       CURSOR cur_org (p_org_id IN NUMBER)
       IS
          SELECT organization_code
            FROM mtl_parameters
           WHERE organization_id = p_org_id;
    BEGIN
       l_line_type_rec :=
          oe_order_cache.load_line_type (ont_line_def_hdlr.g_record.line_type_id);
       l_item_id := ont_line_def_hdlr.g_record.inventory_item_id;

       FOR cur_rec IN cus_l (l_item_id)
       LOOP
          IF     cur_rec.element_name IN
                    ('Frequency', 'Emission Norms', 'Voltage',
                     'Duty Rating', 'Phase', 'Product Family')
             AND cur_rec.element_value IN
                    ('50', 'Dual', 'Euro', '210', '230', '440', 'Low',
                     'Medium', 'Heavy', 'Three', 'QSK60', 'QSK15',
                     'QSK10', 'DQK50')
          THEN
             RETURN cur_rec.organization_id;
    If you want more info about the function, refer to the thread "PL/SQL API + Defaulting Rules in OM" on Forums.oracle.com.
    Please help me out. This is very urgent.
    Thanks & Regards,
    Sateesh Kumar

    Hi Suresh,
    I tried like below:
    create or replace FUNCTION custom_default_rule (
       p_database_object_name IN VARCHAR2,
       p_attribute_code       IN VARCHAR2)
       RETURN NUMBER
    AS
       l_line_type_rec   oe_order_cache.line_type_rec_type;
       l_line_rec        OE_AK_ORDER_LINES_V%ROWTYPE;
       l_item_id         NUMBER;
       p_org_id          NUMBER;

       CURSOR cus_l (p_item_id IN NUMBER)
       IS
          SELECT a.organization_id, b.element_name, b.element_value
            FROM mtl_parameters a,
                 mtl_descr_element_values b,
                 mtl_system_items_b c
           WHERE b.inventory_item_id = c.inventory_item_id
             AND a.organization_id = c.organization_id
             AND a.organization_id = c.organization_id
             AND c.inventory_item_id = p_item_id
             AND a.organization_id <> a.master_organization_id
           ORDER BY a.organization_id;

       CURSOR cur_org (p_org_id IN NUMBER)
       IS
          SELECT organization_code
            FROM mtl_parameters
           WHERE organization_id = p_org_id;
    BEGIN
       l_line_type_rec :=
          oe_order_cache.load_line_type (ont_line_def_hdlr.g_record.line_type_id);
       l_line_rec := ONT_line_Def_Hdlr.g_record;
       -- l_item_id := l_line_rec.inventory_item_id;
       -- FOR cur_rec IN cus_l (l_item_id)
       FOR cur_rec IN cus_l (l_line_rec.inventory_item_id)
       LOOP
          IF     cur_rec.element_name IN
                    ('Frequency', 'Emission Norms', 'Voltage',
                     'Duty Rating', 'Phase', 'Product Family')
             AND cur_rec.element_value IN
                    ('50', 'Dual', 'Euro', '210', '230', '440', 'Low',
                     'Medium', 'Heavy', 'Three', 'QSK60', 'QSK15',
                     'QSK10', 'DQK50')
          THEN
             RETURN cur_rec.organization_id;
             FOR cur_rec_org IN cur_org (p_org_id)
             LOOP
                RETURN cur_rec_org.organization_code;
             END LOOP;
          END IF;
       END LOOP;
       RETURN '204';
    EXCEPTION
       WHEN OTHERS
       THEN
          IF oe_msg_pub.check_msg_level (oe_msg_pub.g_msg_lvl_unexp_error)
          THEN
             oe_msg_pub.add_exc_msg ('OE_Default_PVT', 'CUSTOM_DEFAULT_RULE');
          END IF;
          RAISE fnd_api.g_exc_unexpected_error;
    END custom_default_rule;
    This function executes without errors, but it returns the value of the final RETURN statement (i.e., 204).
    The inventory_item_id value is not reaching my cursor.
    Please help me out... it is very urgent.
    Thanks & Regards,
    Sateesh Kumar S

Maybe you are looking for

  • HT1536 ERROR 2048

    I try to play some videos with QT and see a message: error 2048, couldn't play this file mod.oo1. My camera is a JVC, and a year ago QT was OK. Thanks.

  • BW-APO Training

    Hi All, currently I am in Hyderabad. Can anybody suggest a good training institute for BW-APO? Regards, Madhu.

  • Info about configuring queuespace's size in Tuxedo 6.5

    Hi, I am looking for information about how I can calculate the size of a Tuxedo/Q queuespace. What are the parameters that I have to analyze in order to make a good "setting"? Thanks

  • Jdev 11 Question

    Hi Sir/Madam, I have developed master-detail forms in Oracle Forms, with import and export Excel functions. Please advise whether JDev 11 can do this or not. By master-detail forms I mean there is no need for a separate page to insert detail records. If yes, wher

  • Trouble shooting and Error handling / Alert Messages

    Hi, SAP - XI - File or File - XI - SAP. 1. I am presuming that when SAP is down, alert management can send a message to the receiver. 2. But when XI is down, can I send an alert to the receiver that XI is down? How can I achieve this? Please provide me the st