FORALL bulk delete statement requires a lot of time even for a 10-row deletion

Hi,
when I execute a FORALL bulk delete statement it requires a lot of time even for a 10-row deletion, so how can I avoid this problem? I checked SGA_TARGET and PGA_TARGET; current_size and max_size are the same for both. Is there a memory problem?
I executed the following code:
DECLARE
  TYPE t_id_tab IS TABLE OF test.c1%TYPE;
  l_id_tab t_id_tab := t_id_tab();
BEGIN
  SELECT c1 BULK COLLECT INTO l_id_tab FROM test WHERE c1 <= 10;
  dbms_output.put_line(DBMS_UTILITY.get_time);
  FORALL i IN l_id_tab.first .. l_id_tab.last
    DELETE FROM test WHERE c1 = l_id_tab(i);
  dbms_output.put_line(DBMS_UTILITY.get_time);
  COMMIT;
END;
thanks in advance
Vaibhav
Edited by: Vaibhav on Oct 10, 2011 10:47 PM

hi
I am working on Oracle 11g. Actually, I have to test which is the faster method to delete 150,000 records:
1st: using a FOR loop in batches of 10,000 records
2nd: using a FORALL delete
Kindly find the FORALL delete code below.
DECLARE
  TYPE t_id_tab IS TABLE OF test.c1%TYPE;
  l_id_tab t_id_tab := t_id_tab();
BEGIN
  SELECT c1 BULK COLLECT INTO l_id_tab FROM test WHERE c1 <= 10;
  dbms_output.put_line(DBMS_UTILITY.get_time);
  FORALL i IN l_id_tab.first .. l_id_tab.last
    DELETE FROM test WHERE c1 = l_id_tab(i);
  dbms_output.put_line(DBMS_UTILITY.get_time);
  COMMIT;
END;
Edited by: Vaibhav on Oct 10, 2011 10:56 PM
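
For reference, here is a minimal sketch of the first method mentioned above (a plain cursor FOR loop that deletes row by row and commits every 10,000 rows). It assumes the same TEST table and C1 column as the posted FORALL code and is intended only as a baseline for the timing comparison, not as a recommendation:
DECLARE
  l_deleted PLS_INTEGER := 0;
BEGIN
  dbms_output.put_line(DBMS_UTILITY.get_time);
  FOR r IN (SELECT c1 FROM test WHERE c1 <= 150000) LOOP
    DELETE FROM test WHERE c1 = r.c1;   -- one row at a time
    l_deleted := l_deleted + 1;
    IF MOD(l_deleted, 10000) = 0 THEN   -- commit every 10,000 deletes
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;
  dbms_output.put_line(DBMS_UTILITY.get_time);
END;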

Similar Messages

  • FORALL bulk delete statement requires a lot of time

    Hi,
    when I execute a FORALL bulk delete statement it requires a lot of time even for a 10-row deletion, so how can I avoid this problem? I checked SGA_TARGET and PGA_TARGET; current_size and max_size are the same for both. Is there a memory problem?
    thanks in advance
    Vaibhav

    Please look at this:
    http://psoug.org/definition/LIMIT.htm
    If you add a LIMIT 100 clause to the BULK COLLECT fetch, your bulk collect / forall construction will process 100 records per batch.
    This will avoid filling up undo tablespaces / PGA memory.
    There's also a discussion whether to use LIMIT 100 or LIMIT 1000 (or greater).
    It depends on what your system can handle; if it's a busy production system I'd stick to a value of 100.
    The reason for this is that your query will not take a huge amount of PGA.
    DECLARE
      TYPE t_id_tab IS TABLE OF test.c1%TYPE;
      l_id_tab t_id_tab := t_id_tab();
      CURSOR c_ids IS
        SELECT c1 FROM test WHERE c1 <= 150000;
    BEGIN
      dbms_output.put_line(DBMS_UTILITY.get_time);
      OPEN c_ids;
      LOOP
        FETCH c_ids BULK COLLECT INTO l_id_tab LIMIT 100;
        EXIT WHEN l_id_tab.COUNT = 0;
        FORALL i IN l_id_tab.first .. l_id_tab.last
          DELETE FROM test WHERE c1 = l_id_tab(i);
        COMMIT;
      END LOOP;
      CLOSE c_ids;
      dbms_output.put_line(DBMS_UTILITY.get_time);
    END;
    We use this construction to load data into our warehouse and bumped into the LIMIT 100 part while testing.
    It did speed up things significantly.
    Another (risky!) trick is to do:
    -- exclude table from archivelogging
    ALTER TABLE TEST NOLOGGING;
    -- execute plsql block
    <your code comes here>
    -- enable logging on the table
    ALTER TABLE TEST LOGGING;
    The risky part is that you get inconsistent archive logging this way, because your deletes aren't logged.
    When you do a restore from backup and want to roll forward to the latest change in your archive logs, you'll miss these deletes and your table will not be consistent with the rest of your database.
    In our warehouse situation we don't mind; the data gets refreshed every night.
    Hope this helps!
    Robin

  • Forall bulk delete is too slow to work, seek advice.

    I used a PL/SQL stored procedure to do some ETL work. It picks up refreshed records from a staging table, checks whether the same record exists in the target table, then does a FORALL bulk deletion first, then a FORALL insert of all refreshed records into the target table. The insert part is working fine; only the deletion part is too slow to get the job done. My code is listed below. Please advise where the problem is. Thanks.
    DECLARE
      TYPE t_distid IS TABLE OF VARCHAR2(15) INDEX BY BINARY_INTEGER;
      v_distid t_distid;
      CURSOR dist_delete IS
        SELECT DISTINCT distid FROM dist_stg WHERE data_type = 'H';
    BEGIN
      OPEN dist_delete;
      LOOP
        FETCH dist_delete BULK COLLECT INTO v_distid;
        EXIT WHEN v_distid.COUNT = 0;
        FORALL i IN v_distid.FIRST .. v_distid.LAST
          DELETE dist_target WHERE distid = v_distid(i);
      END LOOP;
      CLOSE dist_delete;
      COMMIT;
    END;
    /

    citicbj wrote:
    Justin:
    The answers to your questions are:
    1. Why would I not use a single DELETE statement? Because this PL/SQL procedure is part of an ETL process. The procedure is scheduled by the Oracle scheduler. It will automatically run to refresh data. Putting the DELETE in a stored procedure is better for execution by the scheduler.
    You can compile SQL inside a PL/SQL procedure / function just as easily as coding it the way you have, so that's really not an excuse. As Justin pointed out, the straight SQL approach will be what you want to use.
    >
    2. The records in dist_stg with data_type = 'H' vary each month. They range from 120 to 5,000 records. These records were inserted into the target table before, but they are updated in the transactional database. We need to delete the old records in the target and insert updated ones to replace them. The distID is the same and unique. I use distID to delete the old record and insert updated records with the same distID into the target again. When users run a report, the updated records show up on the report. As a plain SQL statement, deleting 5,000 records takes a few seconds. With my code above, it takes forever. The database keeps going without any error message. There are no triggers or FKs associated.
    3. Merge. I haven't tried that yet. I may give it a try.
    Quite likely a good idea based on what you've outlined above, but at the very least, replace the procedural code with the plain delete, as suggested by Justin.
    >
    Thanks.
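
    For illustration, a minimal sketch of the single-statement approach suggested above, using the table and column names from the posted code (the exact schema is an assumption based on that code, not the poster's actual procedure):
    DELETE FROM dist_target t
    WHERE  t.distid IN (SELECT s.distid
                        FROM   dist_stg s
                        WHERE  s.data_type = 'H');
    COMMIT;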

  • Update statement takes a lot of time. Need help with optimization.

    Hi,
    I need to update values from table ARPU_2009_2010 to table TST_MICRO_SEG_PREP.
    Table ARPU_2009_2010 contains around 400000 records. Table TST_MICRO_SEG_PREP contains 374000 customers. I need to update values for each customer in the table TST_MICRO_SEG_PREP.
    Please find below query used:
    UPDATE TST_MICRO_SEG_PREP a
    SET
    ARPU_2009_2010_REV=(select case when cnt>0 then round((sum(total_rev)/cnt),2) else 0 end arpu from ARPU_2009_2010 b
    where a.subno=b.subno
    group by subno,cnt)
    This query takes more than 4 hours to execute. All tables used have the proper indexes.
    Kindly suggest an alternative for this.
    Thanks,
    Kabilan
    Edited by: 946887 on Jul 16, 2012 11:44 PM

    >
    UPDATE table1 X
    SET
    ARPU=(select case when cnt>0 then round((sum(total_rev)/cnt),2) else 0 end arpu from table2 Y
    where X.subno=Y.subno
    group by subno,cnt)
    >
    Are you intending to update all the 400K rows in table1? If not, then you have to rewrite the query.
    >
    This query takes more than 4 hours to execute.
    >
    Did you try to monitor what it was doing while the query was executing?
    >
    I tried to execute using stored procedures and sp also takes much time.
    >
    What makes you think that a stored procedure will execute more efficiently than a single DML statement?
    >
    Kindly suggest an alternative for this.
    >
    As mentioned in the earlier post, try to follow the guidelines. Only then will it be feasible to suggest some alternatives by looking at some metrics.
    Thanks
    Raj
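
    One common rewrite, offered here only as a hedged sketch, is to drive the update from a MERGE so that only matching rows are touched and the aggregate is computed once per SUBNO. It assumes CNT is a column of ARPU_2009_2010 and that each SUBNO carries a single CNT value, which the original single-row subquery already implies:
    MERGE INTO tst_micro_seg_prep a
    USING (SELECT subno,
                  CASE WHEN cnt > 0
                       THEN ROUND(SUM(total_rev) / cnt, 2)
                       ELSE 0
                  END AS arpu
           FROM   arpu_2009_2010
           GROUP  BY subno, cnt) b
    ON (a.subno = b.subno)
    WHEN MATCHED THEN
      UPDATE SET a.arpu_2009_2010_rev = b.arpu;
    A set-based MERGE of this shape typically gives the optimizer the chance to use a single hash join rather than probing the subquery once for each of the 374,000 customer rows.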

  • Min max lot size time based for use with SNP and PPDS

    Hi all, is there any way to set up time-based min and max lot sizes? i.e. we want to have a small max lot size for use in the first 3 months of our plan, which starts life in SNP and then converts to PPDS and into blocks for block planning, but for months 4 to 36 we want to leave large max lot sizes, as there is no need for the small max lot sizes in that part of the horizon.
    As far as I can see there is only a material/plant lot size and max lot size, and no way to have a different setting in a different time period.
    Thanks
    j

    Hi John,
    As you know, in the product master the lot size maintenance is time-independent, so that obviously cannot be used for your scenario. As per my understanding, to meet this with standard functionality you can maintain multiple product-specific t-lanes (for STRs) and PDSs (for planned orders) with the required lot size ranges and validity dates (for the short- or long-term horizon). But again, since the validity of t-lanes and PDSs does not automatically roll forward, keeping the validities up to date will be a challenge.
    The other option could be to enhance the heuristic functionality at the lot size selection step while creating the order.
    Regards,
    Umesh

  • Can I have "All spots to process" checked at all times, even for new spot colors?

    When I check “All spots to process” in the pdf export settings and save my settings, the settings remember that I've checked this option. But if new spot color objects using new spot color swatches are added to the document (or another document) and I go into the pdf export settings, the check mark has been changed into a dash (with the actual checkbox highlighted) – signifying that only some of the spot colors will be changed to process colors during export. I absolutely fail to see how this could possibly be seen as a feature and not a bug … if the user has checked “ALL spots to process”, wouldn't the user expect ALL spots to be converted to process colors, rather than just any spot colors that happened to be in the document that happened to be open when the user first checked that checkbox and saved that setting?
    Am I missing something here? What's the point of even having that checkbox as part of your saved export settings if it doesn't include any other spot colors than those used when saving the settings?
    What's the point of having settings if you can't trust them, and still need to manually "override" them every time?
    I see that some users have taken to writing scripts that instead turn all spot colors in the swatch panel to process colors, and while I commend them for creating that workaround, I'm still pissed at Adobe for not getting the function right.
    If this is a feature, who is it for? People who want to add just certain spot colors and turn those into process colors rather than turning all spot colors into process colors are surely better off doing that in the swatches panel, where they're in total control of what's what. And if they don't want to "permanently" change their spot colors to process colors, and prefer to (temporarily) convert them during exporting/printing only, they can do that in the ink manager. But when someone checks convert "All spots to process" couldn't we safely assume they really want ALL spot colors to be converted and not just some of them? I mean, the way that checkbox behaves now, it's like it's a button and not a checkbox. As in: hit the button "All spots to process" to switch all currently viewed spot colors to process colors in the ink manager, OR check the "All spots to process" checkbox to always convert ALL spot colors to process colors during exporting/printing.
    Anyone got any light to shed on this?
    And is there a way to actually get the advertised behavior, because if you have to run a script every time you export/print you might as well just manually select the checkbox every time instead, but either way it's just really unnecessary as far as I'm concerned … Adobe should get the feature right instead.
    If you save a setting and recall it, it shouldn't be possible for that setting to change into something else (in this case changing a checkmark to a dash).
    Clearly CMYK printing is the norm, so for most users it would make a lot of sense to have the "All spots to process" checked most of the time, and then you just go into the swatches panel or the ink manager and set things correctly for those print jobs that really do need spot colors.
    I myself am not one of those who add spot colors to my swatches unless I'm really using them as spot colors, but I often work with magazines and folders featuring adverts made by whoever, and typically there's always at least one advert that features spot colors, and therefore it would be very nice if the "All spots to process" feature actually worked as advertised without any required actions from me.
    We stopped sending ads back to the advertisers for adjustments a long time ago, unless we absolutely had to, because there were so many things wrong with so many ads that it was simply too much work to write back and explain everything to people who most of the time didn't even understand what we were talking about. We found that it was usually a LOT faster and easier to just adapt the ads ourselves, as long as it was something that could be worked out really quickly from within InDesign itself, which pretty much included most typical errors.
    But with this feature I find Adobe is trying to make my job harder rather than easier, and it's pissing me off. Arrrghh… ;-)

    But it's not a preference, it's a shortcut
    It's a bad joke, is what it is. ;-)
    So, why in your opinion should it be presented the way it is? I keep saying that in its current functionality it shouldn't be presented the way it is (and that if it is, it shouldn't work the way it does). If it's not a preference or even a proper checkbox, why present it that way?
    If you put it right next to the table at the top of the window (so that it's directly associated with that information, rather than with the information right above it) and just called the checkbox “Spot(s) to process” and had it only visually reflect the content of the selected spot colors in the table, then I'd see your point in likening it to the “Hyphenate” checkbox.
    If a story has two selected paragraphs that use two different hyphenation settings, then the checkbox should present the way it does now, but if you hit the checkbox so that both paragraphs now use hyphenation and create a third paragraph in between the two previous ones, it had better inherit that setting and not turn off hyphenation for the new paragraph (unless of course there's a defined next paragraph style that switches to a style with hyphenation turned off). And if that checkbox said “Hyphenate all paragraphs” instead, then I would expect it to do just that, and not just the selected ones, and not just the current paragraphs but quite literally all paragraphs, even newly created ones – otherwise it doesn't do what it says it does, and simply shouldn't be labeled that way.
    And seriously bad interface design aside, you'd have to rename “All spots to process” to “Switch all currently displayed spot swatches listed in the table above to process” to actually describe what that checkbox does. So even if you're a fan of the current functionality, as opposed to one that actually lets the user set and forget a setting like that, and think it's better that users manually check it repeatedly (which I'm not saying that you are, but you're not giving me any feedback suggesting you even see my point of view with any of this, so what do I know?), then why wouldn't you still support an interface that visually matches/signals that functionality better? If it's a “Select all” checkbox supplementing a table containing a column of checkboxes, then present it that way. Don't put it at the bottom of the window next to another checkbox that works just like a regular checkbox and label it “All spots to process” – because that way you are signalling a different behavior.
    Seriously, if I were to do design using the same mentality that Adobe uses when designing their user interfaces, it wouldn't be long before I lost all clients. There's a lot to be said for de facto monopolies, I suppose. Oh no, there's nothing wrong with the design, just as long as you accept it on its own terms and don't compare it to anything relevant, and just as long as you give people enough time to understand and accept it … and surrender to it.
    For real … I wouldn't win one single pitch that way.
    Today's threads have in many ways been a thorough reminder of the following quote from the second link I provided:
    Is there an Internet rule yet stating that even the most obviously indefensible mistake will eventually be defended by someone somewhere? Awful marketing efforts get explained as genius viral campaigns, broken features become solutions.
    And whether or not you're able to see my point of view is really beside the point too.
    The real point was, and remains:
    That for those who receive lots of ads or other external files that may or may not contain spot colors it would be far more useful to be able to set a checkbox to always convert all spots to process when exporting, than the current functionality is (and I'm not suggesting eliminating the current functionality, just change so it's presented like what it really is, and then just let that separate checkbox do what it says) … causing unnecessary manual action on the user's behalf shouldn't be the business of Adobe – preventing it should.
    And here's further reading on the subject of bad Adobe interface design for those who might feel so inclined. ;-)
    Cheers!

  • FORALL bulk insert ..strange behaviour

    Hi all..
    I have the following problem..
    I use a FORALL bulk insert statement to insert a set of values using a collection that has only one row. The thing is, I get an 'ORA-01400: cannot insert NULL into <schema>.<table>.<column>' error message, yet the row has been inserted into the table!
    Any ideas why this is happening?

    Here is the sample code..
    The strange thing is that the cursor has 1 row and the array also gets 1 row.
    FUNCTION MAIN() RETURN BOOLEAN IS
      -- This cursor retrieves all necessary values from the CRD table to be
      -- inserted into the PDCS_DEFERRED_RELATIONSHIP table
      CURSOR mycursor IS
        SELECT key1,
               key2,
               column1,
               date1,
               date2,
               txn_date
        FROM   mytable pc
        WHERE
      -- create an array and a type for the scancrd cursor
      TYPE t_arraysample IS TABLE OF mycursor%ROWTYPE;
      myarrayofvalues t_arraysample;
      TYPE t_target IS TABLE OF mytable%ROWTYPE;
      la_target t_target := t_target();
    BEGIN
      OPEN mycursor;
      FETCH mycursor BULK COLLECT
        INTO myarrayofvalues
        LIMIT 1000;
      myarrayofvalues.extend(1000);
      FOR x IN 1 .. myarrayofvalues.COUNT
      LOOP
        -- fetch variables into arrays
        gn_index := gn_index + 1;
        la_target(gn_index).key1     := myarrayofvalues(x).key1;
        la_target(gn_index).key2     := myarrayofvalues(x).key2;
        la_target(gn_index).column1  := myarrayofvalues(x).column1;
        la_target(gn_index).date1    := myarrayofvalues(x).date1;
        la_target(gn_index).date2    := myarrayofvalues(x).date2;
        la_target(gn_index).txn_date := myarrayofvalues(x).txn_date;
      END LOOP;
      -- call function to insert/update the table
      IF NOT MyFunction(la_target) THEN
        ROLLBACK;
        RAISE genericError;
      ELSE
        COMMIT;
      END IF;
      CLOSE mycursor;
    END;

    FUNCTION MyFunction(t_crd IN t_arraysample) RETURN BOOLEAN IS
    BEGIN
      FORALL x IN la_target.FIRST .. la_target.LAST
        INSERT INTO mytable
        VALUES la_target(x);
    END;
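
    For comparison, here is a minimal sketch of the usual fetch-and-insert pattern. The collection is never EXTENDed after the BULK COLLECT (BULK COLLECT already sizes it, and EXTEND would only append empty, all-NULL elements), and the FORALL range is driven by COUNT. The table names are generic assumptions, not the poster's schema:
    DECLARE
      CURSOR c_src IS
        SELECT * FROM source_table;                      -- assumed source table
      TYPE t_rows IS TABLE OF source_table%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c_src;
      FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;   -- sizes l_rows itself
      CLOSE c_src;

      -- no EXTEND here: it would only pad the collection with NULL rows
      IF l_rows.COUNT > 0 THEN
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO target_table VALUES l_rows(i);     -- assumed target with the same column layout
      END IF;
      COMMIT;
    END;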

  • How can I find a missing backup on Time Machine (for Mac)?

    I backed up my computer on August 8th 2010 with Time Machine. I'm absolutely certain I did. But for some reason there's a gap. I have a backup of my computer from just before that date, and just after that date, but August 8th is missing.
    Is there any way to recover my files from this date? What's happening? It can't have started automatically deleting files because I have backups from as far back as June 2010 (when I started using time machine).
    Thank you

    See:
    A whole lot about Time Machine for help with TM problems. Also, you can select Mac Help from the Finder's Help menu and search for "time machine" to locate articles on how to use TM. See also Mac 101 - Time Machine.

  • How to Measure time taken for a some lines of code in a program?

    Hi
    I have a requirement to measure the time taken to execute a block of code in a report. How can we find it?
    Is there any way to code something in the report to calculate it?
    Please send a solution as early as possible.
    Thank you

    Ok.. try this code...
    DATA: t1        TYPE i,
          t2        TYPE i,
          delta(16) TYPE p DECIMALS 3.  "decimals so sub-second times are not rounded to zero
    GET RUN TIME FIELD t1.
    PERFORM get_data.                   "your block of code
    GET RUN TIME FIELD t2.
    delta = t2 - t1.                    "run time is returned in microseconds
    delta = delta / 1000000.
    WRITE :/ 'Time elapsed : ', delta, 'Secs'.  "time in seconds

  • Can I access a file's time stamp for calculations?

    I'm using DIAdem 10.2. We have an issue with our data, where the first cycle is corrupt if the time stamp between the current file and the previous file is less than 30 minutes. This indicates a manual intervention, and it results in skewed results.
    I want to discard the first cycle when the file time stamps are less than 30 minutes apart... but I can't find a way to do it.
    We use runtime in the file as a time channel; it begins at zero in each file, so I can't use any channel values to do this.
    Thanks in advance for the help. Completing this task will end my assignment with this team, and I'll be moving to a new position at the end of the week. Not sure if DIAdem is in my future or not, but I highly doubt it.
    Thanks in advance...

    Hi Jeff,
    What type of data files are these?  If it's a custom file format, did you write the DataPlugin that reads the file into DIAdem?  Is the acquired date/time stored inside the data file somewhere?  For a TDM or TDMS file, this would typically be in the "File.DateTime" property, which shows up in the DataFinder as "File.Storage datetime" and in the Data Portal as "File.Storage date/time".
    The file created and modified dates you get from Windows tend to change unexpectedly, so it would not be ideal to use those for a mission-critical process.
    I'm assuming that you're loading all the various data files into DIAdem, then concatenating them into contiguous channels that contain all the data from all the files.  You will then need to use the "Calculate Differences" ANALYSIS function to create a new channel that has the delta date/time values for each row. Once this is the case, you can use the Channel Calculator to find rows where the delta date/time value is invalid and set these rows equal to NoValues.
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments

  • Delete taking a lot of time

    Hi
    I have a delete statement which is taking a lot of time. If I run a SELECT for this scenario, only 500 records come back, but the delete is taking a lot of time.
    Please advise.
    delete from whs_bi.TRACK_PLAY_EVENT a
    where a.time_stamp >=to_date('5/27/2013','mm/dd/yyyy')
    and a.time_stamp < to_date('5/28/2013','mm/dd/yyyy');
    Thanks in adv.
    KPR

    Let's check the wait events.
    Open 2 sessions, 1 for running the DELETE statement and another for monitoring wait events. In the session in which you want to run the DELETE, find the SID of that session ( SELECT userenv('SID') from dual ).
    Now run the DELETE in that session (of which we have already found the SID).
    Run the following query in the other session
    select w.sid sid,
           p.spid PID,
           w.event event,
           substr(s.username,1,10) username,
           substr(s.osuser, 1,10) osuser,
           w.state state,
           w.wait_time wait_time,
           w.seconds_in_wait wis,
           substr(w.p1text||' '||to_char(w.P1)||'-'||
                  w.p2text||' '||to_char(w.P2)||'-'||
                  w.p3text||' '||to_char(w.P3), 1, 45) P1_P2_P3_TEXT
    from v$session_wait w, v$session s, v$process p
    where s.sid=w.sid
      and p.addr  = s.paddr
      and w.event not in ('SQL*Net message from client', 'pipe get')
      and s.username is not null
      and s.sid = &your_SID
    While the DELETE is running in the other session, run the above query in the second session 5-6 times, with a gap of (say) 10 seconds. If you can give us the output of the monitoring query (from all 5-6 runs), that might throw more light on what's going on under the hood.

  • iMac takes a lot of time to delete files

    My iMac takes a lot of time to delete files. Any idea why? Thanks.

    Relaunch the Finder, then from the Finder menu bar, select
    Finder ▹ Preferences ▹ Advanced
    and uncheck the box marked Empty Trash securely. Try again to empty the Trash.

  • A lot of times my contacts have been deleted from my iPhone 4S. Give me a solution; many times during my working hours I have struggled with this.

    A lot of times my contacts have been deleted from my iPhone 4S. Give me a solution; many times during my working hours I have struggled with this.

    On your iPhone, go to Settings > "Mail, Contacts, Calendars" and scroll down to the contacts section. Tap on "default account" and check what your default account is. Change it to "on my iPhone" or "iCloud".
    If there are other accounts (such as Hotmail, gmail, etc) included in that list, check the contacts section on the respective website.
    A possible explanation for your issue is that (for instance) you once had gmail listed there, and all your contacts were syncing to gmail servers. You may have then added a different account which listed itself as the default contact syncing server, replacing the original information with what is on the new server (nothing)
    If you have an Exchange server listed, then you might want to have a talk with your work's IT department...

  • Delete operation on indexed table taking a lot of time

    We are working on 10g R2. We have a script for archiving a table: it copies pre-selected records from one table (say X, an indexed table) into another table (say Y, a non-indexed table).
    After inserting the records into Y, the records are deleted from X. There are close to 50 million records to be archived this way.
    While testing the script in a development instance with a million records, we find that if we skip the delete operation the script takes ~45 minutes to execute, whereas with the delete operation included it takes 2 hours.
    How can we reduce this overhead?
    Dropping the indexes and recreating them is not an option!

    My method is logical if you are planning to migrate at least 90% of all data from X to Y. If so,
    for the new X table you only need at most 10% of the size of the current X table (in my previous post, by saying "the same as" I did not mean the data, I meant only the DDL except storage). Moreover, after renaming X to X_ and copying the "data not to be copied to Y" from X_ to X, you may drop unnecessary indexes from X_ to free up some space.
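
    As a rough illustration of the rename-and-copy-back approach described above (the keep-row predicate and the exact names are assumptions for the sketch, not part of the original post):
    -- keep the old, fully indexed table as the archive and rebuild X with only the rows that stay
    ALTER TABLE x RENAME TO x_;

    CREATE TABLE x AS
      SELECT *
      FROM   x_
      WHERE  archive_flag = 'N';   -- hypothetical predicate selecting the rows NOT being archived

    -- recreate the indexes and constraints X needs, then drop the indexes on X_
    -- that are no longer required, to free up space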

  • Regarding bulk delete..

    Hi,
    I have around 9,400k records that should be deleted. I am using a FORALL delete with BULK COLLECT; here is the code for reference.
    declare
      type cust_array_type is table of number
        index by binary_integer;
      employee_array cust_array_type;
      v_index number;
    begin
      select empl_no bulk collect
        into employee_array from employee_history;

      FORALL i IN employee_array.FIRST..employee_array.LAST
        delete from ord where empl_no = employee_array(i);

      v_index := employee_array.FIRST;
      for i in employee_array.FIRST..employee_array.LAST loop
        dbms_output.put_line('delete for employee '
                             ||employee_array(v_index)
                             ||' deleted '
                             ||SQL%BULK_ROWCOUNT(v_index)
                             ||' rows.');
        v_index := employee_array.NEXT(v_index);
      end loop;
    end;
    Still the data is not deleting; please advise.

    user13301356 wrote:
    but a normal delete is taking more time, so to improve performance I am using a bulk collect delete.
    So what is the best approach: a bulk delete or a normal delete?
    Look at it in simple terms...
    Method 1: Delete all Rows
    Method 2: Query all Rows then Delete all Rows
    which one, logically to you, is doing more work than the other?
    If your delete is taking a long time, that's because:-
    a) you haven't got suitable indexes to determine the records to be deleted on the target table
    b) you haven't designed the table to use partitions, which could then simply be truncated (a costed option; a licence is required for partitioning)
    c) you are just deleting a lot of data and it's going to take time.
    No amount of PL/SQL coding around the basic SQL of a delete is going to improve the performance. You can't make any code delete the rows faster than a delete statement itself.
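
    To make that point concrete, a hedged sketch of the plain set-based delete (table and column names are taken from the posted code above):
    DELETE FROM ord
    WHERE  empl_no IN (SELECT empl_no FROM employee_history);
    COMMIT;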
