Bulk process advice

Here is my sample design of the bulk process, with a limit on the number of rows fetched per batch:
DECLARE
    n_rows_to_process NUMBER := 1000;
    CURSOR my_cursor IS
        SELECT XXXXX;
BEGIN
    OPEN my_cursor;
    LOOP
        FETCH my_cursor BULK COLLECT INTO XXXX LIMIT n_rows_to_process;
        FORALL i IN 1 .. PUR_CCN_TAB.COUNT
            INSERT XXXX;
        COMMIT;
        -- return;
        EXIT WHEN my_cursor%NOTFOUND;
    END LOOP;
    CLOSE my_cursor;
END;
This is for one cursor SELECT only, and it inserts the rows completely into the temp table.
So, when I put 2 cursors into this process, it won't run the second cursor because the first cursor has already exited the process.
Here is my sample design for 2 cursors:
DECLARE
    n_rows_to_process NUMBER := 1000;
    CURSOR my_cursor IS
        SELECT XXXXX;
    CURSOR my_cursor2 IS
        SELECT XXXXX;
BEGIN
    OPEN my_cursor;
    LOOP
        FETCH my_cursor BULK COLLECT INTO XXXX LIMIT n_rows_to_process;
        FORALL i IN 1 .. PUR_CCN_TAB.COUNT
            INSERT XXXX;
        COMMIT;
        -- return;
        EXIT WHEN my_cursor%NOTFOUND;
    END LOOP;
    CLOSE my_cursor;

    OPEN my_cursor2;
    LOOP
        FETCH my_cursor2 BULK COLLECT INTO XXXX LIMIT n_rows_to_process;
        FORALL i IN 1 .. PUR_CCN_TAB.COUNT
            INSERT XXXX;
        COMMIT;
        -- return;
        EXIT WHEN my_cursor2%NOTFOUND;
    END LOOP;
    CLOSE my_cursor2;
END;
Is there any SQL command that can be used to complete the first loop and then continue with the second loop?
Can we join the 2 cursors in the bulk process?

You are mixing up an explicit cursor and FETCH with BULK COLLECT, a FORALL within a loop, and an out-of-scope exit.
Forget the cursor, OPEN, LOOP, FETCH and EXIT.
Just do the SELECT ... BULK COLLECT and the FORALL.
Your second SELECT should then be just fine ;)
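For illustration, a minimal sketch of that suggestion, assuming Oracle 9iR2 or later for record binds in FORALL and assuming both result sets fit comfortably in memory (the source and target table names below are placeholders, not taken from the original post); for very large sets, keep the LIMIT-based fetch instead:
DECLARE
    -- Placeholder types and names; substitute the real row types and tables.
    TYPE t_tab1 IS TABLE OF src_table1%ROWTYPE;
    TYPE t_tab2 IS TABLE OF src_table2%ROWTYPE;
    l_tab1 t_tab1;
    l_tab2 t_tab2;
BEGIN
    -- First data set: one SELECT ... BULK COLLECT, then one FORALL insert.
    SELECT * BULK COLLECT INTO l_tab1 FROM src_table1;
    FORALL i IN 1 .. l_tab1.COUNT
        INSERT INTO temp_table1 VALUES l_tab1(i);

    -- Second data set: runs unconditionally after the first, with no cursor or EXIT involved.
    SELECT * BULK COLLECT INTO l_tab2 FROM src_table2;
    FORALL i IN 1 .. l_tab2.COUNT
        INSERT INTO temp_table2 VALUES l_tab2(i);

    COMMIT;
END;
Because there is no OPEN/FETCH/EXIT, nothing in the first block can stop the second SELECT from running.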

Similar Messages

  • Post-processing advice for printing a photobook

    Hi
    I'm planning on getting a photobook made using Snapfish here in the UK.
    My images were taken with my Canon EOS 400D, and I have Photoshop Elements.
    I have the majority in RAW format.
    I would like some advice as to the best way of going about processing these images to get the best results in the photobook.
    After I do my processing in Elements, is it best to resize the pics or let Snapfish's system do this for me automatically?
    If I do resize, at what point should I apply sharpening?
    Any other tips, advice, experiences all welcome.
    Many thanks
    Steven

    Steven,
    1. Check with this vendor. You may have to upload JPEG files
    2. I resize and/or crop to intended size in order to control this step. The machines will crop a bit as it is, in my experience. I learned this the hard way when I applied the text tool too near an edge, and a portion of the data was chopped off.
    3. Sharpen as the final step in your work flow.
    Ken

  • Web Service for bulk processing (inserts mostly)

    In our case what we have is an interface table that needs to be populated by the web service.
    On the web service provider side, we have created an EO, VO and AM for this interface table and exposed the AM as a web service using the create and process operations.
    We tested our web service to insert a single row and it works fine. Consumers could do this in a loop to take in multiple rows but that may not be very efficient.
    The web service consumer will be calling the web service based on the customer's scheduling of a job. So, at the time of the web service call, the number of rows to send (from the consumer's tables) could be anything (100, 1k or 10k rows) depending on the activity on that system.
    What will be the most performant way for the consumer to populate data into our SDO and call our web service?
    Are there examples that show how we can send bulk data using a web service?
    This web service can also be used by 3rd party applications (other than our internal applications). They may send data in an XML format. How can that be handled in this web service without having to do different things for different types of consumers (internal or external)?

    I will not comment on "IIS recycles its worker processes". I don't think you have ever said that IIS is the web server used by your 3rd party to host their web service. We (at least I) don't know whether your 3rd party's web service is hosted on IIS. It's some "Web Server"; it could be IIS (if it's Microsoft based), Apache, or something else.
    Coming back to your question: when the error says "The request could not be understood by the server due to malformed syntax" while you're sending the message as expected by the 3rd party service, then it has nothing to do with your end; the error is returned by your 3rd party. And as you have mentioned, you are tracing the messages through Fiddler and you see the messages are constructed fine, exactly as expected by your 3rd party. Don't be confused: it's nothing on your side, whether it's the first deployment or the last. Check with your 3rd party.
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

  • FI/AR:Bulk processing of UK cheques

    LS,
    our client processes customer payments using UK cheques. This process involves customers sending paper cheques, which need to be matched with the outstanding balance of the customer.
    In the first step the paper cheques need to be posted to a customer payment account. In the second step the customer account needs to be cleared against the invoice.
    My question is about bulk entry of these paper cheques to post to customer account: which transaction can be used for that? I can only find a manual transaction, which posts one cheque at a time, where hundreds need to be processed per day.
    Can someone point me in the right direction?
    Various options have been discussed, like creating a Batch Input Map, but this would require the IT or Finance department to start the batch input map, whereas the process really needs to be completed in a simple way by the Customer Service department. Finance will look after reconciling payments with invoices.
    any pointers, tips are highly appreciated,
    thanks
    Jaap
    Edited by: Jaap Heimans on Dec 18, 2007 5:10 PM

    Hi,
    This happens quite often for me. I would assume you have set the activation to run in parallel. If so, enough background processes are not available for the execution. Make sure you do not run any other background jobs when you execute this activation step.
    Hope this helps.
    Ravi.

  • Seeking credit card payment processing advice

    Hello!
    My company has been doing ecommerce for years now using a
    custom CF shopping cart to collect the customer’s data, then
    we use CyberSource for the US and UK and a company called
    Sogenactif (for France and a few others) to do the credit card
    payment processing once the order has transferred to our internal
    accounting system. We have subsidiary offices all over the world,
    and supporting all of those currencies is quite a challenge. We
    have sites in Euros, Swiss Francs, Swedish Kronor, Norwegian Kroner,
    Yen, UK pounds, etc and setting up any more individual payment
    processors is not only a pain but unscalable and downright unruly.
    Does anyone know of a truly global payment processor? Does
    anyone have any helpful advice?
    Since we have a Merchant Account I’m not looking to use
    a Third Party Processor like PayPal…
    Thank you!!!

    It's nothing personal against PayPal, I'm currently
    investigating their PayFlow Gateway:
    https://www.paypal.com/cgi-bin/webscr?cmd=_profile-comparison which
    looks like a payment processor sans the customer interaction with
    PayPal. Unfortunately I can't seem to find any specifications on
    European currencies and communications with credit card banks.
    Since we have a merchant account we only need a service to
    authorize and bill users' credit cards, no shopping basket, no
    processing through the Service's own bank.
    Has anybody used PayFlow gateway?

  • Aperture 3 Bulk process workflow

    I'm having a work-flow issue in aperture 3 that I hope you can suggest a
    way out of.
    My typical workflow:
    1)import
    2)Stack similar images
    3)Pick favorite from each stack
    4)Process pick from each stack.
    That all works great. But now I want to create alternate versions of
    each of my processed picks. For instance, I want to offer my "client"
    B&W versions of each, extra saturated versions of each, desaturated
    versions of each, etc. On a one by one basis this is very easy. But
    I'm looking for a way to do it in bulk. I'm not looking to put a lot of
    energy into these extra versions, since they are sort of bonus
    versions anyway.
    Some approaches that haven't worked...
    1) Create another album, (say called B&W) and drag all my picks into
    that. Apply adjustments to all. Problem: modifies the same versions
    that appear in the original project too
    2) Export picks. Apply B&W to all. Export B&W picks. Undo apply of B&W
    to all. Repeat for each kind of modification I want (desaturated,
    etc). Problem: Inelegant, don't get to keep the modified versions.
    3)Select all the picks, duplicate version. Apply adjustments. Now I
    have both the pick version and the B&W version, as I wanted. Problem:
    Now the top of each stack is the alternate B&W version, not the
    original pick.
    What am I missing? I know lots of photographers offer clients a disk
    with their pick images, plus folders of B&W, sepia, etc. How would one
    do that in Aperture 3?
    Thanks in advance for your help.


  • Patching and backup process advice

    Hello all ,
    I have solaris 8 and I had a few questions about patching and backup.
    1. Monthly I download the Sun "Recommended and Security" patch set, then unzip it and run patchadd.
    When I review the terminal output while this is running, I get quite a few "unable to install patch, exit code 8" messages.
    Should I be getting this?
    Is there another way of applying patches (other than individually)?
    2. What are good backup practices? Copy /etc and /var directories to a raid? Or is there a Sun tool for backups?
    Thanks for any advice!

    I believe this is a list of exit codes for patches. You'll most likely see a lot of return code 8 and 2. Not typically an issue.
    # Exit Codes:
    # 0 No error
    # 1 Usage error
    # 2 Attempt to apply a patch that's already been applied
    # 3 Effective UID is not root
    # 4 Attempt to save original files failed
    # 5 pkgadd failed
    # 6 Patch is obsoleted
    # 7 Invalid package directory
    # 8 Attempting to patch a package that is not installed
    # 9 Cannot access /usr/sbin/pkgadd (client problem)
    # 10 Package validation errors
    # 11 Error adding patch to root template
    # 12 Patch script terminated due to signal
    # 13 Symbolic link included in patch
    # 14 NOT USED
    # 15 The prepatch script had a return code other than 0.
    # 16 The postpatch script had a return code other than 0.
    # 17 Mismatch of the -d option between a previous patch
    # install and the current one.
    # 18 Not enough space in the file systems that are targets
    # of the patch.
    # 19 $SOFTINFO/INST_RELEASE file not found
    # 20 A direct instance patch was required but not found
    # 21 The required patches have not been installed on the manager
    # 22 A progressive instance patch was required but not found
    # 23 A restricted patch is already applied to the package
    # 24 An incompatible patch is applied
    # 25 A required patch is not applied
    # 26 The user specified backout data can't be found
    # 27 The relative directory supplied can't be found
    # 28 A pkginfo file is corrupt or missing
    # 29 Bad patch ID format
    # 30 Dryrun failure(s)
    # 31 Path given for -C option is invalid
    # 32 Must be running Solaris 2.6 or greater
    # 33 Bad formatted patch file or patch file not found
    # 34 The appropriate kernel jumbo patch needs to be installed
    Back up your system before adding any patches (ufsdump). It's recommended that these patches are added in run level one.
    There should be a script called something like "install_cluster" in the 8_Recommended directory that you can use to add the patches.
    By default the patches create backout info in /var, but I usually disable this (yeah, livin' on the edge) with the "-nosave" option.

  • Bulk processing of applicants from one status to another

    Hi experts,
    Can anyone please explain how we can change the status of applicants from one status to another?
    ex:
    150 applicants' status changed to "Process"
    then change status from "Process" to "Offer employment"
    then change status from "Offer employment" to "Prepare to hire"
    Regards.

    Ajay, that's my point exactly. There are only two options which you can execute for a group of applicants (Reject or Put on hold), and these do not fulfil the purpose. What I want is to change the status to something other than Reject or Put on hold, such as shortlisted for test, interview, process, etc...
    How are these processed? Would I have to build an ABAP application for this? Please suggest.
    Regards.

  • Script or bulk process to attempt delivery of journal NDRs

    I am working for a client who has ended up with a significant number of journal NDRs due to an issue that can potentially be easily solved. If the issue can be solved, I'd like to attempt delivery of all of the items again. Given the large number, it's unlikely
    I'd have time to attempt to resend each journal report.
    Outlook Web App has a banner for each email item displaying "to send this message again, click here", which is very useful... per item. I've tried to access the email item using Exchange Web Services to see if this shows up as a method or action,
    but can't seem to find anything obvious.
    Does anyone know of a way to do this using a script?

    simple example:
    #target indesign
    var destFolderPath = Folder.selectDialog('DestFolder').absoluteURI + '/';
    var currDoc = app.activeDocument; // an already prepared document (CSV connected) is open and frontmost
    currDoc.dataMergeOptions.createNewDocument = true;
    var maxRange = currDoc.dataMergeProperties.dataMergePreferences.recordRange.split('-')[1]; // count of recordRanges
    // one file for each record
    for (var i = 0; i < maxRange; i++) {
        with (currDoc.dataMergeProperties.dataMergePreferences) {
            recordSelection = RecordSelection.ONE_RECORD;
            recordNumber = i + 1;
        }
        currDoc.dataMergeProperties.mergeRecords(); // the merged result becomes the active document
        app.activeDocument.save(File(destFolderPath + (i + 1) + '.indd'));
        app.activeDocument.close();
    }

  • What is the best approach to process data on row by row basis ?

    Hi Gurus,
    I need to code a stored procedure to process sales_orders into invoices. I
    think that I must do a row-by-row operation, but if possible I don't want
    to use a cursor. The algorithm is below:
    for all sales_orders with status = "open"
        check the credit limit
        if over credit limit -> insert row into log_table; process next order
        check for overdue
        if there is an overdue invoice -> insert row into log_table; process next order
        check all order_items for stock availability
        if there is an item without enough stock -> insert row into log_table; process next order
        if all checks above are passed:
            create invoice (header + details)
    end_for
    What is the best approach to process data on a row-by-row basis like the above?
    Thank you for your help,
    xtanto

    Processing data row by row is not the fastest method out there. You'll be sending many more SQL statements to the database than needed. The advice is to use SQL, and if that is not possible or too complex, use PL/SQL with bulk processing.
    In this case a SQL only solution is possible.
    The example below is oversimplified, but it shows the idea:
    SQL> create table sales_orders
      2  as
      3  select 1 no, 'O' status, 'Y' ind_over_credit_limit, 'N' ind_overdue, 'N' ind_stock_not_available from dual union all
      4  select 2, 'O', 'N', 'N', 'N' from dual union all
      5  select 3, 'O', 'N', 'Y', 'Y' from dual union all
      6  select 4, 'O', 'N', 'Y', 'N' from dual union all
      7  select 5, 'O', 'N', 'N', 'Y' from dual
      8  /
    Table created.
    SQL> create table log_table
      2  ( sales_order_no number
      3  , message        varchar2(100)
      4  )
      5  /
    Table created.
    SQL> create table invoices
      2  ( sales_order_no number
      3  )
      4  /
    Table created.
    SQL> select * from sales_orders
      2  /
            NO STATUS IND_OVER_CREDIT_LIMIT IND_OVERDUE IND_STOCK_NOT_AVAILABLE
             1 O      Y                     N           N
             2 O      N                     N           N
             3 O      N                     Y           Y
             4 O      N                     Y           N
             5 O      N                     N           Y
    5 rows selected.
    SQL> insert
      2    when ind_over_credit_limit = 'Y' then
      3         into log_table (sales_order_no,message) values (no,'Over credit limit')
      4    when ind_overdue = 'Y' and ind_over_credit_limit = 'N' then
      5         into log_table (sales_order_no,message) values (no,'Overdue')
      6    when ind_stock_not_available = 'Y' and ind_overdue = 'N' and ind_over_credit_limit = 'N' then
      7         into log_table (sales_order_no,message) values (no,'Stock not available')
      8    else
      9         into invoices (sales_order_no) values (no)
    10  select * from sales_orders where status = 'O'
    11  /
    5 rows created.
    SQL> select * from invoices
      2  /
    SALES_ORDER_NO
                 2
    1 row selected.
    SQL> select * from log_table
      2  /
    SALES_ORDER_NO MESSAGE
                 1 Over credit limit
                 3 Overdue
                 4 Overdue
                 5 Stock not available
    4 rows selected.
    Hope this helps.
    Regards,
    Rob.
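    If a single SQL statement ever becomes impractical, a rough sketch of the PL/SQL bulk alternative mentioned above could look like this, reusing the indicator columns from the example (a sketch only; the per-row inserts could likewise be batched with a FORALL per target table):
    declare
      type t_orders is table of sales_orders%rowtype;
      l_orders t_orders;
    begin
      -- one bulk fetch of all open orders instead of a row-by-row cursor loop
      select * bulk collect into l_orders
        from sales_orders
       where status = 'O';
      for i in 1 .. l_orders.count loop
        if l_orders(i).ind_over_credit_limit = 'Y' then
          insert into log_table values (l_orders(i).no, 'Over credit limit');
        elsif l_orders(i).ind_overdue = 'Y' then
          insert into log_table values (l_orders(i).no, 'Overdue');
        elsif l_orders(i).ind_stock_not_available = 'Y' then
          insert into log_table values (l_orders(i).no, 'Stock not available');
        else
          insert into invoices values (l_orders(i).no);
        end if;
      end loop;
    end;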

  • Bulk email from the database

    Good morning,
    Running Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production... (we'll be upgrading to 11g soon, but for now, stuck with 9i)
    Our clients have requested an email subscription service to be implemented for our news web site. The anticipated load will be:
    one module to support the following:
    - 3000 emails, with up to 50 instant email notifications an hour... so 150,000 emails delivered per hour. Email is sent as soon as new article is posted. (articles are embargoed and go live once embargo has passed)
    The other module is an open subscription form with double opt-in for the general public... with choices of instant email notification, daily digest or weekly digest of news article postings. The anticipated load is difficult to estimate for this one, but we could expect 50,000+ subscriptions... with up to 500 article postings per day. End users will have the ability to customize their subscription based on audience, department, etc...
    In summary, we're looking to implement a solution that can deliver hundreds of thousands of emails a day.
    As an Oracle developer, I always look first to the DB for a solution. I know in 10g and up, utl_mail is available... not so in 9i. I do have a sample mail package from OTN that seems to do the trick. Emails will need to be sent as HTML, since they will contain a small image reference.
    Some potential ideas I have considered so far:
    for instant email notifications:
    - setting up queue tables and DBMS_JOB to monitor every minute and send email out to recipients using PL/SQL mail procedure
    - using Oracle AQ to manage queues, publish the payload with complete article information, subscriber will dequeue messages and send out using same PL/SQL email procedure. Messages in the queue will have delay set to article embargo date, to ensure articles are not emailed ahead of publishing.
    I have never worked with Oracle AQ before, but it seems to offer some benefits and more intelligence than a custom solution. I have also considered setting up a new Oracle instance for sending out emails, to offload some of the work from the main instance feeding the news web site.
    The daily and weekly digest emails are not as big a concern at this point, since they will be processed during off-peak hours and will run once a day/week... I thought the Oracle AQ solution would be an elegant and scalable solution for the instant notifications...
    My major concern at this point is scalability and performance... I will rely on bulk processing in SQL to collect data... looping through the arrays to send out emails, as well as building up the email objects and sending them out in time, is a concern.
    Given the potential volume we'll be dealing with, is a solution in Oracle the proper way to go? Our organization doesn't have an enterprise solution for this of course, so we have to build it from scratch. Other environments/tools at our disposal: Oracle 10g (10.1.3) application servers running Java... we could use JavaMail... our current set-up is 2 load-balanced application servers, with multiple OC4J containers running on each.
    Thanks for any tips or advice you may have.
    Stephane

    Bill... thank you for taking the time to respond in such a detailed manner. This is greatly appreciated. I have passed this along to our DBAs and messaging experts for review.
    I'm tasked with modeling the DB and optimizing it so it can achieve the target performance levels.
    Billy  Verreynne  wrote:
    Each SMTP conversation (from UTL_SMTP or UTL_MAIL) requires a socket handle. This is a server resource that needs to be allocated. The o/s has limits on the number of server resources a single process can use and that a single o/s account can use. It does not help that you have a scalable design and scalable code if the server cannot supply the resources required to scale. So server capability and config are quite important.
    We've gotten our Unix specialists to look into this... we've been told our current platform is limited, and no further upgrades will be allocated, since we'll be moving to a newer platform in the "near future".
    There's also the physical network itself - as those 100,000 mails will each have a specific size and need to be transported from the client (Oracle db server) to the server (mail/smtp server). The network must have the capacity and b/w to support this in addition to the current loads it is handling.
    Our network analysts will be putting us on a segregated network (subnet) to avoid impacting the rest of the organization... although bandwidth is shared at the end of the day, we'll be somewhat isolated, with perhaps even our own firewalls and load balancers.
    You will need to look at a couple of things to decide on how to put all of this together. How are e-mails created in the database? Can they be immediately transmitted after being committed? Is the actual Mime body of the e-mail created by the transaction, or is it normal row and column data that afterward needs to be converted into a Mime body for mailing?
    The scheduling of emails is tricky... articles are sometimes posted (committed to the DB) but still under embargo... we plan to use this embargo date (the date on which the article goes live/public) as the "delay" in the job scheduling. At that point, the article is emailed to thousands.
    For instant notifications, all recipients get the same content, with the article content directly in the email. There are custom pieces to include, e.g. an unsubscribe link with a unique identifier, an edit-subscription link, etc... but these bits could be pre-generated, stored with the email subscriber info, and appended to the mail body. No images or other binary files are embedded or attached... so we're dealing with mostly text/html in the body.
    Scalability is a direct function of design. So you will need to design each component for scalability. For example, do you create the Mime body dynamically as part of the e-mail transmission code unit? Or do you have a separate code unit that creates and persists Mime bodies that are then transmitted by the transmission code unit?
    Instant notification HTML email composition is a bit simpler. It gets tricky with daily, weekly and monthly digests. Here we have to assemble thousands of custom email bodies, based on subscription options (up to 5 custom fields to configure subscription content)... from there assemble each individual mail body based on current subscriber options, then send the email.
    Mail bodies are potentially different for each individual subscriber, given the various permutations of selections, so I really don't see how a body can be persisted for re-use when emailing; each may be single use only. In this case, we're looking at assembling thousands of emails, then emailing each one in a loop.
    Do you for example queue a single DBMS_JOB per e-mail? There are overheads in starting a job. So do you pay this overhead per mail, or do you schedule a job to transmit 100 e-mails and pay this overhead once per 100 mails?
    For instant notifications, we'd be queuing a job for every article posted. From there, every subscriber signed up for instant notifications, whose subscription configuration matches the article, will be retrieved and the email sent.
    So there are a number of factors to consider in terms of design: how to deal with Mime bodies, how to deal with exception processing, how to parallelise processing and so on. One factor will need to be how to deal with catch-up processing - as there will be a failure of some sort at some stage that means processing is some hours behind. And this needs to be factored into the design.
    An email job that fails half-way through concerns us... how do we proceed from where we left off, etc... we may have to keep track of job numbers, etc...
    The other option we're considering is clustered Oracle 10gR3 application servers... to process and send the emails, using JavaMail... there is still an issue with Oracle handling the query volume required to assemble the customized emails for each subscriber (which could reach 50,000 within a year or two)...
    I would not select that as an architecture. This moves the data and application away from one another - into different process boundaries and even across hardware boundaries.
    When using PL/SQL, both data processing (SQL layer) and conditional processing and logic (PL layer) are integrated into a single server process. There is no faster way or more scalable way of combining code and data in Oracle. It does not matter how capable that Java platform/architecture is. For that Java code to get SQL data means shipping that data across a JDBC connection (and potentially between servers and across network infrastructure).
    In PL/SQL, it means a simple context switch from PL to SQL to fetch the data.. and even that we consider "slow" and mitigate using bulk processing in PL/SQL in order to decrease context switching.
    The fact that the data path for a Java app layer is a lot longer than for PL/SQL automatically means that Java will be slower.
    Totally agree with this. We're having a meeting this morning with all parties to review and discuss the points you have raised, and to see if the required resources can be allocated on the Unix side to accommodate the potential load.
    I'm looking at leveraging materialized views (to pre-assemble content), parallelism (query and procedural), Advanced Queuing (seems complex)...
    AQ is not that complex, and perhaps not needed. You will however need a form of parallelism in order to run a number of e-mail transmission processes in parallel. The question is how you tell each unique process which e-mails to transmit, without causing serialisation between parallel processes.
    This can be home-rolled parallelism as shown in {message:id=1534900} (from a technique posted by Tom on asktom that is now an 11.2 feature). You can also use parallel pipelined tables. Or use AQ. I'm pretty sure that a solid design will support any of these - modularising the parallel processing portion of it and allowing different methods to be used to test drive and even benchmark the parallel processing component.
    If using AQ, we're considering a separate Oracle instance in a different AIX partition perhaps, which could manage the email function. Our main instance (which feeds our public web site, and stores all data), would push objects onto the queue, and items would be dequeued on the other end in the other Oracle instance.
    It however sounds like a very interesting project. Crunching lots of data and dealing with high processing demands... that is the software engineer's definition of fun. :-)
    Indeed... I wouldn't consider myself as a software engineer at this point, but perhaps after this is done, I'll have earned my stripes. ;-)
    Edited by: pl_sequel on Jul 13, 2010 9:57 AM
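    As a rough, hypothetical illustration of the bulk-collect-then-loop pattern discussed above (table, column and procedure names are placeholders; send_article_mail stands in for whatever UTL_SMTP-based routine ends up doing the actual transmission):
    declare
      l_article_id number := 12345;  -- placeholder article id
      type t_addr_tab is table of subscribers.email_address%type;
      l_addresses t_addr_tab;
      cursor c_subs is
        select s.email_address
          from subscribers s
         where s.wants_instant = 'Y';  -- plus whatever audience/department filters apply
    begin
      open c_subs;
      loop
        fetch c_subs bulk collect into l_addresses limit 1000;  -- batched fetch keeps memory bounded
        for i in 1 .. l_addresses.count loop
          -- send_article_mail is a hypothetical wrapper that builds or reuses the HTML
          -- body for the article and transmits it to one recipient over SMTP
          send_article_mail(p_article_id => l_article_id,
                            p_recipient  => l_addresses(i));
        end loop;
        exit when c_subs%notfound;
      end loop;
      close c_subs;
    end;
    Parallelising this then becomes a question of how the recipient list is split across several such sessions, as discussed above.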

  • Bulk size seems to have no effect

    Hi folks,
    Any idea why bulk size setting of a mapping seems not to have any effect?
    My settings are as advised in the documentation:
    Bulk size: 1000 (default)
    Default Operating Mode: Row based
    Bulk processing code: selected
    Source database (remote) Oracle 8.1
    Target database & OWB database 10gR2
    Nevertheless, when I execute a mapping, TOAD doesn't show me any row count until the whole table has been loaded. As I understand it, the load should be done in batches of 1000 rows, right? Could it be that a database setting prevents the bulk size parameter from working like it should?
    Thanks,
    Ilmari

    Hi there,
    the script generated contained the elements you mentioned David, thanks.
    I was trying to commit every 1000 rows and to process approximately 10M rows.
    I wasn't able to solve it; it still doesn't commit in the meantime. However, some database parameters were probably changed and it no longer ends in an error. So not solved, but I somehow got past it. Not ideal, but it works.
    BR,
    Ilmari

  • How to use BULK COLLECT in Oracle Forms 11g

    Forms is showing the error "Feature is not support in Client Side Program" when I am trying to implement BULK COLLECT in Forms 11g.
    I need to load the full data from the DB into my form because using a cursor is very slow...
    Is there any method/workaround to achieve this?

    declare
      type arr is table of emp%rowtype;
      lv_arr arr;
    begin
      select * bulk collect into lv_arr from emp;
      /* code here to process the data and write it to a file */
    end;
    Unless you are inserting/updating the data you are holding in the array into a database table, I don't think there is much performance gain in using bulk collect in conjunction with writing a file. Bulk processing will increase performance by minimizing context switches from the SQL to the PL/SQL engine, nothing more, nothing less.
    In any case, bulk processing is not available in Forms; if you really need to make use of it, you need to do it in a stored procedure.
    cheers
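    A minimal sketch of that suggestion, assuming a hypothetical directory object DATA_DIR that points to a writable path on the database server: the BULK COLLECT and the file writing both happen inside the database, and the form only calls the procedure.
    create or replace procedure dump_emp_to_file is
      type emp_tab is table of emp%rowtype;
      l_emps emp_tab;
      l_file utl_file.file_type;
    begin
      -- single bulk fetch on the server side; no client-side restriction applies here
      select * bulk collect into l_emps from emp;
      l_file := utl_file.fopen('DATA_DIR', 'emp.txt', 'w');  -- DATA_DIR is an assumed directory object
      for i in 1 .. l_emps.count loop
        utl_file.put_line(l_file, l_emps(i).empno || ',' || l_emps(i).ename);
      end loop;
      utl_file.fclose(l_file);
    end dump_emp_to_file;
    A form trigger can then simply call dump_emp_to_file instead of attempting the BULK COLLECT on the client side.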

  • Error Message in Bulk Exception.

    Hi
    When I use a normal exception, SQLERRM gives me the complete error along with the column name.
    Ex:
    ORA-01400: cannot insert NULL into ("BENCHMARK"."T6"."X")
    But
    when I use a bulk exception, SQLERRM does not give me the complete error along with the column name.
    Ex:
    ORA-01400: cannot insert NULL into ()
    Is it that bulk exception error messages are less informative? These error messages do not help me find out which column is violating the NOT NULL constraint, and hence they are not of much use.
    Is there any way through which I can get the complete error message, as I get with a normal exception?
    I am using 9i version.
    Regards
    Nikhil

    I couldn't find an exact solution to your problem, but one idea that came to mind is to resubmit the statement for the problematic rows (without using bulk processing) and save the error messages.
    SQL> declare
      2    bulk_errors exception;
      3    pragma exception_init ( bulk_errors, -24381 );
      4   
      5    j number;   
      6   
      7    type numbers is table of number;
      8    ids numbers;
      9  begin
    10    select case when mod(abs(dbms_random.random),3)=0 then null else 1 end id bulk collect into ids
    11      from dual
    12    connect by level < 10;
    13   
    14    forall i in ids.first .. ids.last save exceptions
    15      insert into test01 values ( ids(i) );
    16    exception when bulk_errors then
    17       for i in 1..sql%bulk_exceptions.count loop
    18         j := sql%bulk_exceptions(i).error_index;
    19         begin
    20            insert into test01 values ( ids(j) );
    21         exception when others then
    22            dbms_output.put_line( 'Row #'||j||'. '||substr(sqlerrm,1,instr(sqlerrm,chr(10))-1) ); -- substr to get only the first line
    23         end;
    24       end loop;
    25  end;
    26  /
    Row #1. ORA-01400: cannot insert NULL into ("SCOTT"."TEST01"."ID")
    Row #3. ORA-01400: cannot insert NULL into ("SCOTT"."TEST01"."ID")
    Row #6. ORA-01400: cannot insert NULL into ("SCOTT"."TEST01"."ID")
    Row #7. ORA-01400: cannot insert NULL into ("SCOTT"."TEST01"."ID")
    Row #8. ORA-01400: cannot insert NULL into ("SCOTT"."TEST01"."ID")
    Row #9. ORA-01400: cannot insert NULL into ("SCOTT"."TEST01"."ID")

  • [CS3] Is there a way to stop the modal alert on EVERY SINGLE ERROR during a bulk update?

    I've inherited quite a mess I'll admit -- I've got ~ 8000 pages each with different Dreamweaver templates, with the entire site being in a varying state of disrepair.  I need to perform a global change -- I'm thinking the way to go about this is to update the templates (there are ~40 of them, not nested) and let the process run through. However, I've encountered difficulties.
    After about ~2300 files loaded into the site cache, dreamweaver crashes -- there is no error, it's an unhandled exception.... it consistently crashes at this point.  I'm not sure if this is a specific page causing the problem, or if it's that I'm trying to load 8K files into the site cache....  So anyway, with it crashing consistently trying to build the site cache, I basically press "stop" whenever it tries, and that seems to abort the building and the 'update pages' screen comes up and tries to update the files.
    My next problem is that there are countless errors in each of these pages and templates -- ranging from the 'template not found' when an old or outdated file is referencing a template that has been deleted -- to various mismatched head or body tags.  Of course, and this is probably the most annoying thing I've ever encountered,  this bulk process that should run over 1000s of files without interaction seems to feel the need to give me a modal alert for every single error.  The process stops until I press 'OK'
    I'm talking update 5-10 files, error... hit 'return', another 5-10 files are processed, another alert, hit 'return' -- rinse and repeat.  Oh, and I made the mistake one time of hitting 'return' one too many times -- oh yes, this will STOP the current update because default focus is on the 'Stop' button, for whatever reason. And if I want to get the rest of the files, I need to run it again -- from the start.
    Is there a way to silence these errors?   They're already showing up in the log, I wouldn't mind going through it once the entire site has been updated to clean things up ... but I'm updating quite literally thousands of pages here, I would wager that 1/3 of them have some form of an error on it... do I really need to press "OK" two thousand times to do a bulk update with this program?
    Any tips from the pros?

    This one might help.
    Allow configuration of Automatic Updates in Windows 8 and Windows Server 2012
    Regards, Dave Patrick ....
    Microsoft Certified Professional
    Microsoft MVP [Windows]
    Disclaimer: This posting is provided "AS IS" with no warranties or guarantees, and confers no rights.
