Bulk ALLOCATION

We are using the ALLOCATION logic to allocate data per Cost Pool to three other dimensions: Product, Region and Customer.
Each Cost Pool (1000 members) is ALLOCATED to Product (4000 members), Region (200 members) and Customer (1000 members).
Extreme case: one Cost Pool gets ALLOCATED to 4000 x 200 x 1000 base members = 800 million records posted.
Extreme case for all Cost Pools = 800 billion records posted.
If we restrict the allocation based on a driver (only where there is data, Sales Value for example), it narrows this down to about 18 million records posted per Cost Pool.
Running this allocation for all Cost Pools = 18 billion records posted in total.
We have the allocation logic working at a speed of about 1 minute per 1 million records posted. Thus, ALLOCATING all Cost Pools will run for about 300 hours.  
How can we improve this?
(We are considering excluding the Customer dimension from the allocation but would only do so as a last resort.)

Hi Meznert,
I don't think BPC with MSAS can handle 18 billion records properly. (Even 800 million records is not a small number.)
I think you need to restructure your application by region so that the number can be decreased, and restructure the hierarchy of each dimension.
Thank you.
James Lim.

Similar Messages

  • Missing Actions in Sales assistant of MSA

    Hi Gurus,
    We defined a sales cycle and phases (4 phases), and assigned these to an opportunity transaction type. We assigned actions to these phases.
    In SAP GUI, we are able to see all the actions assigned for a phase but in MSA we are only able to see the actions that are assigned to the first phase of the sales cycle. For the remaining phases we are not able to see the assigned actions.
    Please help.
    Thanks.

    Hi Wolfhard,
    I synchronized the objects from CRM to CDB and then extracted details for the entire site, but the issue persists.
    Following are the subscriptions for the site:
    Activities (by Employee)
    Activity Journal Customizing
    Activity Recurrence
    Adapter Test Publication (bulk)
    Allocations & Target Group Header
    Authorization 1 subscription
    Authorization 2 subscription
    Authorization subscription
    Business Content Provider
    Business transaction customizing subscription
    Classification subscription
    Client Registry
    Connection Handler
    Customer & Competitor Agreement Prio.
    Customer & Prospects (by Partner Number Range)
    Customer Hierarchy subscription
    Customizing  Objects (Mobile Sales specific) subscription
    Customizing Account Panning 01
    Customizing BP Hierarchies 01
    Customizing Listing
    Customizing Mobile Sales specific from R/3 subscription
    Customizing Objects 4.0 ECE
    Customizing Objects 40 SP2 subscription
    Customizing Objects 40 subscription
    Customizing Objects 50
    Customizing Objects II subscription
    Customizing Objects III subscription
    Customizing Objects IV
    Customizing Objects subscription
    Customizing ORGMAN  subscription
    Customizing Project Object
    Customizing Standard Text
    Customizing Survey Tool
    Customizing Tour Planning
    Customizing V
    Customizing_Objects_PPR
    Customizing_TTE subscription
    Employee Hierarchy
    Employee subscription
    Inbox Mapping subscription
    Key Account Management
    Layout Field Template
    Marketing Attribute Template
    Marketing Organisation
    Marketing profiles by BPs
    Marketing profiles values by BPs
    Opportunities (by Employee SG)
    Partner Group Hierarchy
    Product Customizing
    Products (by Product Type 01)
    Products (by Product Type 02)
    Products (by Product Type 03)
    Products (by Product Type 04)
    Products (by Product Type 05)
    Products (by Product Type 07)
    Promotions & Campaigns (by Object Type CAM)
    Selections & Reports
    Territories (bulk)
    Territory Hierarchy (bulk)
    User (by Employee) subscription
    Workbench (Language Independent)
    Please let me know if I missed any required subscription.
    Please help.
    Thanks.

  • Allocated ATP derivation for bulk loads and stealing

    Gurus,
    We are designing a solution where the requirement is to derive ATP for more than a million items every day. For around 250,000 items, we need to know the maximum quantity available per item, org and demand class (including stealing).
    We need to know the availability for future dates (like a horizontal plan), which includes stealing. We are using Allocated ATP and we are on R12.2.3.
    Questions:
    1) As per the document (Doc ID 150908.1) ATP API Description R12 (ATP API MRP_ATP_PUB.Call_ATP), the API can be used to derive ATP for multiple items in a single call. Is there a limit on the number of items that can be passed in a single call?
    2) Can we derive ATP values including stealing in a single call? If so, which of the output record types should be referred to: ATP_Rec_Typ, ATP_Period_Typ or ATP_Supply_Demand_Typ?
    3) Can we derive future availability of items in a single call? We saw the output of X_ATP_PERIOD, which seems to give the horizontal plan. Can we get the availability with stealing in the horizontal plan in a single ATP call itself?
    4) If the API is not a suitable approach, can you please let us know the table/logic that can be used for the derivation?
    Appreciate your help.
    Thanks,
    Ram.

    889367 wrote:
    What's the benefit of including the "IF :NEW.col1 IS NULL" statement? If I leave it out and someone tries to insert a value without using the sequence, the trigger changes the value for me. I can see that being good and bad, but it keeps them from not using the sequence.
    The benefit is that it allows you to manually assign the column value if you want to. Whether that is a 'benefit' for your use case or not is for you to decide.
    But I'm trying to decide when and if I should use this. I wouldn't consider it, but we've got Informatica developers that insist on writing dynamic SQL functions to pull values from sequences to use in inserts, because they can't reference the nextval in their workflows.
    Don't use Informatica to do something that can be done using Oracle. The strength and utility of an ETL tool is in doing things that the database either can NOT do or cannot do efficiently: for example pulling data from multiple databases and flat files for insert into a target database. The goal being to get ALL of the data into the target database as quickly and efficiently as possible. Then you can apply the full power of the target database to ALL of the data.
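    For reference, a minimal sketch of the kind of trigger being discussed, assuming a hypothetical table t with column col1 and a sequence t_seq; the NULL check lets a caller supply an explicit value while everything else defaults to the sequence:
    create or replace trigger t_col1_trg
    before insert on t
    for each row
    begin
      -- only pull from the sequence when the caller did not supply a value
      if :new.col1 is null then
        select t_seq.nextval into :new.col1 from dual;
      end if;
    end;
    /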

  • Allocated memory pool was not deleted! 1 GB memory leak is too much for me!

    Dear Sirs, I found that a DB environment that was configured to use a 1 GB cache size won't free it when closed! Why? First I tried to open and close the environment and got the following:
    Detected memory leaks!
    Dumping objects ->
    {596} normal block at 0x01970040, 1048596 bytes long.
    Data: < > 14 00 10 00 DB DB DB DB 0B 00 10 00 01 00 00 00
    {578} normal block at 0x00397978, 464 bytes long.
    Data: < > D0 01 00 00 DB DB DB DB C7 01 00 00 01 00 00 00
    Object dump complete.
    I had an idea that BDB would reuse the memory, right? OK, let's try to create the same environment and open it. After the environment was opened, closed, opened again and closed again, I got the following:
    Detected memory leaks!
    Dumping objects ->
    {3663} normal block at 0x01B80040, 1048596 bytes long.
    Data: < > 14 00 10 00 DB DB DB DB 0B 00 10 00 01 00 00 00
    {3645} normal block at 0x00396E60, 464 bytes long.
    Data: < > D0 01 00 00 DB DB DB DB C7 01 00 00 01 00 00 00
    {596} normal block at 0x01970040, 1048596 bytes long.
    Data: < > 14 00 10 00 DB DB DB DB 0B 00 10 00 01 00 00 00
    {578} normal block at 0x00397978, 464 bytes long.
    Data: < > D0 01 00 00 DB DB DB DB C7 01 00 00 01 00 00 00
    Object dump complete.
    So memory was not reused, nor deallocated.
    By the way, you may be interested in another leak I found, but fixed; see
    Replication manager memory leak when setting local site information.
    This leak is more serious, and I am not sure I will fix it quickly. Maybe I'm doing something wrong? Could you please suggest something?
    Thanks in advance!
    With regards,
    Vladislav.

    OK, the problem was solved by fixing code in file 'log.c', method '__log_dbenv_refresh'.
    I just added code that deallocates the memory of the bulk buffer:
    if (IS_ENV_REPLICATED(dbenv))
         if (lp->bulk_buf != INVALID_ROFF)
              __db_shalloc_free(&dblp->reginfo, lp->bulk_buf);
    lp->bulk_buf = INVALID_ROFF;
    lp->bulk_len = 0;
    lp->bulk_off = 0;
    It was allocated in the '__log_open' function, by the following code:
              lp->ready_lsn = lp->lsn;
              if (IS_ENV_REPLICATED(dbenv)) {
                   if ((ret = __db_shalloc(&dblp->reginfo, MEGABYTE, 0,
                       &bulk)) != 0)
                        goto err;
                   lp->bulk_buf = R_OFFSET(&dblp->reginfo, bulk);
                   lp->bulk_len = MEGABYTE;
                   lp->bulk_off = 0;
              } else {
                   lp->bulk_buf = INVALID_ROFF;
                   lp->bulk_len = 0;
                   lp->bulk_off = 0;
              }
    Sorry for the time taken to read my posts; I really needed quick help, but solved the problems myself.

  • ICMP Timeout Alarm due to TCP Protocol Memory Allocation Failure ?

    Hello Experts ,
      >> Device uptime suggests there was no reboot
    ABCSwitch uptime is 28 weeks, 13 hours, 50 minutes
    System returned to ROM by power-on
    System restarted at 13:09:45 UTC Mon Aug 5 2013
    System image file is "flash:c2950-i6k2l2q4-mz.121-22.EA12.bin"
    >> But observed logs mentioning Memory Allocation Failure for TCP Protocol Process ( Process ID 43) due to Memory Fragmentation
    003943: Feb 18 02:14:27.393 UTC: %SYS-2-MALLOCFAIL: Memory allocation of 36000 bytes failed from 0x801E876C, alignment 0
    Pool: Processor Free: 120384 Cause: Memory fragmentation
    Alternate Pool: I/O Free: 682800 Cause: Memory fragmentation
    -Process= "TCP Protocols", ipl= 0, pid= 43
    -Traceback= 801C422C 801C9ED0 801C5264 801E8774 801E4CDC 801D9A8C 8022E324 8022E4BC
    003944: Feb 18 02:14:27.397 UTC: %SYS-2-CFORKMEM: Process creation of TCP Command failed (no memory).
    -Process= "TCP Protocols", ipl= 0, pid= 43
    -Traceback= 801E4D54 801D9A8C 8022E324 8022E4BC
    According to the Cisco documentation for troubleshooting memory issues on Cisco IOS 12.1 (http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-121-mainline/6507-mallocfail.html#tshoot4), the TCP Protocols process could not be started because memory was fragmented:
    Memory Fragmentation Problem or Bug
    This situation means that a process has consumed a large amount of processor memory and then released most or all of it, leaving fragments of memory still allocated either by this process, or by other processes that allocated memory during the problem. If the same event occurs several times, the memory may fragment into very small blocks, to the point where all processes requiring a larger block of memory cannot get the amount of memory that they need. This may affect router operation to the extent that you cannot connect to the router and get a prompt if the memory is badly fragmented.
    This problem is characterized by a low value in the "Largest" column (under 20,000 bytes) of the show memory command, but a sufficient value in the "Freed" column (1MB or more), or some other wide disparity between the two columns. This may happen when the router gets very low on memory, since there is no defragmentation routine in the IOS.
    If you suspect memory fragmentation, shut down some interfaces. This may free the fragmented blocks. If this works, the memory is behaving normally, and all you have to do is add more memory. If shutting down interfaces doesn't help, it may be a bug. The best course of action is to contact your Cisco support representative with the information you have collected.
    >>Further TCP -3- FORKFAIL logs were seen
    003945: Feb 18 02:14:27.401 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    003946: Feb 18 02:14:27.585 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    003947: Feb 18 02:14:27.761 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    003948: Feb 18 02:14:27.929 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    003949: Feb 18 02:14:29.149 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    The error explanation from the Cisco documentation (http://www.cisco.com/c/en/us/td/docs/ios/12_2sx/system/messages/122sxsms/sm2sx09.html#wp1022051)
    suggests that the TCP handler process for a client could not be created or initialized:
    Error Message %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    Explanation The system failed to create a process to handle requests  from a client. This condition could be caused by insufficient  memory.
    Recommended Action Reduce other system activity to ease  memory demands.
    But I am still not sure about the exact root cause, because:
    1. The GET/GETNEXT/GETBULK messages from the SNMP manager (here, IBM Tivoli Netcool) use the default SNMP port 161, which is UDP and not TCP.
    2. If it is an ICMP polling failure from IBM Tivoli Netcool: ICMP is protocol number 1 in the Internet layer of the TCP/IP protocol suite, and TCP is protocol number 6 in the transport layer of the TCP/IP protocol suite.
    So I am still not sure how a TCP Protocols process failure could have caused an ICMP timeout. Please help!
    Could you please also explain what the TCP Protocols process handles on a Cisco switch?
    Regards,
    Anup


  • Field for resource allocation in project online

    Hi,
    I'm running Project Pro for Office 365. I'm trying to build some Business Intelligence reports. I would like to make a PivotChart in Excel where the X-axis is time and the Y-axis is the allocation percentage, with a filter to select the resources I want to show the bar chart for. That way I can see over the next few weeks/months who is going to be too busy once they go over 100%. How/where do I get the field for resource allocation in the OData feed? What is its name, and is there a list somewhere of all the fields one can include in a custom OData query?
    Thank you for your help.

    Jim,
    No, using the GUI. Resource Rates are per resource, and will have to be populated when creating the resource.
    However, you can bulk edit the resources in Project Pro and set the standard rate in one go.
    Cheers,
    Prasanna Adavi, Project MVP

  • Problem with getting bulk output in 2D char array in precompiler program

    In an ANSI dynamic SQL precompiler (Pro*C) program, how do I obtain the bulk output of a column of string type (VARCHAR or CHAR) in a host variable array?
    I do not want to use reference semantics, instead want to use get descriptor directive. Is it possible to use 2 dimensional character array for getting the output?
    I am dynamically allocating memory for the host arrays.

    Consider asking this in the forum for C and C++ (OCI and OCCI).

  • DIFFERENCE BETWEEN THE FULL AND BULK-LOGGED RECOVERY MODEL

    In the BULK_LOGGED recovery model certain bulk operations are minimally logged; in the FULL recovery model they are fully logged. These bulk operations are listed below:
    1. SELECT INTO
    2. Bulk import operations, including BULK INSERT and bcp
    3. INSERT INTO ... SELECT commands using the OPENROWSET(BULK) function
    4. Partial updates to columns having large value data types
    5. Using the .WRITE clause in UPDATE statements
    6. Index operations, e.g. CREATE INDEX, ALTER INDEX REBUILD, DROP INDEX
    In the bulk-logged recovery model, when you execute these operations SQL Server only logs the fact that the operation occurred plus information about space allocation; the extents changed by the operation are tracked in the BCM (Bulk Changed Map).
    Since the actual data changes are not fully recorded in the log file, the log is relatively smaller, but this tradeoff comes at the price of larger, slower log backups. This is because during a log backup it is not just the log that is backed up, but also the extents marked as changed in the Bulk Changed Map.
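    As a rough illustration (a minimal sketch, with a hypothetical database SalesDW and staging table dbo.StageSales), switching to bulk-logged around a large load and then back to full looks like this:
    -- Switch to bulk-logged before the large load so that SELECT INTO / BULK INSERT
    -- are minimally logged (database, table and backup path are placeholders).
    ALTER DATABASE SalesDW SET RECOVERY BULK_LOGGED;
    SELECT *
    INTO dbo.StageSales_Copy
    FROM dbo.StageSales;   -- minimally logged under BULK_LOGGED
    -- Return to full recovery and take a log backup right away; this backup also
    -- copies the extents flagged in the Bulk Changed Map.
    ALTER DATABASE SalesDW SET RECOVERY FULL;
    BACKUP LOG SalesDW TO DISK = N'D:\Backups\SalesDW_log.trn';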
    SQLEnthusiast
    http://sqlsimplified.com/
    Please click the Mark as Answer button if a post solves your problem!

  • Bulk Fetch From an Oracle Sequence

    I am trying to get a range of sequence values from an Oracle sequence.
    I am using the option shown below:
    SELECT SEQUENCE_NAME.NEXTVAL FROM SYS.DUAL CONNECT BY LEVEL <= 10;
    The above SQL gets 10 sequence values.
    I just wanted to check whether the implementation below is safe in a multi-user environment.
    Is the statement shown below atomic? I.e., would multiple parallel executions of the same function cause any inconsistencies?
    EXECUTE IMMEDIATE 'SELECT SEQUENCE_NAME.NEXTVAL ' ||
      'FROM SYS.DUAL CONNECT BY LEVEL <= ' || TO_CHAR(i_quantity)
      BULK COLLECT INTO v_seq_list;
    FUNCTION select_sequence_nextval_range(
       i_quantity      IN  INTEGER)
    RETURN INTEGER IS
      o_nextval INTEGER;
      v_seq_list sequence_list;
    BEGIN
      EXECUTE IMMEDIATE 'SELECT SEQUENCE_NAME.NEXTVAL ' ||
      'FROM SYS.DUAL CONNECT BY LEVEL <= ' || TO_CHAR(i_quantity)
      BULK COLLECT INTO v_seq_list;
      -- Get the first poid value.
      o_nextval := v_seq_list(1);
      RETURN o_nextval;
    END select_sequence_nextval_range;

    Acquire Lock
    You acquire a lock on a sequence? That's news to me - please post the code that does that. I certainly hope you don't mean you are directly accessing the SYS.SEQ$ table to lock the row for that sequence - it isn't nice to mess with Oracle's tables!
    For couple of JAVA/C applications the usage of sequence number is pretty big. Could be 100,000 for one single application processing.
    How does that correlate with your previous statement that you get 10 at a time?
    Sequences aren't designed for use cases that require gap-free sets of numbers or for use cases that require consecutive sets of numbers.
    We wanted to implement the range get of sequence using a different mechanism.
    For few other applications; we just need one sequence number for the application processing. So we use the select seq.nextval to get the value. So the same sequence number needs to serve the role of giving a single value as well as a consecutive range of values.
    Then you may need to consider using your own table to track the chunks that need to be allocated. You would use a scheme similar to what Greg.Spall discussed except you would keep the 'chunk' data in your own table.
    I'm not talking about using your own table to control actual one-by-one sequence number generation - that is a very bad idea. But if you need to work with large ranges that are allocated infrequently there is nothing wrong with using your own function and your own table to keep track of those allocations.
    The 'one by one' number generation would be handled by an actual sequence. The generation of a 'start value' and an 'end value' would be handled by accessing your custom table. Each row in that table would have 'start_value' and 'available_numbers' columns.
    Your function would take a parameter for how many numbers you need. For just one number the function would call the sequence.nextval and return that along with a count of '1'.
    For a range the function would:
    1. find a row in the table with an 'available_numbers' value large enough to satisfy the request,
    2. lock the row for update
    3. capture the 'start_value' for return to the user
    4. adjust both the 'start_value' and 'available_numbers' values to account for the range being allocated
    5. update the table and commit
    6. return the 'start_value' and 'number_allocated' to the user (number_allocated might be LESS than requested perhaps)
    The above is a viable solution ONLY if the frequency of allocation and the size of allocation avoids the serialization issues associated with trying to allocate your own sequence numbers.
    Those issues can be somewhat mitigated by having the table store multiple rows with each row having a large chunk of values that can be allocated. Then your function query can get the first 'unlocked' row and avoid serializing just because one row is currently locked.
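    A minimal sketch of that chunk-table approach, assuming a hypothetical table num_chunks(chunk_id, start_value, available_numbers) that has been pre-seeded with large ranges; the SELECT ... FOR UPDATE serializes callers only on the chunk row being consumed:
    CREATE TABLE num_chunks (
      chunk_id          NUMBER PRIMARY KEY,
      start_value       NUMBER NOT NULL,
      available_numbers NUMBER NOT NULL
    );
    CREATE OR REPLACE FUNCTION get_number_range (
      p_quantity    IN  NUMBER,
      o_start_value OUT NUMBER
    ) RETURN NUMBER   -- returns how many numbers were actually allocated
    IS
      v_chunk_id num_chunks.chunk_id%TYPE;
      v_avail    num_chunks.available_numbers%TYPE;
    BEGIN
      -- find and lock one chunk row that can satisfy the request
      SELECT chunk_id, start_value, available_numbers
        INTO v_chunk_id, o_start_value, v_avail
        FROM num_chunks
       WHERE available_numbers >= p_quantity
         AND ROWNUM = 1
         FOR UPDATE;
      -- consume the requested range from the front of the chunk
      UPDATE num_chunks
         SET start_value       = start_value + p_quantity,
             available_numbers = available_numbers - p_quantity
       WHERE chunk_id = v_chunk_id;
      COMMIT;
      RETURN p_quantity;
    END get_number_range;
    /
    A NO_DATA_FOUND from the SELECT would mean no chunk is large enough, which the caller has to handle (for example by seeding a new chunk).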

  • Bulk phone import and export CUCM 8.5

    In preparation for a migration to CUCM 9.1.2 we need to update the owner ID field on around 7000 phones. The only way to do this through the Bulk Administration Tool is to export all phone details, update the field and then re-import all the phones. The export generates a .txt file. If I open this as a CSV in Excel it automatically chooses the column data types, which messes with the number fields by stripping off leading 0s. If I try to import the data, Excel says there are too many columns. How have others been able to work with the "phone all details" export?

    If there is a Cisco utility to do this I would be grateful. To be honest I'm getting rather annoyed with this process. So far I've run into these 2 bugs:
    https://tools.cisco.com/bugsearch/bug/CSCsu97248
    https://tools.cisco.com/bugsearch/bug/CSCtl87980
    One of my colleagues has created a script that goes through the exported file and fixes up the service parameters, so we got around these. I piloted re-importing a couple of hundred devices and a few spat out SQL/DB connection errors. I then tried to update another 400 phones the next night and, although the validation goes fine, when I try to import I'm faced with a memory allocation error:
    sql.SQLException: Memory allocation failed during query processing.
    I know this is most likely specific to our CUCM environment and we will be doing a restart tonight to hopefully resolve the issue (the TAC engineer suggested restarting the DB service). However, it just seems like one issue after another to do a fairly simple task. The whole process of having to import every field on every phone to update one field is ridiculous. I have a list of phone IDs and a list of users; why isn't there a way to define a custom file format and use the update option rather than having to re-insert all details?
    This blogger wrote a cURL AXL script to pretty much do what I'm suggesting:
    http://pandaeatsbamboo.blogspot.hk/2014/01/shell-script-to-update-device-user.html
    However I have no prior experience with working with the CUCM Database via SQL and if anything goes wrong I would be stuffed.

  • Error when Bulk load hierarchy data

    Hi,
    While loading the P6 Reporting databases, the following error message appears at the step in charge of bulk loading hierarchy data into ODS.
    <04.29.2011 14:03:59> load [INFO] (Message) - === Bulk load hierarchy data into ODS (ETL_LOADWBSHierarchy.ldr)
    <04.29.2011 14:04:26> load [INFO] (Message) - Load completed - logical record count 384102.
    <04.29.2011 14:04:26> load [ERROR] (Message) - SqlLoaderSQL LOADER ACTION FAILED. [control=D:\oracle\app\product\11.1.0\db_1\p6rdb\scripts\DATA_WBSHierarchy.csv.ldr] [file=D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy\DATA_WBSHierarchy.csv]
    <04.29.2011 14:04:26> load [INFO] (Progress) - Step 3/9 Part 5/6 - FAILED (-1) (0 hours, 0 minutes, 28 seconds, 16 milliseconds)
    Checking the corresponding log file (see below), I see that some records are indeed rejected. The question is: how can I identify the source of the problem and fix it?
    SQL*Loader: Release 11.1.0.6.0 - Production on Mon May 2 09:03:22 2011
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Control File:   DATA_WBSHierarchy.csv.ldr
    Character Set UTF16 specified for all input.
    Using character length semantics.
    Byteorder little endian specified.
    Data File:      D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy\DATA_WBSHierarchy.csv
    Bad File:     DATA_WBSHierarchy.bad
    Discard File:  none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:    none specified
    Path used:      Conventional
    Table WBSHIERARCHY, loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect
    Column Name                  Position   Len  Term Encl Datatype
    PARENTOBJECTID                      FIRST     *  WHT      CHARACTER
    PARENTPROJECTID                      NEXT     *  WHT      CHARACTER
    PARENTSEQUENCENUMBER                 NEXT     *  WHT      CHARACTER
    PARENTNAME                           NEXT     *  WHT      CHARACTER
    PARENTID                             NEXT     *  WHT      CHARACTER
    CHILDOBJECTID                        NEXT     *  WHT      CHARACTER
    CHILDPROJECTID                       NEXT     *  WHT      CHARACTER
    CHILDSEQUENCENUMBER                  NEXT     *  WHT      CHARACTER
    CHILDNAME                            NEXT     *  WHT      CHARACTER
    CHILDID                              NEXT     *  WHT      CHARACTER
    PARENTLEVELSBELOWROOT                NEXT     *  WHT      CHARACTER
    CHILDLEVELSBELOWROOT                 NEXT     *  WHT      CHARACTER
    LEVELSBETWEEN                        NEXT     *  WHT      CHARACTER
    CHILDHASCHILDREN                     NEXT     *  WHT      CHARACTER
    FULLPATHNAME                         NEXT  8000  WHT      CHARACTER
    SKEY                                                      SEQUENCE (MAX, 1)
    value used for ROWS parameter changed from 64 to 21
    Record 14359: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 14360: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 14361: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 27457: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 27458: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 27459: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 38775: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 38776: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 38777: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 52411: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 52412: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 52413: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 114619: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 114620: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 127921: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 127922: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 164588: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 164589: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 171322: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 171323: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 186779: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 186780: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 208687: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 208688: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 221167: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 221168: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 246951: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 246952: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Table WBSHIERARCHY:
    384074 Rows successfully loaded.
    28 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array:                 244377 bytes(21 rows)
    Read   buffer bytes: 1048576
    Total logical records skipped:          0
    Total logical records read:        384102
    Total logical records rejected:        28
    Total logical records discarded:        0
    Run began on Mon May 02 09:03:22 2011
    Run ended on Mon May 02 09:04:07 2011
    Elapsed time was:     00:00:44.99

    Hi Mandeep,
    Thanks for the information, but it still does not seem to work.
    Actually, I have Group ID and Group Name as display fields in the Hierarchy table.
    Group ID I have mapped directly to Group ID.
    I have created a Split Hierarchy of Group Name and mapped it.
    I have also made all the option configurations as per your suggestions, but it still does not work.
    Can you please help?
    Thanks,
    Priya.

  • Problem with BULK COLLECT with million rows - Oracle 9.0.1.4

    We have a requirement where we are supposed to load 58 million rows into a FACT table in our data warehouse. We initially planned to use Oracle Warehouse Builder but, due to performance reasons, decided to write custom code. We wrote a custom procedure which opens a simple cursor, reads all 58 million rows from the SOURCE table and, in a loop, processes the rows and inserts the records into a TARGET table. The logic works fine, but it took 20 hrs to complete the load.
    We then tried to leverage BULK COLLECT, FORALL and the PARALLEL option, and modified our PL/SQL code completely to reflect these. Our code looks very simple.
    1. We declared PL/SQL tables indexed by BINARY_INTEGER to store the data in memory.
    2. We used BULK COLLECT to FETCH the data.
    3. We used FORALL statement while inserting the data.
    We did not introduce any of our transformation logic yet.
    We tried with 600,000 records first and it completed in 1 min and 29 sec with no problems. We then doubled the number of rows to 1.2 million and the program crashed with the following error:
    ERROR at line 1:
    ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu
    call ,pmucalm coll)
    ORA-06512: at "VVA.BULKLOAD", line 66
    ORA-06512: at line 1
    We got the same error even with 1 million rows.
    We do have the following configuration:
    SGA - 8.2 GB
    PGA
    - Aggregate Target - 3GB
    - Current Allocated - 439444KB (439 MB)
    - Maximum allocated - 2695753 KB (2.6 GB)
    Temp Table Space - 60.9 GB (Total)
    - 20 GB (Available approximately)
    I think we do have more than enough memory to process the 1 million rows!!
    Also, some times the same program results in the following error:
    SQL> exec bulkload
    BEGIN bulkload; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    We did not even attempt the full load. Also, we are not using the PARALLEL option yet.
    Are we hitting a bug here? Or is PL/SQL not capable of mass loads? I would appreciate any thoughts on this.
    Thanks,
    Haranadh
    Following is the code:
    set echo off
    set timing on
    create or replace procedure bulkload as
    -- SOURCE --
    TYPE src_cpd_dt IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
    TYPE src_acqr_ctry_cd IS TABLE OF ima_ama_acct.acqr_ctry_cd%TYPE;
    TYPE src_acqr_pcr_ctry_cd IS TABLE OF ima_ama_acct.acqr_pcr_ctry_cd%TYPE;
    TYPE src_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
    TYPE src_mrch_locn_ref_id IS TABLE OF ima_ama_acct.mrch_locn_ref_id%TYPE;
    TYPE src_ntwrk_id IS TABLE OF ima_ama_acct.ntwrk_id%TYPE;
    TYPE src_stip_advc_cd IS TABLE OF ima_ama_acct.stip_advc_cd%TYPE;
    TYPE src_authn_resp_cd IS TABLE OF ima_ama_acct.authn_resp_cd%TYPE;
    TYPE src_authn_actvy_cd IS TABLE OF ima_ama_acct.authn_actvy_cd%TYPE;
    TYPE src_resp_tm_id IS TABLE OF ima_ama_acct.resp_tm_id%TYPE;
    TYPE src_mrch_ref_id IS TABLE OF ima_ama_acct.mrch_ref_id%TYPE;
    TYPE src_issr_pcr IS TABLE OF ima_ama_acct.issr_pcr%TYPE;
    TYPE src_issr_ctry_cd IS TABLE OF ima_ama_acct.issr_ctry_cd%TYPE;
    TYPE src_acct_num IS TABLE OF ima_ama_acct.acct_num%TYPE;
    TYPE src_tran_cnt IS TABLE OF ima_ama_acct.tran_cnt%TYPE;
    TYPE src_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;
    src_cpd_dt_array src_cpd_dt;
    src_acqr_ctry_cd_array      src_acqr_ctry_cd;
    src_acqr_pcr_ctry_cd_array     src_acqr_pcr_ctry_cd;
    src_issr_bin_array      src_issr_bin;
    src_mrch_locn_ref_id_array     src_mrch_locn_ref_id;
    src_ntwrk_id_array      src_ntwrk_id;
    src_stip_advc_cd_array      src_stip_advc_cd;
    src_authn_resp_cd_array      src_authn_resp_cd;
    src_authn_actvy_cd_array      src_authn_actvy_cd;
    src_resp_tm_id_array      src_resp_tm_id;
    src_mrch_ref_id_array      src_mrch_ref_id;
    src_issr_pcr_array      src_issr_pcr;
    src_issr_ctry_cd_array      src_issr_ctry_cd;
    src_acct_num_array      src_acct_num;
    src_tran_cnt_array      src_tran_cnt;
    src_usd_tran_amt_array      src_usd_tran_amt;
    j number := 1;
    CURSOR c1 IS
    SELECT
    cpd_dt,
    acqr_ctry_cd ,
    acqr_pcr_ctry_cd,
    issr_bin,
    mrch_locn_ref_id,
    ntwrk_id,
    stip_advc_cd,
    authn_resp_cd,
    authn_actvy_cd,
    resp_tm_id,
    mrch_ref_id,
    issr_pcr,
    issr_ctry_cd,
    acct_num,
    tran_cnt,
    usd_tran_amt
    FROM ima_ama_acct ima_ama_acct
    ORDER BY issr_bin;
    BEGIN
    OPEN c1;
    FETCH c1 bulk collect into
    src_cpd_dt_array ,
    src_acqr_ctry_cd_array ,
    src_acqr_pcr_ctry_cd_array,
    src_issr_bin_array ,
    src_mrch_locn_ref_id_array,
    src_ntwrk_id_array ,
    src_stip_advc_cd_array ,
    src_authn_resp_cd_array ,
    src_authn_actvy_cd_array ,
    src_resp_tm_id_array ,
    src_mrch_ref_id_array ,
    src_issr_pcr_array ,
    src_issr_ctry_cd_array ,
    src_acct_num_array ,
    src_tran_cnt_array ,
    src_usd_tran_amt_array ;
    CLOSE C1;
    FORALL j in 1 .. src_cpd_dt_array.count
    INSERT INTO ima_dly_acct (
         CPD_DT,
         ACQR_CTRY_CD,
         ACQR_TIER_CD,
         ACQR_PCR_CTRY_CD,
         ACQR_PCR_TIER_CD,
         ISSR_BIN,
         OWNR_BUS_ID,
         USER_BUS_ID,
         MRCH_LOCN_REF_ID,
         NTWRK_ID,
         STIP_ADVC_CD,
         AUTHN_RESP_CD,
         AUTHN_ACTVY_CD,
         RESP_TM_ID,
         PROD_REF_ID,
         MRCH_REF_ID,
         ISSR_PCR,
         ISSR_CTRY_CD,
         ACCT_NUM,
         TRAN_CNT,
         USD_TRAN_AMT)
         VALUES (
         src_cpd_dt_array(j),
         src_acqr_ctry_cd_array(j),
         null,
         src_acqr_pcr_ctry_cd_array(j),
              null,
              src_issr_bin_array(j),
              null,
              null,
              src_mrch_locn_ref_id_array(j),
              src_ntwrk_id_array(j),
              src_stip_advc_cd_array(j),
              src_authn_resp_cd_array(j),
              src_authn_actvy_cd_array(j),
              src_resp_tm_id_array(j),
              null,
              src_mrch_ref_id_array(j),
              src_issr_pcr_array(j),
              src_issr_ctry_cd_array(j),
              src_acct_num_array(j),
              src_tran_cnt_array(j),
              src_usd_tran_amt_array(j));
    COMMIT;
    END bulkload;
    /
    SHOW ERRORS
    -----------------------------------------------------------------------------

    Do you have a unique key available in the rows you are fetching?
    It seems a cursor with 20 million rows that is as wide as all the columns you want to work with is a lot of memory for the server to use at once. You may be able to do this with parallel processing (DOP over 8) and a lot of memory for the warehouse box (and the box you are extracting data from)... but is this the most efficient (and thereby fastest) way to do it?
    What if you used a cursor to select a unique key only, and then during the cursor loop fetch each record, transform it, and insert it into the target?
    It's a different way to do a lot at once, but it cuts down on the overall memory overhead for the process.
    I know this isn't as elegant as a single insert to do it all at once, but sometimes trimming a process down so it takes less resources at any given moment is much faster than trying to do the whole thing at once.
    My solution is probably biased by transaction systems, so I would be interested in what the data warehouse community thinks of this.
    For example:
    source table my_transactions (tx_seq_id number, tx_fact1 varchar2(10), tx_fact2 varchar2(20), tx_fact3 number, ...)
    select a cursor of tx_seq_id only (even at 20 million rows this is not much)
    you could then either use a for loop or even bulk collect into a plsql collection or table
    then process individually like this:
    procedure process_a_tx(p_tx_seq_id in number)
    is
      rTX my_transactions%rowtype;
    begin
      select * into rTX from my_transactions where tx_seq_id = p_tx_seq_id;
      -- modify values as needed
      insert into my_target(a, b, c) values (rTX.tx_fact1, rTX.tx_fact2, rTX.tx_fact3);
      commit;
    exception
      when others then
        rollback;
        -- write to a log or raise an exception
    end process_a_tx;
    procedure collect_tx
    is
      cursor cTx is
        select tx_seq_id from my_transactions;
    begin
      for rTx in cTx loop
        process_a_tx(rTx.tx_seq_id);
      end loop;
    end collect_tx;
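    For completeness, the original poster's bulk approach can also be kept memory-bounded by fetching in batches with the LIMIT clause. A minimal sketch, reusing the hypothetical my_transactions (tx_fact1/2/3) and my_target(a, b, c) tables from the example above:
    create or replace procedure bulkload_limited as
      type f1_tab is table of my_transactions.tx_fact1%type index by binary_integer;
      type f2_tab is table of my_transactions.tx_fact2%type index by binary_integer;
      type f3_tab is table of my_transactions.tx_fact3%type index by binary_integer;
      l_f1 f1_tab;
      l_f2 f2_tab;
      l_f3 f3_tab;
      cursor c_src is
        select tx_fact1, tx_fact2, tx_fact3 from my_transactions;
    begin
      open c_src;
      loop
        -- fetch at most 10,000 rows per round trip to cap memory usage
        fetch c_src bulk collect into l_f1, l_f2, l_f3 limit 10000;
        exit when l_f1.count = 0;
        forall i in 1 .. l_f1.count
          insert into my_target (a, b, c)
          values (l_f1(i), l_f2(i), l_f3(i));
        commit;  -- commit per batch; adjust to your restartability needs
      end loop;
      close c_src;
    end bulkload_limited;
    /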

  • Bulk email from the database

    Good morning,
    Running Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production... (we'll be upgrading to 11g soon, but for now, stuck with 9i)
    Our clients have requested an email subscription service to be implemented for our news web site. The anticipated load will be:
    one module to support the following:
    - 3000 emails, with up to 50 instant email notifications an hour... so 150,000 emails delivered per hour. Email is sent as soon as new article is posted. (articles are embargoed and go live once embargo has passed)
    The other module is an open subscription form with double opt-in for the general public... with choices of instant email notification, daily digest or weekly digest of news article postings. The anticipated load is difficult to estimate for this one, but we could expect 50,000+ subscriptions... with up to 500 article postings per day. End users will have the ability to customize their subscription based on audience, department etc...
    In summary, we're looking to implement a solution that can deliver hundreds of thousands of emails a day.
    As an Oracle developer, I always look first to the DB for a solution. I know in 10g and up, utl_mail is available... not so in 9i. I do have a sample mail package from OTN that seems to do the trick. Emails will need to be sent as HTML, since they will contain a small image reference.
    Some potential ideas I have considered so far:
    for instant email notifications:
    - setting up queue tables and DBMS_JOB to monitor every minute and send email out to recipients using PL/SQL mail procedure
    - using Oracle AQ to manage queues, publish the payload with complete article information, subscriber will dequeue messages and send out using same PL/SQL email procedure. Messages in the queue will have delay set to article embargo date, to ensure articles are not emailed ahead of publishing.
    I have never worked with Oracle AQ before, but it seems to offer some benefits and more intelligence than a custom solution. I have also considered setting up a new Oracle instance for sending out emails, to offload some of the work from the main instance feeding the news web site.
    the daily and weekly digest emails are not as big a concern at this point, since they will be processed during off-peak hours, and will run once day/week... the Oracle AQ solution I thought would be an elegant and scalable solution for the instant notifications...
    My major concern at this point is scalability and performance... I will rely on bulk processing in SQL to collect the data... looping through the arrays to send out emails, as well as building up the email objects and sending them out in time, is a concern.
    Given the potential volume we'll be dealing with, is a solution in Oracle the proper way to go? Our organization doesn't have an enterprise solution for this of course, so we have to build it from scratch. Other environments/tools at our disposal: Oracle 10g (10.1.3) application servers running Java... we could use JavaMail... our current set-up is 2 load-balanced application servers, with multiple OC4J containers running on each.
    Thanks for any tips or advice you may have.
    Stephane

    Bill... thank you for taking the time to respond in such a detailed manner. This is greatly appreciated. I have passed this along to our DBAs and messaging experts for review.
    I'm tasked with modeling the DB and optimizing it so it can achieve the target performance levels.
    Billy  Verreynne  wrote:
    Each SMTP conversation (from UTL_SMTP or UTL_MAIL) requires a socket handle. This is a server resource that needs to be allocated. The o/s has limits on the number of server resources a single process can use and that a single o/s account can use. Does not help that you have a scalable design, scalable code, and the server cannot supply the resources required to scale. So server capability and config are quite important.
    We've gotten our Unix specialists to look into this... we've been told our current platform is limited, and no further upgrades will be allocated, since we'll be moving to a newer platform in the "near future".
    There's also the physical network itself - as those 100,000 mails will each have a specific size and need to be transported from the client (Oracle db server) to the server (mail/smtp server). The network must have the capacity and b/w to support this in addition to the current loads it is handling.
    Our network analysts will be putting us on a segregated network (subnet) to avoid impacting the rest of the organization... although bandwidth is shared at the end of the day, we'll be somewhat isolated, with perhaps even our own firewalls and load balancers.
    You will need to look at a couple of things to decide on how to put all of this together. How are e-mails created in the database? Can they be immediately transmitted after being committed? Is the actual Mime body of the e-mail created by the transaction, or is it normal row and column data that afterward needs to be converted into a Mime body for mailing?
    The scheduling of emails is tricky... articles are sometimes posted (committed to the DB), but still under embargo... we plan to use this embargo date (the date on which an article goes live/public) as the "delay" in the job scheduling. At that point, the article is emailed to thousands.
    For instant notifications, all recipients get the same content, article content direct in email. There custom pieces to include, eg unsubscribe link with unique identifier, edit subscription etc... but these bits could be pre-generated and stored with the email subscriber info, and appended to the mail body. No images or other binary files are embedded or attached... so we're dealing with mostly text/html in the body.
    Scalability is a direct function of design. So you will need to design each component for scalability. For example, do you create the Mime body dynamically as part of the e-mail transmission code unit? Or do you have a separate code unit that creates and persists Mime bodies that then are transmitted by the transmission code unit? Instant notification html email composition is a bit simpler. It gets tricky with daily, weekly, monthly digests. Here we have to assemble thousands of custom email bodies, based on subscription options (up to 5 custom fields, to configure subscription content)... from there assemble each individual mail body based on current subscriber options, then send the email.
    Mail bodies are potentially different for each individual subscriber, given the various permutations of selections, so really don't see how a body can be persisted for re-use when emailing, each may be single use only. In this case, looking at assembling thousands of emails, then emailing each one in a loop.
    Do you for example queue a single DBMS_JOB per e-mail? There are overheads in starting a job. So do you pay this overhead per mail, or do you schedule a job to transmit a 100 e-mails and pay this overhead once per 100 mails?
    For instant notifications, we'd be queuing a job for every article posted. From there, every email address subscribed to instant notifications, and matching its subscription configuration to the article, will be retrieved and an email sent.
    So there are a number of factors to consider in terms of design: how to deal with Mime bodies, how to deal with exception processing, how to parallelise processing and so on. One factor will need to be how to deal with catch-up processing - as there will be a failure of some sort at a stage that means processing is some hours behind. And this needs to be factored into the design.
    An email job that fails half-way through concerns us... how do we proceed from where we left off, etc... we may have to keep track of job numbers etc...
    The other option we're considering, is clustered Oracle 10gR3 application servers... to process and send the emails, using JavaMail... there is still an issue with Oracle handling the query volume required to assemble the customized emails for each subscriber (which could reach 50,000 within a year or two)...
    I would not select that as an architecture. This moves the data and application away from one another - into different process boundaries and even across hardware boundaries.
    When using PL/SQL, both data processing (SQL layer) and conditional processing and logic (PL layer) are integrated into a single server process. There is no faster way or more scalable way of combining code and data in Oracle. It does not matter how capable that Java platform/architecture is. For that Java code to get SQL data means shipping that data across a JDBC connection (and potentially between servers and across network infrastructure).
    In PL/SQL, it means a simple context switch from PL to SQL to fetch the data.. and even that we consider "slow" and mitigate using bulk processing in PL/SQL in order to decrease context switching.
    The fact that the data path for a Java app layer is a lot longer than for PL/SQL automatically means that Java will be slower.
    Totally agree with this. We're having a meeting this morning with all parties to review and discuss the points you have raised, and see if the required resources can be allocated on the Unix side to accommodate the potential load.
    >
    I'm looking at leveraging materialized views (to pre-assemble content), parallelism (query and procedural), Advanced Queuing (seems complex)... AQ is not that complex.. and perhaps not needed. You will however need a form of parallelism in order to run a number of e-mail transmission processes in parallel. The question is how do you tell each unique process what e-mails to transmit, without causing serialisation between parallel processes?
    This can be home rolled parallelism as shown in {message:id=1534900} (from a technique posted by Tom on asktom that is now a 11.2 feature). You can also use parallel pipelined tables. Or use AQ. I'm pretty sure that a solid design will support any of these - modularising the parallel processing portion of it and allowing different methods to be used to test drive and even benchmark the parallel processing component.
    If using AQ, we're considering a separate Oracle instance in a different AIX partition perhaps, which could manage the email function. Our main instance (which feeds our public web site, and stores all data), would push objects onto the queue, and items would be dequeued on the other end in the other Oracle instance.
    It however sounds like a very interesting project. Crunching lots of data and dealing with high processing demands... that is the software engineer's definition of fun. :-)
    Indeed... I wouldn't consider myself as a software engineer at this point, but perhaps after this is done, I'll have earned my stripes. ;-)
    Edited by: pl_sequel on Jul 13, 2010 9:57 AM
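    For reference, a minimal sketch of sending one HTML mail from PL/SQL with UTL_SMTP (which is available in 9i); the relay host, sender address and domain below are hypothetical placeholders:
    CREATE OR REPLACE PROCEDURE send_html_mail (
      p_to      IN VARCHAR2,
      p_subject IN VARCHAR2,
      p_html    IN VARCHAR2
    ) IS
      c    UTL_SMTP.CONNECTION;
      crlf CONSTANT VARCHAR2(2) := CHR(13) || CHR(10);
    BEGIN
      c := UTL_SMTP.OPEN_CONNECTION('smtp.example.com', 25);  -- hypothetical relay
      UTL_SMTP.HELO(c, 'news.example.com');
      UTL_SMTP.MAIL(c, 'newsroom@example.com');
      UTL_SMTP.RCPT(c, p_to);
      UTL_SMTP.OPEN_DATA(c);
      UTL_SMTP.WRITE_DATA(c, 'From: newsroom@example.com' || crlf);
      UTL_SMTP.WRITE_DATA(c, 'To: ' || p_to || crlf);
      UTL_SMTP.WRITE_DATA(c, 'Subject: ' || p_subject || crlf);
      UTL_SMTP.WRITE_DATA(c, 'MIME-Version: 1.0' || crlf);
      UTL_SMTP.WRITE_DATA(c, 'Content-Type: text/html; charset=UTF-8' || crlf || crlf);
      UTL_SMTP.WRITE_DATA(c, p_html);
      UTL_SMTP.CLOSE_DATA(c);
      UTL_SMTP.QUIT(c);
    END send_html_mail;
    /
    Batching (for example one job looping over a collection of recipients and reusing the SMTP connection) would then be layered on top of something like this.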

  • Dynamic BULK SELECT/MODIFY

    Hi,
    When using EXECUTE IMMEDIATE with a dynamically built SELECT/INSERT statement with varying table name, field names, field types and field counts, how do I make the BULK COLLECT INTO/USING parts dynamic too?
    Can I use record-type arrays if the type depends on the dynamic statement? Kind of like this (pseudo-code):
    procedure FETCH(p_Stmt VARCHAR2)
    valueType TABLE OF p_Stmt%REC_TYPE INDEXED BY
    BINARY_INTEGER;
    values valueType;
    LOOP
    EXECUTE IMMEDIATE p_Stmt
    BULK COLLECT INTO values LIMIT 1000;
    FOR values LOOP
    -- use values.a and values.b
    END LOOP
    END LOOP;
    FETCH('SELECT a,b FROM t');
    If I'm forced to use DBMS_SQL and bind OUT/IN arrays
    of different types do I need to pre-allocate arrays
    of each type against the chance that up to (say)
    100 DATE columns end up in need of being bound?
    Or can I reuse one array per type because the methods
    copy the arrays? Or is there a generic type that can
    be used to bind whatever type the column has?
    regards,
    Schenke

    Hi Justin,
    So I gathered (my question mentioned this option).
    I only had gotten the impression from some comments that DBMS_SQL may be slower than EXECUTE IMMEDIATE because of the extra layer of flexibility. Maybe there will be more parsing too.
    There seems to be no way to define the arrays needed for the DBMS_SQL IN/OUT binds dynamically. I don't think reusing the same array for different columns will work either. That means arrays must be defined for all possible columns, or at least enough of each type, with a dynamic allocation process (i.e. count the number of columns of each type and assign arrays with names like date1, date2, date3, ... to the 1st, 2nd, 3rd, ... DATE column in the statement).
    regards,
    Schenke
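    For what it's worth, a minimal sketch of the fully generic DBMS_SQL route: describe the columns, define every column as VARCHAR2 so one scalar variable covers all column types, and fetch row by row. The query is just a placeholder, and DBMS_SQL.DEFINE_ARRAY would still be needed on top of this for true bulk (array) fetches.
    declare
      l_cur    integer := dbms_sql.open_cursor;
      l_cols   integer;
      l_desc   dbms_sql.desc_tab;
      l_value  varchar2(4000);
      l_status integer;
    begin
      dbms_sql.parse(l_cur, 'select a, b from t', dbms_sql.native);
      dbms_sql.describe_columns(l_cur, l_cols, l_desc);
      -- define every select-list column as VARCHAR2, whatever its real type
      for i in 1 .. l_cols loop
        dbms_sql.define_column(l_cur, i, l_value, 4000);
      end loop;
      l_status := dbms_sql.execute(l_cur);
      while dbms_sql.fetch_rows(l_cur) > 0 loop
        for i in 1 .. l_cols loop
          dbms_sql.column_value(l_cur, i, l_value);
          dbms_output.put_line(l_desc(i).col_name || ' = ' || l_value);
        end loop;
      end loop;
      dbms_sql.close_cursor(l_cur);
    end;
    /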

  • Error in Add/Replace Bulk Load component - illegal character in XML

    Has anyone ever seen the bulk load component complain about an illegal character in XML? I see this error and am not sure what exactly the problem is:
    ERROR [SocketReader] - Received error message from server: Character is not legal in XML 1.0
    It's a very simple graph - reading data from a Clover data file and ingesting it straight into Endeca using the out-of-the-box bulk load component.
    Thanks for your help!
    Edited by: 935345 on May 18, 2012 11:48 AM

    Assuming you are on EID 2.3, this transformation will apply the fix to all your string fields and print on your console the fields that had non-compliant XML 1.0 characters.
    //#CTL2
    string[] fields;
    // Transforms input record into output record.
    function integer transform() {
        $out.0.* = $in.0.*;
        for (integer i = $in.0.length() - 1; i >= 0; i--) {
            if (getFieldType($in.0.*, i) == "string" && getFieldType($out.0.*, i) == "string") {
                if (!isNull($in.0.*, i)) {
                    string originalValue = getStringValue($in.0.*, i);
                    // strip characters that are not legal in an XML 1.0 document
                    string newValue = originalValue.replace("([^\\u0009\\u000a\\u000d\\u0020-\\uD7FF\\uE000-\\uFFFD]|[\\u0092\\u007F]+)","");
                    if (originalValue != newValue) {
                        fields[i] = getFieldName($in.0, i);
                    }
                    setStringValue($out.0.*, i, newValue);
                }
            }
        }
        return OK;
    }
    // Called during component initialization.
    // function boolean init() {}
    // Called during each graph run before the transform is executed. May be used to allocate and initialize resources
    // required by the transform. All resources allocated within this method should be released
    // by the postExecute() method.
    // function void preExecute() {}
    // Called only if transform() throws an exception.
    // function integer transformOnError(string errorMessage, string stackTrace) {}
    // Called during each graph run after the entire transform was executed. Should be used to free any resources
    // allocated within the preExecute() method.
    function void postExecute() {
        printErr("Fields with non-compliant XML 1.0 characters:");
        for (integer i = 0; i < fields.length(); i++) {
            if (fields[i] != null) {
                printErr(fields[i]);
            }
        }
    }
    // Called to return a user-defined error message when an error occurs.
    // function string getMessage() {}
    -- Alex
