Too many collections?

Hello everyone! I would like to ask a question about architecture. I am building a soccer database. Part of the model is as follows (expressed in UML):
        1            1..*          1            1..*
League -------------------- Team -------------------- Player

I suppose this model is correct. However, I need a user interface where users will be able to see a list of the leagues, the teams, and the players. They will also be able to change the information.
Does this mean that I have to have a collection class for each class in the system?
For instance, for the League class I need a collection of Leagues that is populated from the database with each league's information. I will also need a collection of Teams in order to show that information on screen. That would make the model look like this:
         1    1..*         1      1         1    1..*        1      1          1    1..*
Leagues ---------- League ---------- Teams ---------- Team ---------- Players ---------- Player

Are there any alternatives to this approach, or do I have to have collections of every class?
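(For concreteness: either way, the 1-to-many associations themselves can be carried as Java collections. This is only a minimal sketch; the class and field names are mine, not from an actual schema.)

import java.util.LinkedHashSet;
import java.util.Set;

// League 1 ---- 1..* Team 1 ---- 1..* Player
class League {
    int id;
    String name;
    Set<Team> teams = new LinkedHashSet<>();     // the 1..* end of League-Team
}

class Team {
    int id;
    String name;
    Set<Player> players = new LinkedHashSet<>(); // the 1..* end of Team-Player
}

class Player {
    int id;
    String name;
}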
Thanks in advance for your answers!

Thank you very much for your answer jverd.
I also want to keep the model simple. I was going to implement the 1-to-many relations with Java collections (LinkedHashSet), since the classes have already been generated, as you described, by the CASE tool I am using.
But it seems this will need a lot of processing. For example, in the first model, if I want to display a list of all the players irrespective of team (every player that exists in the system), I have to have a populateRelatedTeams method for each League (which fetches that league's teams), and calling it for every League eventually fetches all the teams in all the leagues. Then for every Team I need a populateRelatedPlayers method that fetches that team's players.
But this means a lot of SELECT queries against the database.
The second option I think I have (if I do not use dedicated collections) is to create a method in the Team class which gets all the players, irrespective of team. Then all I have to do in my code is create a "dummy" Team object and call that method. This needs only one SELECT statement and I won't have to populate the whole class tree. But I have a feeling that I am mixing things up if I do it this way. Or maybe not?
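(A common alternative to the "dummy Team" is to move such queries out of the domain classes into a separate data-access object, so the model stays clean and you still get the single SELECT. Below is a minimal JDBC sketch of that idea; the table and column names are hypothetical, since the actual schema isn't shown, and Player is the class sketched above.)

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.LinkedHashSet;
import java.util.Set;

// Data-access object: owns the "all players" query, so neither League
// nor Team needs a dummy instance or a fully populated object tree.
class PlayerDao {
    private final Connection conn;

    PlayerDao(Connection conn) { this.conn = conn; }

    // One SELECT fetches every player, irrespective of team.
    Set<Player> findAllPlayers() throws SQLException {
        Set<Player> players = new LinkedHashSet<>();
        // "player", "id" and "name" are assumed names; adjust to the real schema
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM player")) {
            while (rs.next()) {
                Player p = new Player();
                p.id = rs.getInt("id");
                p.name = rs.getString("name");
                players.add(p);
            }
        }
        return players;
    }
}

With that split, League, Team and Player stay plain model classes; the UI asks a DAO for whichever list it needs (one query per list), and nothing is mixed into the Team class that doesn't belong to a single team.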

Similar Messages

  • Too many BPM data collection jobs on backend system

    Hi all,
    We find about 40,000 data collection jobs running on our ECC6 system, far too many.
    We run about 12 solutions, all linked to the same backend ECC6 system. Most probably this is part of the problem. We plan to scale down to 1 solution rather than the country-based approach.
    But here we are now, and I have these questions.
    1. How can I relate a BPM_DATA_COLLECTION job on ECC6 back to a particular solution? The job log gives me a monitor ID, but I can't relate that back to a solution.
    2. If I deactivate a solution in the solution overview, does that immediately cancel the data collection for that solution?
    3. In the monitoring schedule on a business process step we sometimes have intervals defined as 5 minutes, sometimes 60. The strange thing is that the drop-down of that field does not always give us the same list of values. Even within a solution I see that in one step I have the choice of a long list of intervals, while in the next step of that same business process I can only choose between blank and 5 minutes.
    How is this defined?
    Thanks in advance,
    Rad.

    Hi,
    How did you manage to get rid of this issue? I am facing the same.
    Thanks,
    Manan

  • Please organize OS X Launchpad icons in CS6 collection (there are too many Adobe icons)

    Adobe applications create too many icons in Launchpad on OS X. Is there a way to automate organising them in a more elegant fashion? Launchpad gets full of red Adobe icons (2 pages of them), especially if you have Master Collection installed. Please put them all in one folder or something.

    Thanks for your input here. We will look into it to see if there is an improvement we can make in the future.
    Pattie

  • Too many values when trying insert records by bulk collect

    Hi
    Can anyone advise on the bulk collect error, please?
    Following is my code, where I am getting the "too many values" error...
    TYPE p_empid_type IS TABLE OF emp%ROWTYPE;
          v_empid               p_empid_type;
       BEGIN
          SELECT DISTINCT emp_id , 'ABC'
          BULK COLLECT INTO v_empid
                     FROM emp
                    WHERE empid IN (SELECT ord_id
                                       FROM table_x
                                      WHERE column_x = 'ABC');
          FORALL i IN v_empid.FIRST .. v_empid.LAST
             INSERT INTO my_table
                  VALUES v_empid(i);
          COMMIT;
    PL/SQL: ORA-00913: too many values in line - BULK COLLECT INTO v_empid

    Hello, since you're SELECTing a constant string, why not:
    TYPE p_empid_type IS TABLE OF INTEGER;
          v_empid               p_empid_type;
       BEGIN
          SELECT DISTINCT emp_id
          BULK COLLECT INTO v_empid
                     FROM emp
                    WHERE empid IN (SELECT ord_id
                                       FROM table_x
                                      WHERE column_x = 'ABC');
      FORALL i IN v_empid.FIRST .. v_empid.LAST
         INSERT INTO my_table
              VALUES (v_empid(i), 'ABC');
    Edit - untested: may not work
    This would be the best BULK COLLECT of all:
    INSERT /*+ APPEND */ INTO my_table
    SELECT DISTINCT emp_id, 'ABC'
      FROM emp
     WHERE empid IN (SELECT ord_id
                       FROM table_x
                      WHERE column_x = 'ABC');
    COMMIT;

  • Cannot create a calendar collection because there are too many already present in

    My OS X Server (Yosemite) error log is throwing this error:
    2014-10-22 00:03:07+0800 [-] [caldav-2]  [-] [twistedcaldav.storebridge.CalendarCollectionResource#error] Cannot create a calendar collection because there are too many already present in <twistedcaldav.directory.calendar.DirectoryCalendarHomeResource object at 0x10743a750>
    ...when I attempt to add more than 50 lists in the Reminders app. I had a similar issue in Mavericks. Is this "50" number available to modify in a config file somewhere?
    My plan is to manage GTD-type projects in my Reminders apps (on iOS and OS X), but this limit is keeping me from creating a list for EVERY project I have.

    Woohoo! I set the integer to "500" and I now have over 200 lists added, and syncing, to my Reminders app and related tools.
    Here's exactly what I did after reading the response from Linc Davis:
    1. Stop the Calendar service.
    2. Create /Library/Server/Calendar and Contacts/Config/caldav-user.plist
    3. Edit the contents of "caldav-user.plist", setting your desired integer value:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
         <dict>
           <key>MaxCollectionsPerHome</key>    
           <integer>500</integer>
         </dict>
    </plist>
    4. Start the Calendar service.
    Enjoy your many more lists.
    Note that a file created in a place other than ~/ may have permission issues. To get around this, I made a copy of /Library/Server/Calendar and Contacts/caldavd-system.plist, then renamed and edited that copy with nano in Terminal.

  • TOO many OPEN CURSORS during loop of INSERT's

    Running ODP.NET beta 2 (can't move up yet, but will do that soon).
    I don't think it is related to ODP itself, but rather to how .NET works with cursors. We have a for/next loop that executes INSERT INTO xxx VALUES (:a,:b,:c) statements. When monitoring v$sysstat (current open cursors) we see the count rising: 1 INSERT = 1 cursor. If we subsequently try to perform another action, we get "max cursors exceeded". We already set open_cursors = 1000, but the number of inserts can be very high. Is there a way to release these cursors? (We already call oDataAdaptor.Dispose and oCmd.Dispose, but this does not help.)
    Is it normal that each INSERT has its own cursor? They all have the same hash value in v$open_cursor. They seem to be released after a while, especially when moving to another ASP.NET page, but it's not clear when that happens, or whether it is possible to force the release of the (implicit?) cursors sooner.
    Below is a snippet of the code. I unrolled a couple of function calls into the code, so this is just an example; I'm not sure it will run without errors like this, but the idea should be clear (the code looks rather complex for what it does, but the unrolled functions make the code more generic and we have a database-independent data layer):
    Try
        ' Set the base INSERT statement
        lBaseSql = _
            "INSERT INTO atable(col1,col2,col3) " & _
            "VALUES(:col1,:col2,:col3)"
        ' Initialize a transaction
        lTransaction = oConnection.BeginTransaction()
        ' Create the parameter collection, containing for each
        ' row in the list the arguments
        For Each lDataRow In aList.Rows
            lOracleParameters = New OracleParameterCollection()
            lOracleParameter = New OracleParameter("col1", OracleDbType.Varchar2, _
                CType(aCol1, Object))
            lOracleParameters.Add(lOracleParameter)
            lOracleParameter = New OracleParameter("col2", OracleDbType.Varchar2, _
                CType(lDataRow.Item("col2"), Object))
            lOracleParameters.Add(lOracleParameter)
            lOracleParameter = New OracleParameter("col3", OracleDbType.Int32, _
                CType(lDataRow.Item("col3"), Object))
            lOracleParameters.Add(lOracleParameter)
            ' Execute the statement;
            ' if the execution fails because the row already exists,
            ' the insert should be considered successful.
            Try
                Dim aCommand As New OracleCommand()
                Dim retval As Integer
                ' Associate the connection with the command
                aCommand.Connection = oConnection
                ' Set the command text (stored procedure name or SQL statement)
                aCommand.CommandText = lBaseSql
                ' Set the command type
                aCommand.CommandType = CommandType.Text
                ' Attach the command parameters if they are provided
                If Not (lOracleParameters Is Nothing) Then
                    Dim lParameter As OracleParameter
                    For Each lParameter In lOracleParameters
                        ' Check for derived output value with no value assigned
                        If lParameter.Direction = ParameterDirection.InputOutput _
                                And lParameter.Value Is Nothing Then
                            lParameter.Value = DBNull.Value
                        End If
                        aCommand.Parameters.Add(lParameter)
                    Next lParameter
                End If
                ' Finally, execute the command
                retval = aCommand.ExecuteNonQuery()
                ' Detach the parameters from the command object
                ' so they can be used again...
                aCommand.Parameters.Clear()
                ' ...and dispose the command so its cursor can be released
                aCommand.Dispose()
            Catch ex As Exception
                Dim lErrorMsg As String
                lErrorMsg = ex.ToString
                If Not lTransaction Is Nothing Then
                    lTransaction.Rollback()
                End If
            End Try
        Next
        lTransaction.Commit()
    Catch ex As Exception
        lTransaction.Rollback()
        Throw New DLDataException(aConnection, ex)
    End Try

    I have run into this problem as well. To my mind Phillip's solution will work, but it seems completely unnecessary; this is work the provider itself should be managing.
    I've done extensive testing with both ODP and OracleClient. Here is one of the scenarios: in a tight loop of 10,000 records, each of which is either inserted or updated via a stored procedure call, the ODP provider throws the "too many cursors" error at around the 800th iteration, with over 300 cursors open. The exact same code with OracleClient as the provider never throws an error and opens 40+ cursors during execution.
    The application I have updates an Oracle8i database from a DB2 database. There are over 30 tables being updated in near real time. Reusing the command object is not an option, and adding all the code Phillip did for each call seems highly unnecessary. I say Oracle needs to fix this problem. As much as I hate to say it, the Microsoft provider seems superior at this point.

  • Toshiba DT01ACA050 too many bad sectors on first 5 months

    Hi Good day,
    I bought a Toshiba internal drive, 500 GB (sealed), from my friend, but after weird behavior on my PC I found out that it has too many bad sectors, as detected by HD Tune Pro and HDSentinel. He insists that the drive is in good condition because it was sealed, so he won't cover it under a personal warranty, and says I must be the one to RMA it. But I don't know how: I live in the Philippines and have no experience RMA'ing a hard drive yet. It's also weird that it reports a different product model (Hitachi) instead of the Toshiba DT model printed on the drive's cover.
    Win7 32bit
    foxconn h55 
    core - i3
    tru rated power supply 500w
    other hdd wd 500gb
    *Additional info 
    Hard Disk Summary
    Hard Disk Number,0
    Interface,"S-ATA Gen3, 6 Gbps"
    Disk Controller,"Standard Dual Channel PCI IDE Controller (ATA) [VEN: 8086, DEV: 3B20]"
    Disk Location,"Channel 1, Target 0, Lun 0, Device: 0"
    Hard Disk Model ID,Hitachi HDS721050DLE630
    Firmware Revision,MS1OA650
    Hard Disk Serial Number,MSK423Y20Y68LC
    Total Size,476937 MB
    Power State,Active
    Logical Drive(s)
    Logical Drive,H: [MUSIC-MOVIES-BACKUP]
    Logical Drive,H: [MUSIC-MOVIES-BACKUP]
    ATA Information
    Hard Disk Cylinders,969021
    Hard Disk Heads,16
    Hard Disk Sectors,63
    ATA Revision,ATA8-ACS version 4
    Transport Version,SATA Rev 2.6
    Total Sectors,122096646
    Bytes Per Sector,4096 [Advanced Format]
    Buffer Size,23652 KB
    Multiple Sectors,16
    Error Correction Bytes,56
    Unformatted Capacity,476940 MB
    Maximum PIO Mode,4
    Maximum Multiword DMA Mode,2
    Maximum UDMA Mode,6 Gbps (6)
    Active UDMA Mode,6 Gbps (5)
    Minimum multiword DMA Transfer Time,120 ns
    Recommended Multiword DMA Transfer Time,120 ns
    Minimum PIO Transfer Time Without IORDY,120 ns
    Minimum PIO Transfer Time With IORDY,120 ns
    ATA Control Byte,Valid
    ATA Checksum Value,Valid
    Acoustic Management Configuration
    Acoustic Management,Not supported
    Acoustic Management,Disabled
    Current Acoustic Level,Default (00h)
    Recommended Acoustic Level,Default (00h)
    ATA Features
    Read Ahead Buffer,"Supported, Enabled"
    DMA,Supported
    Ultra DMA,Supported
    S.M.A.R.T.,Supported
    Power Management,Supported
    Write Cache,Supported
    Host Protected Area,Supported
    Advanced Power Management,"Supported, Disabled"
    Extended Power Management,"Supported, Enabled"
    Power Up In Standby,Supported
    48-bit LBA Addressing,Supported
    Device Configuration Overlay,Supported
    IORDY Support,Supported
    Read/Write DMA Queue,Not supported
    NOP Command,Supported
    Trusted Computing,Not supported
    64-bit World Wide ID,0050A3CCCD7F5346
    Streaming,Supported
    Media Card Pass Through,Not supported
    General Purpose Logging,Supported
    Error Logging,Supported
    CFA Feature Set,Not supported
    CFast Device,Not supported
    Long Physical Sectors (8),Supported
    Long Logical Sectors,Not supported
    Write-Read-Verify,Not supported
    NV Cache Feature,Not supported
    NV Cache Power Mode,Not supported
    NV Cache Size,Not supported
    Free-fall Control,Not supported
    Free-fall Control Sensitivity,Not supported
    Nominal Media Rotation Rate,7200 RPM
    SSD Features
    Data Set Management,Not supported
    TRIM Command,Not supported
    Deterministic Read After TRIM,Not supported
    S.M.A.R.T. Details
    Off-line Data Collection Status,Successfully Completed
    Self Test Execution Status,Successfully Completed
    Total Time To Complete Off-line Data Collection,4444 seconds
    Execute Off-line Immediate,Supported
    Abort/restart Off-line By Host,Not supported
    Off-line Read Scanning,Supported
    Short Self-test,Supported
    Extended Self-test,Supported
    Conveyance Self-test,Not supported
    Selective Self-Test,Supported
    Save Data Before/After Power Saving Mode,Supported
    Enable/Disable Attribute Autosave,Supported
    Error Logging Capability,Supported
    Short Self-test Estimated Time,1 minutes
    Extended Self-test Estimated Time,74 minutes
    Last Short Self-test Result,Never Started
    Last Short Self-test Date,Never Started
    Last Extended Self-test Result,Never Started
    Last Extended Self-test Date,Never Started
    Security Mode
    Security Mode,Supported
    Security Erase,Supported
    Security Erase Time,98 minutes
    Security Enhanced Erase Feature,Not supported
    Security Enhanced Erase Time,Not supported
    Security Enabled,No
    Security Locked,No
    Security Frozen,Yes
    Security Counter Expired,No
    Security Level,High
    Serial ATA Features
    S-ATA Compliance,Yes
    S-ATA I Signaling Speed (1.5 Gps),Supported
    S-ATA II Signaling Speed (3 Gps),Supported
    S-ATA Gen3 Signaling Speed (6 Gps),Supported
    Receipt Of Power Management Requests From Host,Supported
    PHY Event Counters,Supported
    Non-Zero Buffer Offsets In DMA Setup FIS,"Supported, Disabled"
    DMA Setup Auto-Activate Optimization,"Supported, Disabled"
    Device Initiating Interface Power Management,"Supported, Disabled"
    In-Order Data Delivery,"Supported, Disabled"
    Asynchronous Notification,Not supported
    Software Settings Preservation,"Supported, Enabled"
    Native Command Queuing (NCQ),Supported
    Queue Length,32
    Disk Information
    Disk Family,Deskstar 7K1000.D
    Form Factor,"3.5"" "
    Capacity,"500 GB (500 x 1,000,000,000 bytes)"
    Number Of Disks,1
    Number Of Heads,1
    Rotational Speed,7200 RPM
    Rotation Time,8.33 ms
    Average Rotational Latency,4.17 ms
    Disk Interface,Serial-ATA/600
    Buffer-Host Max. Rate,600 MB/seconds
    Buffer Size,32768 KB
    Drive Ready Time (typical),? seconds
    Average Seek Time,? ms
    Track To Track Seek Time,? ms
    Full Stroke Seek Time,? ms
    Width,101.6 mm (4.0 inch)
    Depth,147.0 mm (5.8 inch)
    Height,26.1 mm (1.0 inch)
    Weight,450 grams (1.0 pounds)
    Required power for spinup,"3,300 mA"
    Power required (seek),7.0 W
    Power required (idle),5.0 W
    Power required (standby),2.0 W
    Manufacturer,Hitachi Global Storage Technologies
    Manufacturer Website,http://www.hgst.com

    Hi! Since no one is replying: if you're getting bad sectors, it's time to save your data and replace your HD. It's only a matter of time before your HD fails.
    Dokie!!
    PS: I'm feeling a little crazy tonight. Nice friend you have (not).
    I Love my Satellite L775D-S7222 Laptop. Some days you're the windshield, Some days you're the bug. The Computer world is crazy. If you have answers to computer problems, pass them forward.

  • Getting Too many objects match the primary key oracle.jbo.Key...

    Hi,
    I am working on JDev version 11.1.1.2.0. On one of my pages I am getting an exception like "*Too many objects match the primary key oracle.jbo.Key......*".
    I have an Items EO and a child EO. I am using a view criteria on the Items VO, dropped onto the page as a query panel (search criteria). At the top right I display the results in a table. Below it I have a master form and child table where users can add/edit the model and its child values. When the user clicks the Save button, I call a BPEL process (WSDL) which inserts into 3 other systems and returns me a message. After clicking Save and displaying the message, if I search for the same model (the one I just created), it throws the error above. If I search for a different model, it doesn't.
    For example, I created "TestModel"; if I type the letter "T" in the input box and search, it throws the error. If I search for other models that don't start with the letter "T", it works fine.
    Any idea what the reason may be? If, instead of calling the services from the Save button, I drop a "Commit" button and test, it works fine.
    This is really critical for my project. It would be great if someone can help me on this.
    Thanks
    MC

    JBO-27102: DeadViewRowAccessException
    Reason: Trying to access a ViewRow which is part of an obsolete/invalid collection. This can happen if a reference to the ViewRow is held by some business logic while the containing view object was removed.
    Solution: Find the referenced ViewRow again, either by re-querying or by using the findByKey methods, to get a valid reference to the ViewRow. Also, instead of create(), can you try createInsert() or createAndInitRow()?
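    (A sketch of the re-query approach, assuming a helper invoked after the BPEL call returns; the class, method and key layout are hypothetical, not from the original code:)

    import oracle.jbo.Key;
    import oracle.jbo.Row;
    import oracle.jbo.ViewObject;

    public class ItemsSaveHelper {
        // Call this after the external BPEL insert: the VO's row cache is
        // rebuilt, and the new row is located by key instead of through a
        // ViewRow reference that predates the insert.
        public static void refreshAfterSave(ViewObject itemsVo, Object modelId) {
            itemsVo.executeQuery();                      // re-query, dropping obsolete rows
            Key key = new Key(new Object[] { modelId }); // the model's primary key
            Row[] found = itemsVo.findByKey(key, 1);     // at most one match expected
            if (found.length > 0) {
                itemsVo.setCurrentRow(found[0]);
            }
        }
    }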

  • I'm still using Snow Leopard because I heard too many horror stories about upgrading to Lion. But I can't upgrade Safari unless I upgrade the OS. Is Mavericks any better than Lion?

    I'm still using the Snow Leopard OS because I heard too many horror stories about people upgrading their OS to Lion. But I can't upgrade Safari anymore unless I upgrade the OS. I constantly get messages telling me I need to upgrade my web browser, and I have increasing problems viewing pages correctly (like the L.A. Times and NY Times) because Safari needs upgrading.
    I can't really tell what the advantages of upgrading to OS X Mavericks or Yosemite are, because it seems most are intended for mobile apps, and I only use my desktop.
    My last upgrade, to Snow Leopard, wiped out 2/3 of my iPhoto collection. No idea why, and no fix. Macs used to be the epitome of compatibility for upgrades; not anymore.
    What are my options? What issues would I encounter if I upgrade the OS to Mavericks or to Yosemite? What are the likely problems, advantages and disadvantages?
    Thanks for any help sent my way.

    If you do want to upgrade.
    Check that your computer is compatible with Mountain Lion/Mavericks/Yosemite.
    To check the model number hold down the option/alt key, go to the Apple menu and select System Information.
    iMac (Mid 2007 or newer) model number 7,1 or higher
    Your Mac needs:
    OS X v10.6.8 or OS X Lion already installed
    2 GB or more of memory (More is better - 4 GB minimum seems to be the consensus)
    8 GB or more of available space
    Check to make sure your applications are compatible. PowerPC applications are no longer supported after 10.6.      
    Application Compatibility
    Applications Compatibility (2)
    Do a backup before installing.
    One option is to create a new partition (~30-50 GB), install Mavericks, and 'test drive' it. If you don't like it, you can then remove the partition. Do a backup before you do anything. This way, if you don't like it, you won't have to go through the revert process.

  • Select from (too many) tables

    Hi all,
    I'm a proud Oracle Apex developer. We have developed an Interactive Report that is generated from many joined tables in a remote system. I've read that to improve performance we can do the following:
    1) Create a temporary table on our system that stores the app_user ID and the columns resulting from the query
    2) Create a procedure that does roughly:
    declare
       param1 varchar2(4000) := :PXX_item;   -- types are placeholders
       param2 varchar2(4000) := :PXY_item;
       param3 varchar2(4000) := v('APP_USER');
    begin
       insert into <our_table>
          select param3, <query from remote system>;
       commit;
    end;
    3) Redirect to a query page where the IR reads from this temp table.
    An "Exit" button runs a procedure that purges that user's data (delete from temp where user = v('APP_USER')), so the temp table only holds the necessary data.
    Do you see any inconvenience? Application will be used from about 500 users, about 50 concurrent users at a time.
    Thank you!

    1) "We don't have control of the source system, we can only perform queries on it."
    I was referring to a materialized view on the system where Apex is installed, not on the source database.
    2) "There are many tables involved."
    I don't understand why this is a problem. Too much data I can see, but too many tables... not so much.
    3) "Data has to be in real time, with no delay."
    This would be a problem for an MV or collections. The collections would store the data as of the initial query; any IRs using the collection after the fact would be using stale data. If you absolutely have to have the data as of right now every time, then the full query must run on the remote system every time. Tuning that query is the only option to make it faster.
    4) "There are many transactions on the source tables (they are the core of the source system), so the MV could not be refreshed fast enough."
    It probably could be, with fast refresh enabled, but that is not necessarily practical. As I indicated in 3, you have painted yourself into a corner here: you have stated a need for a real-time query, and that eliminates a number of query-once, use-many performance solutions.

  • Insert ORA-00913: too many values  --  urgent help

    Hi there,
    it's pretty urgent; I'm stuck.
    To avoid the undo snapshot error, I am using the procedure below to migrate a huge volume of table data into new tables in smaller chunks. The code works well when the tables have few columns, but when a table has more than 30 columns it fails with PL/SQL: ORA-00913: too many values.
    CREATE OR REPLACE PROCEDURE migration AS
       TYPE array_tp IS TABLE OF tranproc%ROWTYPE;
       l_array array_tp;
       CURSOR c IS
          select * from tranproc p where trunc(date)<=trunc(sysdate)-180;
       l_cnt1 NUMBER :=0;
       l_cnt2 NUMBER :=0;
       l_cnt3 NUMBER :=0;
    BEGIN
       OPEN c;
       LOOP
          FETCH c BULK COLLECT INTO l_array LIMIT 10000;
          EXIT WHEN l_array.COUNT = 0;
          l_cnt1 := c%ROWCOUNT;
          FORALL i IN 1 .. l_array.COUNT
             INSERT INTO TMP_Transpoc VALUES l_array(i);
          l_cnt2 := l_cnt2 + SQL%rowcount;
       END LOOP;
       l_cnt3 := c%ROWCOUNT;
       CLOSE c;
       END;
    16    22    PL/SQL: ORA-00913: too many values
    It's failing at line 16: INSERT INTO TMP_Transpoc VALUES l_array(i);
    The table tranproc has around 80 columns.
    I am not a PL/SQL expert; kindly advise how to resolve this. I am fine with an alternative approach, I just need smaller chunked commits.

    Actually, direct path does not necessarily require NOLOGGING. If you successfully invoke direct path (look for LOAD AS SELECT or DIRECT LOAD INTO in the execution plan), then you are inserting into blocks above the high-water mark (HWM) and there is virtually no UNDO generated for the changes in the table segment.
    However, the index maintenance (if any) will require UNDO, and it may be a lot. If this is going into a new table, you should be able to create the index after the table is populated.
    Also beware of the NOLOGGING advice: in many cases an individual SQL statement cannot disable logging. And if you do bypass REDO logging, be very sure you understand the consequences for your ability to recover.

  • ORA-00939: too many arguments for function using Timezones in xquery

    Running on Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    CREATE TABLE "ORT"."SAMPLE"
       ( "THEDATE" DATE,
    "THETIMESTAMP" TIMESTAMP (6),
    "STARTTIMESTAMP" TIMESTAMP (6) WITH LOCAL TIME ZONE,
    "ENDTIMESTAMP" TIMESTAMP (6) WITH LOCAL TIME ZONE
    REM INSERTING into SAMPLE
    SET DEFINE OFF;
    Insert into SAMPLE (THEDATE,THETIMESTAMP,STARTTIMESTAMP,ENDTIMESTAMP) values (to_date('13-06-10 14:07:52','RR-MM-DD HH24:MI:SS'),to_timestamp('13-06-19 14:27:52.000000000','RR-MM-DD HH24:MI:SS.FF'),to_timestamp('13-06-19 10:34:04.586000000','RR-MM-DD HH24:MI:SS.FF'),to_timestamp('13-06-19 15:05:38.805000000','RR-MM-DD HH24:MI:SS.FF'));
    The following query raises ORA-00939:
    SELECT XMLQUERY('for $v in fn:collection("oradb:/ORT/SAMPLE")
    let $date1 := $v/ROW/STARTTIMESTAMP/text()
    let $date2 := $v/ROW/ENDTIMESTAMP/text()
    return if ($date1 < $date2) then (concat($date1," date is less than ", $date2)) else (concat($date1," date is greater than ", $date2)) ' returning content) from dual;
    ORA-00939: too many arguments for function
    00939. 00000 -  "too many arguments for function"
    *Cause: 
    *Action:
    any ideas?

    Hi Odie,
    I'm not too familiar with XQuery rewrite, but I suspect that by providing this hint, Oracle cannot optimize the query at all... I tried this hint in my actual query and it basically hangs... I will attempt to open an SR with Oracle.
    The other option I'm looking at is checking the date ranges outside of XQuery, and using a mix of XMLTable, XMLExists and the SQL XML functions to reconstruct my XML.

  • PL/SQL: ORA-00913: too many values

    I can't figure out why I'm getting an "ORA-00913: too many values" error.
    This example works fine:
    DECLARE
    TYPE session_type IS TABLE OF v$session%ROWTYPE ;
    blocking_sessions session_type;
    BEGIN
    select * bulk collect into blocking_sessions from v$session where blocking_session is not null;
    END;
    But in this example I'm getting an ORA-00913. Can anybody tell me what I'm doing wrong?
    DECLARE
    TYPE session_type IS TABLE OF v$session%ROWTYPE ;
    blocking_sessions session_type;
    BEGIN
    select distinct blocking_session bulk collect into blocking_sessions from v$session where blocking_session is not null;
    END;
    select distinct blocking_session bulk collect into blocking_sessions from v$session where blocking_session is not null;
    ERROR at line 7:
    ORA-06550: line 7, column 70:
    PL/SQL: ORA-00913: too many values
    ORA-06550: line 7, column 1:
    PL/SQL: SQL Statement ignored

    OK this one works also:
    DECLARE
    TYPE session_type IS TABLE OF NUMBER ;
    blocking_sessions session_type;
    BEGIN
    select distinct blocking_session bulk collect into blocking_sessions from v$session where blocking_session is not null;
    END;
    But what if I'm selecting about 20 columns of a table with 30 columns? Do I have to declare every single column?

  • Livecache issue LC10 BY0 Too many users (task limit)

    Hi,
    we have 2 application servers with one CI/DB; maxusertasks is 500.
    We ran into a liveCache problem, with the following error messages (DB Analyzer):
    LC10 BY0 > Too many users (task limit)
    User task 281 blocked in state 'Prep-End(230)' since 902s, DB procedure: SAPAPO_CLEANUP, application pid
    User task 281 blocked in state 'Prep-End(230)' since 902s, DB procedure: SIM_SIMSESSION_CONTROL, pid 31685
    ...User task 281 blocked in state 'Prep-End(230)' since 902s, DB procedure: APS_ACT_SCHEDULE, pid 3168
    Before that we had the warning: UTK5 is running 1804s, CPU 50% and CPU 100% (LC).
    We have checked the knldiag/knldiagerr files but are unable to find the root cause of the problem.
    How could we analyze this issue and find what triggers it?
    (trace: (SAPAPO/OM01))
    Thanks and Best regards,
    any help would be rewarded.
    Best regards,
    John

    Hello John,
    We need additional information: the liveCache version, the OS of the liveCache server, and more details about the situation when the issue occurred. Run "x_cons <SID> sh all 10 10 > task_info.txt" when the issue occurs to collect more information.
    Check whether the savepoint task is active: "x_cons <SID> show active".
    Did the issue occur during high load on the liveCache?
    Please review SAP note 1391322, where a sporadic standstill in high-load situations is also described; that issue is already fixed in the newer liveCache releases.
    This situation occurred sporadically. It would be helpful to log in to your system for further analysis of the reported issue, to see whether your system ran into this known problem: the savepoint was hanging, and because of that a large number of user tasks sat for hours in the state 'Prep-End' until the liveCache restart.
    < This could be the reason for "liveCache issue LC10 BY0 > Too many users (task limit)": you reached the MAXUSERTASKS limit, and increasing that liveCache parameter will not solve the issue. >
    You checked/saw the Database Analyzer log:
    "User task 281 blocked in state 'Prep-End(230)' since 902s, DB procedure: SAPAPO_CLEANUP, application pid
    User task 281 blocked in state 'Prep-End(230)' since 902s, DB procedure: SIM_SIMSESSION_CONTROL, pid 31685
    ...User task 281 blocked in state 'Prep-End(230)' since 902s, DB procedure: APS_ACT_SCHEDULE, pid 3168
    You are an SAP customer: as already recommended by Ivan, create an SAP message for component "BC-DB-LVC" and get SAP support.
    Thank you and best regards, Natalia Khlopina

  • ORA-31186: Document contains too many nodes

    Hi all,
    DB version:
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - 64bi
    PL/SQL Release 10.1.0.5.0 - Production
    CORE     10.1.0.5.0     Production
    TNS for HPUX: Version 10.1.0.5.0 - Production
    NLSRTL Version 10.1.0.5.0 - Production
    I am getting the "ORA-31186: Document contains too many nodes" error when there are more than 70,000 listitem nodes.
    My code is like below.
      l_xmldoc := '<listitems>
      <listitem><homePhone>6666446666</homePhone><mobile>9988776655</mobile><emailaddr><![CDATA[[email protected]]]></emailaddr><deviceid>1</deviceid></listitem>
      <listitem><homePhone>6666446666</homePhone><mobile>9988776656</mobile><emailaddr><![CDATA[[email protected]]]></emailaddr><deviceid>1</deviceid></listitem>
      <listitem><homePhone>6666446666</homePhone><mobile>9988776657</mobile><emailaddr><![CDATA[[email protected]]]></emailaddr><deviceid>1</deviceid></listitem> 
    </listitems>';
    SELECT EXTRACT(l_xmldoc, '/listitems/listitem') INTO l_xmldoc FROM DUAL; -- error raised here
      SELECT EXTRACTVALUE( VALUE(t), '/listitem/emailaddr')
            ,EXTRACTVALUE( VALUE(t), '/listitem/mobile')
            ,EXTRACTVALUE( VALUE(t), '/listitem/homePhone')
            ,EXTRACTVALUE( VALUE(t), '/listitem/deviceid')
        BULK COLLECT INTO t_table
        FROM TABLE(XMLSEQUENCE(l_xmldoc)) t;
    Please help me understand why this error occurs and how to resolve it.
    Thanks,
    Ram

    >
    You can find a little bit of extra info in the error-search section on OTN regarding the 10gR2 docs:
    * ORA-31186: Document contains too many nodes
    Cause: Unable to load the document because it has exceeded the maximum allocated number of DOM nodes.
    Action: Reduce the size of the document.
    >
    Check this discussion -- ORA-24817 and ORA-31186 (undocumented errors)
    Somebody suggested that increasing the shared_pool_size and the java_pool_size will fix this.
    Please check,
