Slow Concurrency w/ HASH on Cluster

Hi everyone,
I've been setting up a data management system using Berkeley DB and the Python bindings (bsddb3) to run parallel tasks over ~80 million RNA sequences, which have properties similar to the "Target RNA" schema below.
Target RNA
length, sequence, expression, mismatchPositions, etc.
Problem:
*10 parallel processes that access separate records ("rows") and write to the database run very slowly.* Each job runs slower than the previous one, which suggests some sort of locking issue. For instance, to test updating the length of 10 million sequences, I run 10 separate jobs that each access a disjoint range of records (job n covers keys n*1,000,000 through (n+1)*1,000,000 - 1), like so:

    # n is the job number
    for i in range(n * 1000000, (n + 1) * 1000000):
        db[str(i)] = str(int(db[str(i)]) + 1)

The runtimes (in seconds) for the jobs come out something like 30, 50, 60, 70, and so on. Running a single job that updates all 10 million sequences takes ~120 s. The parallel jobs even slow down the cluster, though the database is only 200 MB in total. Running 100 read-only jobs works fine. I'd like to be able to run 30+ jobs concurrently, so not being able to run even 10 worries me.
Questions:
1. Each process accesses different records, so why would the speed of the nth job depend on the previous jobs? Doesn't bdb lock individual records rather than entire DBs? Would manually using transactions speed this up?
2. In my posts elsewhere, Redis was suggested for running 100 parallel jobs all accessing the same in-memory file. Is this something Berkeley DB cannot do because its records aren't kept in RAM the way Redis's are?
3. How can I tell which part of the system is causing the problem? Is it a bdb problem or a network/filesystem problem? I'm fairly ignorant about filesystems and networks and don't know how to troubleshoot this type of problem, so any tips would be appreciated.
4. Are there flags I can pass to allow easier concurrency, or DB parameters I can change (such as page size) to make it more efficient?
5. If the problem is due to my network/filesystem setup, what kind of setup is better suited for this type of concurrency?
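On the transactions idea from question 1, here is a minimal sketch of the batching pattern I mean: group many updates into one explicit commit rather than committing per record. Since bsddb3 may not be installed on a given machine, this sketch uses the stdlib sqlite3 module as a stand-in transactional store; with bsddb3 the analogous calls would be env.txn_begin(), db.put(key, value, txn=txn), and txn.commit().

```python
# Sketch only: batching writes so the store flushes once per `batch`
# of updates instead of once per record.  sqlite3 stands in for
# Berkeley DB here; the bsddb3 equivalent uses env.txn_begin()/commit().
import sqlite3

def bump_range(conn, start, stop, batch=1000):
    """Increment the value for each key in [start, stop), committing
    once per `batch` updates instead of once per update."""
    cur = conn.cursor()
    for n, i in enumerate(range(start, stop), 1):
        cur.execute("UPDATE kv SET value = value + 1 WHERE key = ?",
                    (str(i),))
        if n % batch == 0:
            conn.commit()      # one flush per batch, not per record
    conn.commit()              # flush the final partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value INTEGER)")
conn.executemany("INSERT INTO kv VALUES (?, 0)",
                 ((str(i),) for i in range(100)))
conn.commit()
bump_range(conn, 0, 100, batch=25)
```

Whether the same batching pays off under BDB's page-level locking over NFS is exactly what I'm unsure about.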
System Setup/Stats
- bdb 4.8 with bsddb3 bindings
- 4 nodes (Gigabit Ethernet) sharing the database over NFS; 64 cores and 256 GB RAM in total
- "dd" throughput: 60 MB/s
- cache: 1 GB for each DB connection
- each DB uses the HASH access method
If any more info is needed I'll update ASAP.
Thanks.
Edited by: 852120 on Apr 13, 2011 1:57 PM

Hello,
Have you tried the BTREE access method? I ran a performance test a while back, and BTREE performed much better than HASH under high concurrency (though I never figured out why).
--Michi
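One practical detail worth knowing if you do try BTREE (my suggestion, not something measured in this thread): BTREE compares keys as byte strings, so the str(i) keys from your example sort lexicographically rather than numerically, scattering numerically adjacent records. Zero-padding the keys restores numeric order, which keeps neighbouring records on neighbouring pages:

```python
# BTREE orders keys lexicographically, so plain str(i) keys interleave
# numerically distant records; zero-padded keys preserve numeric order.
# (A suggestion of mine, not something benchmarked in the thread.)
ids = [1, 2, 10, 20, 100]
plain = sorted(str(i) for i in ids)
padded = sorted("%08d" % i for i in ids)

# Lexicographic order scatters the plain keys ("10" sorts before "2")...
assert plain == ["1", "10", "100", "2", "20"]
# ...while padded keys stay in numeric order.
assert [int(k) for k in padded] == ids
```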

Similar Messages

  • Application Instance Slow & Concurrent Programme Queue

    My Oracle application instance (R12) is slow; what are all the things that need to be checked?
    FYI, the OS is Linux.
    Also, how can I queue concurrent programmes after submitting their requests?
    FYI, I am not asking about a particular concurrent programme's priority. I want all the pending concurrent programmes to be queued as per my requirement.

    Hi;
    How often are you purging CM requests?
    How to Run the Purge Concurrent Request and/or Manager Data Program and Which Tables Does it Purge? [ID 154850.1]
    How to Optimize the Process of Running Purge Concurrent Request and/or Manager Data (FNDCPPUR) [ID 92333.1]
    Purge Concurrent Request and/or Manager Data Slow Performance [ID 789698.1]
    Regards
    Helios

  • Slow message delivery in HA Cluster

    Hello,
    I have set up a test MQ cluster (v4.1) with two brokers, A and B. The cluster is working fine; however, when I connect a message consumer to broker A and a producer to broker B, the produced messages are delivered one by one at roughly 5-second intervals. Even when I send 10 messages at once, the consumer receives one message every 5 seconds, which is horribly slow. Is this normal? What could be the cause of this problem?
    When I connect both consumer and producer to the same broker (either A or B), message delivery is fine, with messages arriving in about 40 ms.
    Thank you,
    Jan.

    What is your cluster configuration? How big are your messages? Are both brokers running on the same machine?
    What is your connection configuration for both consumer and producer?
    Tom

  • Java app w/ hibernate is really slow when started as a cluster resource

    If I run the app by hand, it takes about 30 seconds to start. If I run it as a cluster resource, it takes more than 5 minutes to start, so the cluster gives up and the resource fails to start (i.e. I get the "(C748634) Resource group mc-rg failed to start on chosen node and might fail over to other node(s)" error).
    Here's a snippet of the start script:
    # CLASSPATH already defined, etc.
    java -server -Xmx256m -classpath ${CLASSPATH} path.class.start &
    Here are my cluster commands:
    clrg create -n node1,node2 mc-rg
    clrs create -d -g mc-rg -t SUNW.gds -p "Start_command=/bin/start.sh" -p "Network_Aware=false" mc-rs
    clrs set -p "Probe_command=/bin/probe.sh" mc-rs
    clrs enable mc-rs
    clrg manage mc-rg
    clrg online mc-rg
    clrs enable -g mc-rg +
    Any ideas? Thanks.
    Andy

    SUNW.gds implements "wait for online" during the start method: while GDS is invoking Start_command, it also periodically runs the Probe_command. The Probe_command determines whether the service is online (and therefore whether Start_command was successful).
    In your case this means that, for some reason, "/bin/probe.sh" never indicates through return code 0 that the service is actually running, so the Start_command fails after reaching its START_TIMEOUT (default = 300 seconds).
    I recommend verifying that your Probe_command really returns 0 when the service is running.
    For more information about GDS (and a coding template), see
    http://opensolaris.org/os/community/ha-clusters/ohac/GDS-template/
    Regards
    Thorsten

  • WebLogicSession and X-WebLogic-Cluster-Hash cookies

    Hello, apologies up front if this has been posted before; I haven't been able to find any reference to it yet. First, how do I decode the WebLogicSession cookie between the HttpClusterServlet and the HTTP client to extract the IPs of the primary/backup cluster servers that have replicated my HTTP session? Second, how is the translation done between the WebLogicSession cookie (proxy<->client) and the X-WebLogic-Cluster-Hash (proxy<->cluster)?
    Thanks in advance for any responses,
    -Steven Vaughan

    I'm not exactly sure what you are asking. Are you trying to configure a hardware load balancer or something like that in front of the proxy?
    If you take a look at the cookie information (you can do so online by using safe cookies at anonymizer.com, for example) you will see the IP addresses embedded in the cookie data. Do you need to know exactly what the information/offsets are in that data? Could you explain what you are trying to accomplish?
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com
    +1.617.623.5782
    WebLogic Consulting Available
              "Steven Vaughan" <[email protected]> wrote in message
              news:[email protected]..
              >
              > Hello, apologies up front if this has been posted before, I haven't
              > been able to
              > find any reference to it yet. First, how do I decode the
              > WebLogicSession cookie
              > between the HttpClusterSerlvet and the HTTP client to extract the IPs of
              the
              > primary/backup cluster servers that have replicated my HTTP session?
              > Second,
              > how is the translation done between the WebLogicSession cookie
              > (proxy<->client)
              > and the X-WebLogic-Cluster-Hash (proxy<->cluster)?
              >
              > Thanks in advance for any responses,
              > -Steven Vaughan
              >
              

  • Concurrent program performance Issue

    Hi,
    We are currently experiencing a performance issue in one of the concurrent programs related to the HR module. The concurrent request currently takes about 3 hours to complete.
    We have obtained a trace for the concurrent program.
    Please help me analyze the cause of the performance issue from the trace file.
    Trace file below:
    BEGIN SLC_PYINF_USMONACCROH_PKG.SLC_421_HANDLE_OUTBOUND(:errbuf,:rc,:A0,:A1,
    :A2,:A3,:A4,:A5,:A6,:A7,:A8,:A9,:A10,:A11); END;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 76.08 9602.16 700828 1330818 663813 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 76.08 9602.16 700828 1330818 663813 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 3 0.00 0.00
    SQL*Net message from client 3 0.00 0.00
    PL/SQL lock timer 969 9.83 9485.16
    UPDATE HRAPPS.SLC_PYINF_USMONACCRO_STG SET PROCESS_STATUS = 2
    WHERE
    CONC_REQUEST_ID = :B2 AND SET_SEQUENCE_NUM = :B1 AND PROCESS_STATUS = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 24.83 45.67 145127 695479 602714 560730
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 24.83 45.67 145127 695479 602714 560730
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    0 UPDATE SLC_PYINF_USMONACCRO_STG (cr=684898 pr=134556 pw=0 time=44759708 us)
    1135266 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=694708 pr=124937 pw=0 time=6874212 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 15622 1.43 13.94
    db file sequential read 25578 0.52 14.30
    latch: cache buffers lru chain 3 0.00 0.00
    DELETE FROM SLC_PYINF_USMONACCRO_ARC
    WHERE
    EXTRACT_DATE<TRUNC(SYSDATE)-60
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 7.41 15.05 87598 87668 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 7.41 15.06 87598 87668 0 0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE SLC_PYINF_USMONACCRO_ARC (cr=87668 pr=87598 pw=0 time=15053606 us)
    0 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_ARC (cr=87668 pr=87598 pw=0 time=15053595 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 3 0.00 0.00
    db file scattered read 11025 0.61 13.21
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1
    call count cpu elapsed disk query current rows
    Parse 2 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 10.14 10.23 116633 123540 0 2
    total 6 10.14 10.23 116633 123540 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58317 pw=0 time=5290475 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58317 pw=0 time=1689204 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 15646 0.27 6.24
    db file sequential read 625 0.00 0.01
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1 AND
    PROCESS_STATUS = 2
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 5.20 8.32 51482 69842 0 1
    total 3 5.20 8.32 51482 69842 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=69842 pr=51482 pw=0 time=8323369 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=69842 pr=51482 pw=0 time=2811304 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 6514 0.30 6.09
    db file sequential read 114 0.00 0.02
    SELECT MAX(SET_SEQUENCE_NUM)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 5.34 6.63 58318 61770 0 1
    total 3 5.34 6.63 58318 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58318 pw=0 time=6639527 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58318 pw=0 time=2250410 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 7820 0.30 4.46
    db file sequential read 313 0.00 0.05
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B2 AND
    SET_SEQUENCE_NUM = :B1 AND PROCESS_STATUS = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 4.99 4.88 58315 61770 0 1
    total 3 4.99 4.88 58315 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58315 pw=0 time=4887337 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58315 pw=0 time=1688451 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 7824 0.00 3.02
    db file sequential read 313 0.00 0.00
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1 AND
    PROCESS_STATUS = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 4.98 4.87 58318 61770 0 1
    total 3 4.98 4.87 58318 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58318 pw=0 time=4872548 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58318 pw=0 time=1688407 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 7821 0.00 2.98
    db file sequential read 312 0.00 0.00
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1 AND
    PROCESS_STATUS = -1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 4.45 4.36 58317 61770 0 1
    total 3 4.45 4.36 58317 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58317 pw=0 time=4369473 us)
    0 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58317 pw=0 time=4369425 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 7823 0.00 2.98
    db file sequential read 312 0.00 0.00
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1 AND
    PROCESS_STATUS < 0
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 4.14 4.24 51481 61770 0 1
    total 3 4.14 4.24 51481 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=51481 pw=0 time=4243020 us)
    0 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=51481 pw=0 time=4242968 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 6537 0.06 2.90
    db file sequential read 104 0.00 0.00
    DELETE FROM SLC_PYINF_USMONACCRO_GLI_ARC
    WHERE
    EXTRACT_DATE<TRUNC(SYSDATE)-60
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.63 2.52 7681 7689 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.63 2.52 7681 7689 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE SLC_PYINF_USMONACCRO_GLI_ARC (cr=7689 pr=7681 pw=0 time=2521592 us)
    0 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_GLI_ARC (cr=7689 pr=7681 pw=0 time=2521583 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 1 0.00 0.00
    db file scattered read 976 1.00 2.36
    UPDATE HRAPPS.SLC_PYINF_USMONACCRO_GLI_STG SET PROCESS_STATUS = 2
    WHERE
    CONC_REQUEST_ID = :B1 AND PROCESS_STATUS = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 1.89 2.25 5863 16125 60963 52309
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 1.89 2.25 5863 16125 60963 52309
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    0 UPDATE SLC_PYINF_USMONACCRO_GLI_STG (cr=11787 pr=1273 pw=0 time=1332023 us)
    122679 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_GLI_STG (cr=16291 pr=5859 pw=0 time=48501241 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 745 0.01 0.76
    db file parallel read 1 0.00 0.00
    db file sequential read 5 0.00 0.00
    SELECT B.ATTRIBUTE1 ,B.ATTRIBUTE2 ,B.ATTRIBUTE3 ,T.FLEX_VALUE_MEANING ,
    T.DESCRIPTION
    FROM
    FND_FLEX_VALUES_TL T ,FND_FLEX_VALUES B WHERE B.FLEX_VALUE_ID =
    T.FLEX_VALUE_ID AND T.LANGUAGE = USERENV ('LANG') AND TRIM(UPPER
    (B.FLEX_VALUE)) = TRIM(UPPER (:B1 )) AND B.ENABLED_FLAG = 'Y' AND UPPER
    (B.VALUE_CATEGORY) = UPPER ('SLCHR_INTERFACE_CLEANUP')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 0.25 0.86 1640 3286 0 2
    total 5 0.25 0.86 1640 3286 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    2 NESTED LOOPS (cr=3286 pr=1640 pw=0 time=866461 us)
    2 TABLE ACCESS FULL FND_FLEX_VALUES (cr=3280 pr=1637 pw=0 time=848331 us)
    2 TABLE ACCESS BY INDEX ROWID FND_FLEX_VALUES_TL (cr=6 pr=3 pw=0 time=18101 us)
    2 INDEX UNIQUE SCAN FND_FLEX_VALUES_TL_U1 (cr=4 pr=2 pw=0 time=9705 us)(object id 849241)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.00 0.02
    db file scattered read 208 0.30 0.71
    SELECT PHASE_CODE, STATUS_CODE, COMPLETION_TEXT, PHASE.LOOKUP_CODE,
    STATUS.LOOKUP_CODE, PHASE.MEANING, STATUS.MEANING
    FROM
    FND_CONCURRENT_REQUESTS R, FND_CONCURRENT_PROGRAMS P, FND_LOOKUPS PHASE,
    FND_LOOKUPS STATUS WHERE PHASE.LOOKUP_TYPE = :B3 AND PHASE.LOOKUP_CODE =
    DECODE(STATUS.LOOKUP_CODE, 'H', 'I', 'S', 'I', 'U', 'I', 'M', 'I',
    R.PHASE_CODE) AND STATUS.LOOKUP_TYPE = :B2 AND STATUS.LOOKUP_CODE =
    DECODE(R.PHASE_CODE, 'P', DECODE(R.HOLD_FLAG, 'Y', 'H',
    DECODE(P.ENABLED_FLAG, 'N', 'U', DECODE(SIGN(R.REQUESTED_START_DATE -
    SYSDATE),1,'P', R.STATUS_CODE))), 'R', DECODE(R.HOLD_FLAG, 'Y', 'S',
    DECODE(R.STATUS_CODE, 'Q', 'B', 'I', 'B', R.STATUS_CODE)), R.STATUS_CODE)
    AND (R.CONCURRENT_PROGRAM_ID = P.CONCURRENT_PROGRAM_ID AND
    R.PROGRAM_APPLICATION_ID= P.APPLICATION_ID ) AND REQUEST_ID = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 971 0.25 0.16 0 0 0 0
    Fetch 971 0.53 0.65 0 13605 0 971
    total 1943 0.78 0.81 0 13605 0 971
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    971 TABLE ACCESS BY INDEX ROWID FND_LOOKUP_VALUES (cr=17489 pr=0 pw=0 time=877481 us)
    2913 NESTED LOOPS (cr=16518 pr=0 pw=0 time=1643550 us)
    971 NESTED LOOPS (cr=11663 pr=0 pw=0 time=658551 us)
    971 NESTED LOOPS (cr=5837 pr=0 pw=0 time=95374 us)
    971 TABLE ACCESS BY INDEX ROWID FND_CONCURRENT_REQUESTS (cr=2924 pr=0 pw=0 time=63054 us)
    971 INDEX UNIQUE SCAN FND_CONCURRENT_REQUESTS_U1 (cr=1953 pr=0 pw=0 time=43874 us)(object id 240792)
    971 TABLE ACCESS BY INDEX ROWID FND_CONCURRENT_PROGRAMS (cr=2913 pr=0 pw=0 time=28198 us)
    971 INDEX UNIQUE SCAN FND_CONCURRENT_PROGRAMS_U1 (cr=1942 pr=0 pw=0 time=17956 us)(object id 849182)
    971 TABLE ACCESS BY INDEX ROWID FND_LOOKUP_VALUES (cr=5826 pr=0 pw=0 time=558105 us)
    971 INDEX RANGE SCAN FND_LOOKUP_VALUES_U1 (cr=4855 pr=0 pw=0 time=539171 us)(object id 906518)
    971 INDEX RANGE SCAN FND_LOOKUP_VALUES_U1 (cr=4855 pr=0 pw=0 time=172115 us)(object id 906518)
    SELECT MAX(LT.SECURITY_GROUP_ID)
    FROM
    FND_LOOKUP_TYPES LT WHERE LT.VIEW_APPLICATION_ID = :B2 AND LT.LOOKUP_TYPE =
    :B1 AND LT.SECURITY_GROUP_ID IN (0,
    TO_NUMBER(DECODE(SUBSTRB(USERENV('CLIENT_INFO'),55,1), ' ', '0', NULL, '0',
    SUBSTRB(USERENV('CLIENT_INFO'),55,10))))
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1945 0.11 0.11 0 0 0 0
    Fetch 1945 0.18 0.10 0 3890 0 1945
    total 3891 0.29 0.21 0 3890 0 1945
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1945 SORT AGGREGATE (cr=3890 pr=0 pw=0 time=142954 us)
    1945 FIRST ROW (cr=3890 pr=0 pw=0 time=96520 us)
    1945 INDEX RANGE SCAN (MIN/MAX) FND_LOOKUP_TYPES_U1 (cr=3890 pr=0 pw=0 time=89938 us)(object id 906517)
    INSERT INTO HRAPPS.SLC_HRINF_INT_SUMMARY (INT_SUMMARY_ID,
    INT_SUMMARY_CREATE_DATE ,INT_SUMMARY_LAST_UPDATE_DATE, INTERFACE_NAME ,
    HANDLER_CONC_REQUEST_ID, INT_CONC_REQUEST_ID ,SET_SEQUENCE_NUMBER,
    SET_RECORD_COUNT, INT_FROM_DATE ,INT_TO_DATE, INT_STATUS_1_STATE,
    INT_STATUS_1_MESSAGE ,INT_STATUS_1_STARTED, INT_STATUS_1_COMPLETED ,
    INT_STATUS_1_SUCCESS_COUNT, INT_STATUS_1_ERROR_COUNT ,INT_STATUS_2_STATE,
    INT_STATUS_2_MESSAGE ,INT_STATUS_2_STARTED, INT_STATUS_2_COMPLETED ,
    INT_STATUS_2_SUCCESS_COUNT, INT_STATUS_2_ERROR_COUNT ,INT_STATUS_3_STATE,
    INT_STATUS_3_MESSAGE ,INT_STATUS_3_STARTED, INT_STATUS_3_COMPLETED ,
    INT_STATUS_3_SUCCESS_COUNT, INT_STATUS_3_ERROR_COUNT ,INT_STATUS_4_STATE,
    INT_STATUS_4_MESSAGE ,INT_STATUS_4_STARTED, INT_STATUS_4_COMPLETED ,
    INT_STATUS_4_SUCCESS_COUNT, INT_STATUS_4_ERROR_COUNT ,INT_STATUS_5_STATE,
    INT_STATUS_5_MESSAGE ,INT_STATUS_5_STARTED, INT_STATUS_5_COMPLETED ,
    INT_STATUS_5_SUCCESS_COUNT, INT_STATUS_5_ERROR_COUNT )
    VALUES
    (:B7 , :B6 , :B6 , :B5 , :B4 , NULL , NULL, NULL, :B3 , :B2 , :B1 , NULL ,
    NULL, NULL , NULL, NULL , :B1 , NULL , NULL, NULL , NULL, NULL , :B1 , NULL
    , NULL, NULL , NULL, NULL , :B1 , NULL , NULL, NULL , NULL, NULL , :B1 ,
    NULL , NULL, NULL , NULL, NULL )
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.01 0.12 12 1 12 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.01 0.12 12 1 12 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 12 0.02 0.12
    Thanks & Regards,
    Rup

    Hi;
    Please check our previous topics:
    Concurrent manager real time tune
    Oracle apps database
    tune concurrent manager
    Oracle apps database
    Concurrent Manager very slow
    Concurrent Manager very slow........
    Regards
    Helios

  • RAC CLUSTER FILES

    hi,
    I am planning to create RAC using ASM, without HACMP/GPFS, and I plan to store the OCR and voting disk on raw disk devices. For this case, do I have to follow all of the doc sections below, or only section 3.3.2, to store the OCR and voting disk on raw disk devices? Kindly help.
    AIX version 5.3 and Oracle 10.2.0.4
    ORACLE DOC LINK
    http://download.oracle.com/docs/cd/B19306_01/install.102/b14201/storage.htm#sthref587
    3.3.2 Configuring Raw Disk Devices for Oracle Clusterware Without HACMP or GPFS
    3.3.3 Configuring Raw Logical Volumes for Oracle Clusterware
    3.3.4 Creating a Volume Group for Oracle Clusterware
    3.3.5 Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group
    3.3.6 Importing the Volume Group on the Other Cluster Nodes
    3.3.7 Activating the Volume Group in Concurrent Mode on All Cluster Nodes
    Regards

    Hi,
    "i have to follow all the below doc or only 3.3.2 for storing OCR and voting disk on raw disk devices. Kindly help"
    You are right in your understanding. You only need to follow 3.3.2, because the other sections cover configuring raw disks > raw volume group > raw logical volumes, which in turn relates to HACMP, which you are not using.
    Salman

  • Design considerations while accessing data of a different web application

    I am the database developer for a web application (*WebApp1*) which holds data for around *6 million* merchants across various tables in its database (*DB1*).
    The users of the above-mentioned application are few, around *50*, who monitor the 6 million merchants' data.
    Now a proposal has come up to develop a second web application for the merchants themselves.
    Thus the user base for this web application (*WebApp2*) is in the millions.
    Now the requirement for WebApp2 is:
    On each login, individual merchants need to see their own data (mostly read-only), which is stored in *DB1*, along with other data stored in WebApp2's own database (*DB2*). WebApp2 will have other functionalities too, but those are independent of WebApp1.
    Now I can think of three approaches to displaying DB1 data in WebApp2:
    1. Creating a stored procedure and accessing DB1 data through a dblink. (*Cons* - I have seen that access through a dblink is slower, as it serializes access.)
    2. Replicating DB1 data to DB2 through batch jobs running every night. (*Cons* - This stores the same data in two places and brings the headache of keeping the data in sync.)
    3. WebApp1 exposes a webservice over DB1 data, so that on each login WebApp2 uses the webservice to display DB1 data.
    (Please point out any cons here because of the high user base.)
    I need your suggestions and views. Also, if there is any other way, please let me know.
    Will it depend on any other factor? Let me know if you need any further information.
    Cheers !!!

    Ah - this is a much bigger question! And may be outside the scope of this forum to answer. I will have a go though.
    The question really isn't whether the database can support 6,000,000 users. It is how many of those users will be making database requests at the same time, what the average elapsed time of each request is, and what your users regard as an acceptable response time for each request.
    To use a (very) simple analogy, my company employs 30,000 people, but the IT Service Desk they call when they have a problem with their PC is staffed by only 30 people. This is because the Service Desk Manager has estimated that at any one time only 1% of all employees will need to call the Service Desk. On average each call takes 6 minutes to resolve, and the response time agreed with the business is 1 hour. So one Service Desk technician can answer 10 calls in an hour, and 30 technicians can answer 300 calls an hour.
    Now apply this analogy to a computer system. The scalability of database systems is generally limited by the amount of available CPU. If we determine that the average database call uses 10 ms of CPU, and the user will accept a response time of 1 s, then each CPU can service 100 database calls within that response time. Multiply that by the number of available CPUs and you have a (very rough) idea of how far the database will scale. A server with 8 cores could support 800 concurrent requests. A cluster of 4 such servers could support 3,200 concurrent requests.
    I would stress that this is very crude. Much more information would be required to determine the real limits of database scalability. My example does not take account of other sources of potential latency, such as the network, or the storage subsystem if requests involve significant physical I/O. Or indeed latency in the application itself.
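    That back-of-the-envelope arithmetic can be written down in a couple of lines (the function and parameter names here are mine, purely for illustration):

```python
# Crude capacity model from the analogy above: if each database call
# burns cpu_ms_per_call of CPU and users tolerate response_time_s,
# each core can service (response time / CPU time) calls per window.
def max_concurrent_requests(cores, cpu_ms_per_call, response_time_s):
    calls_per_core = (response_time_s * 1000.0) / cpu_ms_per_call
    return int(cores * calls_per_core)

# 10 ms of CPU per call, 1 s acceptable response time:
assert max_concurrent_requests(8, 10, 1) == 800        # one 8-core server
assert max_concurrent_requests(4 * 8, 10, 1) == 3200   # 4 x 8-core cluster
```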
    So, you need to answer these questions:
    - How many concurrent database requests need to be supported?
    - What is the average amount of database CPU that each database request will consume?
    - What is the maximum acceptable elapsed time for each database request?
    On a more practical note, your application must use some form of connection pooling to limit the number of database connections required to service all the required database activity. You can scale the database horizontally using RAC, and in extreme cases you can offload read activity entirely from the database using a middle-tier caching solution such as Coherence, GigaSpaces or GemFire.
    That's probably enough for now. Any more and you'll probably need to get some consultancy!

  • SQL Query (challenge)

    Hello,
    I have 2 tables of events E1 and E2
    E1: (Time, Event), E2: (Time, Event)
    Where the columns Time in both tables are ordered.
    Ex.
       E1: ((1, a) (2, b) (4, d) (6, c))
       E2: ((2, x) (3, y) (6, z))
    To find the events of both tables at the same time, the obvious approach is a join between E1 and E2:
    Q1 -> select e1.Time, e1.Event, e2.Event from E1 e1, E2 e2 where e1.Time=e2.Time;
    The result of the query is:
    ((2, b, x) (6, c, z))
    Given that there are no indexes on these tables, an efficient execution plan can be a hash join (under the conditions mentioned in the Oracle Database Performance Tuning Guide, Ch. 14).
    Now, the hash join suffers from a locality problem if the hash table is large and does not fit in memory; it may happen that a block of data is read into memory and swapped out frequently.
    Given that the Time columns are sorted in ascending order, I find the following algorithm, a known idea in the literature, appropriate to this problem. The algorithm is in pseudocode close to PL/SQL, for simplicity (I hope it is still clear):
    -- start algorithm
    open cursors for e1 and e2
    loop
      if e1.Time = e2.Time then
         pipe row (e1.Time, e1.Event, e2.Event);
         fetch next e1 record
         exit when notfound
         fetch next e2 record
          exit when notfound
      else
         if e1.Time < e2.Time then
            fetch next e1 record
            exit when notfound
         else
            fetch next e2 record
            exit when notfound
         end if;
      end if;
    end loop
    -- end algorithm
    As you can see, the algorithm does not suffer from the locality issue since it iterates sequentially over the inputs.
    Now the problem: the algorithm shown above suggests using a pipelined function to implement it in PL/SQL, but that is slow compared to the hash join in the implicit cursor of the query shown above (Q1).
    Is there a plain SQL query that implements this algorithm? The objective is to beat the hash join of the query (Q1), so queries that use sorting are not accepted.
    A difficulty I found is that explicit cursors are much slower than implicit ones (SQL queries).
    Example: for a large table (2.5 million records)
    create table mytable (x number);
    declare
      type t_numbers is table of number;
      l_data t_numbers;
      c sys_refcursor;
    begin
      open c for 'select 1 from mytable';
      fetch c bulk collect into l_data;
      close c;
      dbms_output.put_line('count = ' || l_data.count);
    end;
    is about 5 times slower than
    select count(*) from mytable;
    I do not understand why this should be the case. I read that it may be explained by PL/SQL being interpreted, but I think this does not explain the whole issue. Maybe it is because the fetch copies data from the SQL engine's memory into PL/SQL's memory, and this takes a long time.

    Hi
    A correction in the algorithm:
    -- start algorithm
    open cursors for e1 and e2
    fetch next e1 record
    fetch next e2 record
    loop
      exit when e1%notfound
      exit when e2%notfound
      if e1.Time = e2.Time then
         pipe row (e1.Time, e1.Event, e2.Event);
         fetch next e1 record
         fetch next e2 record
      else
         if e1.Time < e2.Time then
            fetch next e1 record
         else
            fetch next e2 record
         end if;
      end if;
    end loop
    -- end algorithm
    Best regards
    Taoufik
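    Taoufik's corrected pseudocode is the classic merge join over two sorted inputs. A sketch of the same logic in Python (names and tuple layout are mine, chosen to match the E1/E2 example in the post):

```python
# Merge-join over two event lists sorted by Time. Each input is a list of
# (time, event) tuples; output rows are (time, event1, event2) for equal times.
def merge_join(e1, e2):
    i = j = 0
    out = []
    while i < len(e1) and j < len(e2):
        t1, t2 = e1[i][0], e2[j][0]
        if t1 == t2:
            out.append((t1, e1[i][1], e2[j][1]))
            i += 1          # advance both cursors on a match
            j += 1
        elif t1 < t2:
            i += 1          # e1 is behind: advance it
        else:
            j += 1          # e2 is behind: advance it
    return out

E1 = [(1, 'a'), (2, 'b'), (4, 'd'), (6, 'c')]
E2 = [(2, 'x'), (3, 'y'), (6, 'z')]
print(merge_join(E1, E2))  # [(2, 'b', 'x'), (6, 'c', 'z')]
```

    Each input is scanned exactly once, which is why the algorithm avoids the locality problem of a spilling hash join.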

  • How do you create a reentrant subVI whose instance is accessible using a refnum?

    I am trying to create a hash table similar to the one created in the thread http://forums.ni.com/ni/board/message?board.id=170&thread.id=136037&view=by_date_ascending&page=2. That hash table uses uninitialized shift registers to store the hash table data. The implementation only works for one hash table, because if the function were made reentrant, there would be no way to access a specific hash table instance to perform operations. I am trying to write a hash table that is reentrant but with instances accessible via a refnum. Currently, I am using a different subVI for each operation in a hash table class and passing a hash table cluster among the subVIs, but the bundling and unbundling induces too much of a performance hit.

    You can do this by opening a reference to the subVI and then running it using a call by reference node. You can see an example here. Just note that you have to set an option on the open primitive to prepare the VI for reentrant execution (I think it's 2 or 8, can't remember).
    If you want to see a more complete implementation, you can try downloading the OpenGOOP framework from OpenG, which uses this mechanism for data storage. Alternatively, you can use a queue to pass the data. If you have a single element in the queue and always make sure to dequeue and then re-enqueue, you should get a single copy of the data in memory.
    You can also use variant attributes to do the lookup table. You can see more here.
    Try to take over the world!
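    The single-element queue idea above is a common pattern for keeping exactly one in-memory copy of shared data. Since LabVIEW is graphical, here is only an illustration of the dequeue/re-enqueue discipline, sketched in Python (all names are mine):

```python
import queue

# A queue capped at one element holds the single shared copy of the data.
# Every operation dequeues it (gaining exclusive access), mutates it, and
# re-enqueues it for the next caller -- mirroring LabVIEW's pattern.
def new_table():
    q = queue.Queue(maxsize=1)
    q.put({})          # the one and only copy
    return q

def table_put(q, key, value):
    data = q.get()     # dequeue: we now hold the only reference
    data[key] = value
    q.put(data)        # re-enqueue so others can proceed

def table_get(q, key):
    data = q.get()
    value = data.get(key)
    q.put(data)
    return value

t = new_table()
table_put(t, "answer", 42)
print(table_get(t, "answer"))  # 42
```

    The queue refnum plays the same role as the VI reference: it identifies one instance of the data, so multiple independent tables can coexist.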

  • Fatal error when updating Payload with Java hw Worklist API

    Hi all,
    I am receiving an error when I want to update some non-String-type fields
    of a task payload. I access the fields in the payload with facade-classes, generated by Schemac.
    The fact is that I can read all the payload fields, but when I try to set values in non-String typed fields of the payload I'm getting a run-time error:
    java.lang.NullPointerException
        at EDU.oswego.cs.dl.util.concurrent.ConcurrentReaderHashMap.hash(ConcurrentReaderHashMap.java:308)
        at EDU.oswego.cs.dl.util.concurrent.ConcurrentReaderHashMap.get(ConcurrentReaderHashMap.java:427)
        at org.collaxa.thirdparty.dom4j.tree.NamespaceCache.get(NamespaceCache.java:82)
        at org.collaxa.thirdparty.dom4j.Namespace.get(Namespace.java:60)
        at com.collaxa.cube.xml.dom.DOMUtil.createElement(DOMUtil.java:382)
        at com.collaxa.cube.xml.dom.DOMUtil.createElement(DOMUtil.java:350)
        at com.collaxa.cube.xml.BaseFacade.setChildElementValue(BaseFacade.java:323)
        at nl.nak.www.ns.vocht.Userpayload.setAge(Userpayload.java:327)
        at nl.nak.gui.action.ProcessTaskAction.execute(ProcessTaskAction.java:107)
        at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
        at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
        at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
        at nl.nak.gui.custom.CustomActionServlet.process(CustomActionServlet.java:35)
        at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:525)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
        at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:810)
        at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:322)
        at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:790)
        at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:270)
        at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:112)
        at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:192)
        at java.lang.Thread.run(Thread.java:534)
    My XSD file for the payload looks like this:
    <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.comp.nl/ns/vocht"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://www.comp.nl/ns/vocht">
    <element name="userpayload">
    <complexType>
    <sequence>
    <element name="name" type="xsd:string" />
    <element name="lastname" type="xsd:string" />
    <element name="age" type="xsd:int" />
    <element name="amount" type="xsd:decimal" />
    </sequence>
    </complexType>
    </element>
    </schema>
    The code where I make a connection to the
    WorkList service and how I retrieve the payload is listed below.
    Note that a specific task is set in the Session at in a previous step:
    try {
        String user = "jcooper";
        String password = "welcome";
        // make a connection
        RemoteWorklistServiceClient client = new RemoteWorklistServiceClient();
        client.init();
        out.println("connectie geinitialiseerd");
        // authentication
        IWorklistContext ctx = client.authenticateUser(user, password);
        Userpayload ut = (Userpayload) UserpayloadFactory.createFacade(payload);
        ut.setName("tom");
        ut.setLastname("Cooper");
        // EXCEPTION THROWN HERE
        ut.setAge(1);
        ut.setAmount(new BigDecimal(2));
        taak.setPayload(ut.getRootElement());
        String action = "DONE";
        client.customTaskOperation(ctx, taak.getTaskId(), action);
        out.println("taak geapproved :: " + payload.toString());
        return null;
    } catch (Exception e) {
        // print debug information
        e.printStackTrace(new PrintWriter(out));
        sp.addActionError(errors, "nl.nak.view.standaard.errors.system", null);
        saveErrors(request, errors);
        return null; // mapping.findForward("failure");
    }
    At run-time the null-pointer exception is thrown when the Age field is set.
    Can anyone help me with this problem?
    Thanks in advance!
    Tom Hofte
    Message was edited by:
    [email protected]

    If it is a stored procedure, the action should be EXECUTE and not UPDATE and the structure should be similar to this:
    <StatementName5>
      <storedProcedureName action="EXECUTE">
        <table>realStoredProcedureName</table>
        <param1 [isInput="true"] [isOutput="true"] type=SQLDatatype>val1</param1>
      </storedProcedureName>
    </StatementName5>
    From help.sap
    Regards,
    Prateek

  • Should EntryProcessors be used for long-running operations?

    Hi Gene and All,
         a couple of other questions come from the seemingly inexhaustible list :-)
         - what happens if the caller of an InvocableMap.invokeAll or invoke method dies?
         - all entryProcessors complete regardless of the client being there or not
         - all already started process method calls complete, the unprocessed entries will not get processed
         - something else happens
         - what happens if the caller of an InvocableMap.aggregate method with a parallel-aggregator dies
         - all aggregate methods in the parallel-aggregator complete
         - the aggregate methods in the parallel-aggregator stop during processing
         - something else happens
         - should an entryprocessor or a parallel-aware entryaggregator implement a comparably long-running operation (e.g. jdbc access), or does that seriously affect performance of other concurrent operations within the cluster node or the entire cluster (e.g. because of blocking other events/requests)?
         - should the work manager be used instead for these kinds of things (e.g. jdbc access)?
         Thanks and best regards,
         Robert

    Robert,
         As soon as an EntryProcessor or EntryAggregator gets delivered to the server nodes, it will be executed regardless of the requestor's state.
         In regard to long-running operations, the only thing you have to be conscious of is the number of worker threads allocated for such processing. Since each request occupies a single client thread, I would suggest allocating as many worker threads (across the cache server tier) as there are client threads (across the presentation/application tier).
         Regards,
         Gene

  • How to tune gc cr block 2-way and gc current grant 2-way

    Hi,
    I'm working on an Oracle Database 11g Release 11.1.0.6.0 - 64bit Production With the Real Application Clusters option, Standard Edition.
    Over the last few days I have been suffering a performance drop due to 2 main wait events:
    gc cr block 2-way
    gc current grant 2-way
    I noticed also this one:
    gc buffer busy acquire
    Do you have any suggestion on how to check and tune my RAC to reduce those wait events?
    Any useful links?
    Thanks in advance,
    Samuel
    Edited by: Samuel Rabini on Mar 30, 2012 2:39 PM

    Hi;
    Please check the note below, which could be helpful for your issue:
    RAC Database Running Slow to Hang With High Cluster Wait Due to Small Buffer Cache [ID 1280889.1]
    Regards
    Helios

  • Oracle 8 Enterprise/Parallel Server availability on Linux

    I have not downloaded the Linux port of Oracle 8 yet, so please bear with me... I would like to know whether Oracle Parallel Server is included in the distribution for Linux. Specifically, I would like to build a system that has a failover node for a client with high-availability/reliability needs. Has anyone done this on Linux yet?
    I have looked at the marketing blurb on Oracle Parallel Server and it looks like OPS should suit my purpose. Any comments? Caveats?
    Thanks in advance,
    Ty Haeber
    Quetzal Systems Inc.
    Vancouver, B.C.

    Bud Alverson (guest) wrote:
    : The Beowulf project supports clustering with Linux. Redhat even has a release called "Extreme" that supports it in a fashion. In fact, Los Alamos has achieved the upper tier of the top 500 fastest supercomputers in the world using Linux (#114, see: http://www.top500.org/top500.list.html). It's named Avalon and has 140 Alphas clustered via Redhat Linux 5.0. Read about it at: http://loki-www.lanl.gov/papers/sc98/
    To run Oracle Parallel Server (OPS) on a Linux cluster you would need (virtual) shared disks with concurrent access from all cluster nodes. As far as I know, Beowulf does not provide (virtual) shared disks (a la VSD/RVSD on the RS/6000 SP). NFS, etc. will not work as a substitute for virtual shared disks, since raw partitions (logical volumes) with concurrent access are needed. Maybe DB2 UDB EEE (a shared-nothing architecture) might become available more easily as an MPP DBMS on Linux, since it does not need (virtual) shared disks.
    : Bud Alverson
    : CrossNet Communications
    : Michael Daskaloff (guest) wrote:
    : : Ty Haeber (guest) wrote the original question quoted above.
    : : Dear friend, what I'd tell you might be wrong, but you might consider my words. I'm not very knowledgeable about Linux, but I think there is currently no hardware and software combination to support clusters on Linux. I'm sure Oracle Corp. will do their best to make a Parallel Server option for Linux (I think there is as yet no Enterprise Edition for Linux) as soon as such a combination of hardware and software exists.
    : : Meanwhile, if you really need a working and stable solution, go get a DEC Alpha or RS/6000. There is already Parallel Server for NT, which IMO currently runs only on Compaq hardware. However, I think you'd be better off using UNIX for your Parallel Server environment.
    : : If you just need to reduce downtime, you can use a feature of Oracle called 'Standby Database', which is available for all versions and editions of Oracle8 (including the one for Linux).

  • CUCM EMCC Login question

    Hi All,
    I have a question in relation to EMCC that I cannot find a definitive answer on. I have a number of clusters connected together with EMCC. When a user roams to a non-home cluster and logs in, the locally logged-in user on their office desk phone is not logged out of the home cluster, but the EMCC login works correctly and they can work remotely. Is there a way to force the locally logged-in user to be logged out when an EMCC login event occurs? If I log in locally on the home cluster while logged in with EMCC, it logs out the EMCC user, which I would expect, but I would like this to work both ways, to stop the office phone ringing when a user is roaming and forgets to log out.
    So I would like to log out the locally logged-in user on their home cluster when they log in with EMCC on one of my other clusters.
    Thanks,
    Phil

    Hi Phil,
    EMCC multiple concurrent logins on different clusters using the same user ID will always be allowed. The only way to control this is to enable "Auto Logout" with a specific time period.
    HTH,
    Regards,
    Mohammed Noor
