SESSION AND CURSOR CACHE

10.1.3.3.2
Can the information below be saved in a table, or in the file where the logs are saved?
I need it for history information.
http://localhost:9704/analytics/saw.dll?Sessions
[SESSION]
User ID      Host Address      Session ID      Browser Info      Logged On      Last Access
[CURSOR CACHE]
ID     User     Refs     Status     Time     Action     Last Accessed     Statement     Information
Thanks in advance.

Will Usage Tracking help you?
You will find it in the documentation: Oracle® Business Intelligence Server Administration Guide, Chapter 10, "Administering the Oracle BI Server Query Environment".
Regards,
Stefan Hess
http://download.oracle.com/docs/cd/E10415_01/doc/bi.1013/b31770.pdf
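
If Usage Tracking is enabled, the BI Server writes one row per logical query to the usage tracking table, and that history can be queried directly. A minimal sketch, assuming direct-insertion Usage Tracking into the default S_NQ_ACCT table (adjust schema and column names to your installation):

-- Hedged sketch: recent logical queries from the Usage Tracking table.
-- S_NQ_ACCT and these columns are the common defaults; verify against your setup.
SELECT user_name,
       start_ts,
       end_ts,
       total_time_sec,
       row_count,
       query_src_cd,
       saw_dashboard,
       saw_src_path
  FROM s_nq_acct
 WHERE start_ts >= TRUNC(SYSDATE) - 7
 ORDER BY start_ts DESC;

This gives the same kind of "who ran what and when" history as the Manage Sessions page, but persisted in a table.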

Similar Messages

  • Open cursors and shared cached cursors

    Hi
    In the ADDM report I found the recommendation below. Before changing any parameter I would like to understand these parameters: is there any rule of thumb for them,
    and is there any drawback if I increase them?
    FINDING 7: 2.1% impact (10693 seconds)
    Soft parsing of SQL statements was consuming significant database time.
    RECOMMENDATION 1: Application Analysis, 2.1% benefit (10693 seconds)
    ACTION: Investigate application logic to keep open the frequently used cursors. Note that cursors are closed by both cursor close calls and session disconnects.
    RECOMMENDATION 2: DB Configuration, 2.1% benefit (10693 seconds)
    ACTION: Consider increasing the maximum number of open cursors a session can have by increasing the value of parameter "open_cursors".
    ACTION: Consider increasing the session cursor cache size by increasing the value of parameter "session_cached_cursors".
    RATIONALE: The value of parameter "open_cursors" was "300" during the analysis period.
    RATIONALE: The value of parameter "session_cached_cursors" was "20" during the analysis period.
    Thanks and Regards
    Jafar

    Jaffy
    Your system suffers from soft parsing (according to ADDM), therefore:
    - Increasing the value of open_cursors has no impact on soft parsing (only up to 9.2.0.4 did open_cursors have a direct impact on that, and only for PL/SQL programs).
    - Increasing the value of session_cached_cursors might help reduce soft parsing. Whether it helps really depends on the application.
    ADDM probably advises increasing open_cursors as well because the database engine keeps cursors open even if the application closes them.
    HTH
    Chris
    PS: cursor_sharing might be helpful to reduce hard parses. It has no impact on soft parses, so forget that hint.
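
    For reference, the current settings and a rough measure of how well the session cursor cache is avoiding soft-parse work can be checked with something like the following (a sketch; statistic names as in 10g/11g):

    -- Current values of the two parameters mentioned by ADDM
    SELECT name, value
      FROM v$parameter
     WHERE name IN ('open_cursors', 'session_cached_cursors');

    -- Instance-wide parse activity vs. session cursor cache usage
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('parse count (total)',
                    'parse count (hard)',
                    'session cursor cache hits',
                    'session cursor cache count');

    If 'session cursor cache hits' is low relative to 'parse count (total)', raising session_cached_cursors is more likely to pay off.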

  • 100% CPU utilization, cache buffers chains and cursor: pin S

    Hi every one ,
    we have an incident causing very slow system response with very bad response times. Below are the top 5 wait events from AWR (RAC database):
    Event                          Waits        Time(s)   Avg Wait(ms)   % Total Call Time   Wait Class
    latch: cache buffers chains    122,492      198,139   1,618          16.8                Concurrency
    gc buffer busy                 119,903      83,248    694            7.1                 Cluster
    cursor: pin S                  18,674,280   72,651    4              6.2                 Other
    log file sync                  639,867      66,673    104            5.7                 Commit
    latch free                     143,519      54,239    378            4.6                 Other
    Oracle Support clearly identified the issue with latch: cache buffers chains, as one SQL statement is executed around 35,000 times, which is too high given its execution plan, and they suggested tuning the SQL statements.
    My question is whether cursor: pin S wait on X and the related library cache lock are just symptoms, and whether document 742599.1 (which suggests disabling automatic memory management) is applicable to us, as we are on 10.2.0.5.
    As I understand it, the high CPU utilization is a result of latch: cache buffers chains; the cursor: pin S waits should not cause it.
    Thank you in advance

    Hi,
    All four of these top events (excluding log file sync) are quite unusual, and in your case, if they are all coming up at the top, they may well be related. So you can't say that cursor: pin S wait on X should be dealt with separately, but you can still try the suggestion in the note. First find out from v$sgastat the current allocation of the shared pool; then, after disabling automatic memory management, set shared_pool_size significantly higher than the current value, and monitor the system.
    You should definitely also tune your SQL, as suggested by Support.
    Salman
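
    Before touching memory management, it helps to note where the shared pool currently stands; a sketch (pool and component names vary slightly by version):

    -- Free memory currently left in the shared pool
    SELECT pool, name, ROUND(bytes / 1024 / 1024) AS mb
      FROM v$sgastat
     WHERE pool = 'shared pool'
       AND name = 'free memory';

    -- Current sizes of the dynamically managed SGA components
    SELECT component, ROUND(current_size / 1024 / 1024) AS current_mb
      FROM v$sga_dynamic_components;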

  • I can edit on Premiere Pro 6 files, but after a dozen keystrokes, the space bar and cursor keys stop working while the save function works

    I can edit on Premiere Pro 6 files, but after a dozen keystrokes, the space bar and cursor keys stop working while the save function, render workspace function, export file are still operational. How can I fix it so I may complete my assignment?
    Besides DaVinci Resolve, Adobe Creative Suite 6 is the only software on the machine. I am using Windows 7 Professional 64-bit Operating System on AMD FX 6100 six-Core Processor at 3.31gHz and 32 GB RAM memory. There are two SLI-bridged GTX680 NVidia cards.
    The software was very stable for the last six months, working with 720P proxy files from 2.5K masters (Blackmagic Design Camera). I am working on a feature-length project that exceeds 1000 edits. I have broken the file into 2 one hour segments.
    I have deactivated the software before reinstalling the entire OS from scratch. PP6 was very stable for 48 hours. Then the freezing space bar returns. After a dozen strokes into the project, same problem.  I have made cache files store next to originals, I have deleted preview files if they were corrupted and causing instability. Am I missing something?
    I have Microsoft Security Essentials for virus protection. I double checked the memory for damage/defect. Nothing says that the motherboard or other components are damaged.
    I am in film competition overseas and need to have deliverables in less than a month's time.  I lost the last two weeks troubleshooting and this crisis came at an inopportune moment of the project.
    Any assistance would be greatly appreciated.

    Still getting software freezes, but I found a way to mitigate them in the meantime.
    Upon launching Adobe Premiere Pro, hit CTRL-ALT-DEL to launch TaskManager as well.
    You will want to highlight Adobe QT32 Server.exe
    Right click and select "End Process Tree"
    You will get considerable stability in the program, long enough to get timing of cuts done. Be sure to save often.
    If the program freezes, do not hit Save. You definitely want to avoid saving the corruption into your TimeLine
    CTRL-ALT-DEL to relaunch the TaskManager and highlight Adobe Premiere Pro.exe
    Right-click to "End Process"
    No need to reboot the whole system; just launch Premiere Pro again and continue with the session. Note that your work reverted to Last Save.
    Hope this helps until the bug is fixed.

  • Administration Tool Cache vs. Cursor Cache

    Hi everyone,
    Someone asked me what the difference is between the cache in the Administration Tool (Manage -> Cache) and the cursor cache (Settings -> Administration -> Manage Sessions), and even though I've cleared them both many a time, I'm still not sure of the difference.
    Can someone explain to me the difference between the two?
    Thanks!
    -Joe

    Hi,
    The cache in the Administration Tool is a file-based cache on the OBIEE server which stores the results of database requests. This means that if a user makes a request, the OBIEE server first checks the cache to see whether the query has already been run and cached, or whether a superset of the query has been run and cached (i.e. a less restrictive query from which the current query can be satisfied). If it finds a cache entry, it returns the results from there instead of issuing any SQL to the database, thereby speeding up getting the results back to the user.
    The cache shown in the cursor cache is the cache on the presentation server: a cache of the content being returned to the user's browser. This means that if the user goes back to see results for a query they have already made, the presentation server can simply return the same content to them without having to go to the OBIEE server again at all.
    So basically 2 levels of caching, one on the OBIEE server and one on the presentation server.
    Regards,
    Matt
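
    For completeness: the OBIEE server cache can also be purged programmatically by sending an ODBC procedure call to the BI Server (for example through nqcmd or the "Issue SQL" page), while Manage Sessions clears the presentation-side cursor cache. A minimal sketch using the documented purge procedures:

    -- Logical SQL sent to the Oracle BI Server, not to the backing database:
    -- purges all entries in the BI Server query cache.
    Call SAPurgeAllCache();

    -- Narrower variants exist, e.g. purging entries that reference one physical table:
    -- Call SAPurgeCacheByTable('<database>', '<catalog>', '<schema>', '<table>');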

  • Inconsistent cursor cache error still persisting

    Hey, I have asked this question before but no solution was provided. I am facing a very serious problem. I have Oracle 11g on Windows 2003 on a 64-bit machine with 2 processors and 8 GB RAM. The automatic SGA/PGA features are turned on and 6 GB is assigned to memory_max_target and memory_target. My problem: I run about 600 update statements in one go, each update statement updates between 1 and 30,000 records, and I commit after every 20 statements. But after 20 to 30 statements Oracle goes down (shuts down). When I check the alert log it advises checking the trace file, and in the trace file I get this error: ORA-02103: PCC: inconsistent cursor cache (out-of-range cuc ref). I set cursor_sharing to FORCE, but that did not help. Is there any other, faster way to update records?
    cursor_sharing string force
    cursor_space_for_time boolean FALSE
    open_cursors integer 300
    session_cached_cursors integer 50
    My update statements are like the following:
    Update jg_6july_dg0 Set Operator_code= '915724' where Operator_code= '015325';
    Update jg_6july_dg0 Set Operator_code= '915715' where Operator_code= '015323';
    Update jg_6july_dg0 Set Operator_code= '915712' where Operator_code= '015374';
    I don't think cursor caching is the problem; either it is an Oracle bug or I'm doing something wrong. I can't believe that Oracle does not have a solution for such a small problem. My question is: why does Oracle shut down? Waiting for your reply.

    Sir, I got no such error. The process of raising an SR is quite cumbersome; I'm stuck where the first SR page asks for the Type of Problem, which is hidden, so I don't know how to set its value. One more strange thing: I changed the SGA/PGA settings and now the error in my alert log has changed. I have pasted part of the alert log here; please check it and tell me what I should do now.
    Checkpoint not complete
    Current log# 2 seq# 1844 mem# 0: E:\APP\ADMINISTRATOR\ORADATA\ORCL\ONLINELOG\O1_MF_2_4VH0YMCK_.LOG
    Current log# 2 seq# 1844 mem# 1: E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ONLINELOG\O1_MF_2_4VH0YMLC_.LOG
    Fri Apr 03 19:15:50 2009
    Errors in file e:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_lgwr_5088.trc (incident=164082):
    ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 3604'
    Incident details in: e:\app\administrator\diag\rdbms\orcl\orcl\incident\incdir_164082\orcl_lgwr_5088_i164082.trc
    Killing enqueue blocker (pid=3604) on resource CF-00000000-00000000
    by killing session 545.1
    Killing enqueue blocker (pid=3604) on resource CF-00000000-00000000
    by terminating the process
    LGWR (ospid: 5088): terminating the instance due to error 2103
    Fri Apr 03 19:15:51 2009
    Errors in file e:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_j000_4256.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-02103: PCC: inconsistent cursor cache (out-of-range cuc ref)
    Fri Apr 03 19:15:52 2009
    Errors in file e:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_j001_4764.trc:
    ORA-02103: PCC: inconsistent cursor cache (out-of-range cuc ref)
    Instance terminated by LGWR, pid = 5088
    Fri Apr 03 19:25:08 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =61
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.7.0.
    Using parameter settings in server-side spfile E:\APP\ADMINISTRATOR\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEORCL.ORA
    System parameters with non-default values:
    processes = 500
    sessions = 555
    sga_max_size = 5G
    nls_length_semantics = "BYTE"
    resource_manager_plan = ""
    sga_target = 5G
    memory_target = 0
    memory_max_target = 7360M
    control_files = "E:\APP\ADMINISTRATOR\ORADATA\ORCL\CONTROLFILE\O1_MF_4VH0YL9L_.CTL"
    control_files = "E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\CONTROLFILE\O1_MF_4VH0YLF0_.CTL"
    db_block_size = 16384
    compatible = "11.1.0.0.0"
    db_files = 7000
    db_create_file_dest = "E:\app\Administrator\oradata"
    db_recovery_file_dest = "E:\app\Administrator\flash_recovery_area"
    db_recovery_file_dest_size= 2G
    undo_tablespace = "UNDOTBS1"
    undo_retention = 900
    sec_case_sensitive_logon = FALSE
    remote_login_passwordfile= "EXCLUSIVE"
    db_domain = ""
    dispatchers = "(PROTOCOL=TCP) (SERVICE=orclXDB)"
    audit_file_dest = "E:\APP\ADMINISTRATOR\ADMIN\ORCL\ADUMP"
    audit_trail = "DB"
    db_name = "orcl"
    open_cursors = 300
    pga_aggregate_target = 2112M
    enable_ddl_logging = FALSE
    aq_tm_processes = 0
    diagnostic_dest = "E:\APP\ADMINISTRATOR"
    Fri Apr 03 19:25:09 2009
    PMON started with pid=2, OS id=2752
    Fri Apr 03 19:25:09 2009
    VKTM started with pid=3, OS id=1252 at elevated priority
    VKTM running at (20)ms precision
    Fri Apr 03 19:25:09 2009
    DIAG started with pid=4, OS id=2596
    Fri Apr 03 19:25:09 2009
    DBRM started with pid=5, OS id=1436
    Fri Apr 03 19:25:09 2009
    PSP0 started with pid=6, OS id=5104
    Fri Apr 03 19:25:09 2009

  • ORA-02103: PCC: inconsistent cursor cache

    I have been hit by the error ORA-02103: PCC: inconsistent cursor cache (out-of-range cuc ref). It occurs when a user executes thousands of update statements in one go. I have placed a commit after every 50 records, but the problem is still there: it shuts down Oracle and I have to start the database up again. I am running the query from Toad; the server machine is connected remotely via TNS. Why does this error occur, or please advise on the best way to update records. I have also used a parallel hint. Committing after every 50 records reduces how often the error occurs, but it does not solve the problem completely.

    As per the Oracle error description:
    Error: SQL 2103
    Text: Inconsistent cursor cache (out-of-range CUC ref)
    Cause: The precompiler generates a unit cursor entry (UCE) array. An element in this array corresponds to an entry in the cursor cache (CUC). While doing a consistency check on the cursor cache, SQLLIB found that the UCE array contains an ordinal value that is either too large or less than zero. This happens only if your program runs out of memory.
    Action: Allocate more memory to your user session, then rerun the program. If the error persists, call customer support for assistance.
    How is the user connected, dedicated or shared? How much memory is used during the update? Is it enough?
    Is the open_cursors parameter high enough?
    And as you can see from the error description, call Oracle Support. Raise an SR; they will investigate, ask for the dump files, look through them and propose a solution.
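
    As a starting point for the memory and open-cursors questions above, current usage can be checked with something like this (a sketch; statistic names as in 10g/11g):

    -- Sessions holding the most currently opened cursors
    SELECT s.sid, s.username, st.value AS open_cursors
      FROM v$sesstat st
      JOIN v$statname sn ON sn.statistic# = st.statistic#
      JOIN v$session  s  ON s.sid = st.sid
     WHERE sn.name = 'opened cursors current'
     ORDER BY st.value DESC;

    -- PGA memory currently used per session
    SELECT s.sid, s.username, ROUND(p.pga_used_mem / 1024 / 1024) AS pga_used_mb
      FROM v$process p
      JOIN v$session s ON s.paddr = p.addr
     ORDER BY p.pga_used_mem DESC;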

  • Obiee 10g - schedule to clear cursor cache

    Hi experts, I used the command line to clear the cache.
    I found that it only clears the cache at the BI Server level (the cache entries in the RPD).
    It does not clear the cursor cache (the entries viewed in the web UI under Manage Sessions).
    Can I set up any schedule to clear the cursor cache?

    Hi,
    Clearing the OBIEE cursor cache from dashboard JavaScript:
    The OBIEE cursor cache is normally cleared from the Administration - Manage Sessions screen.
    Here is a way to embed a piece of JavaScript into a dashboard so that it calls a hidden iframe (you can't see it being called) and clears the OBIEE cursor cache - the presentation cache, in effect.
    1. Insert a text object into the dashboard and tick "Contains HTML Markup".
    2. Paste the script below into the text box:
    <script language="javascript">
    document.write(
    "<iframe width='0' height='0' src='" +
    document.location.href.match(/^[^?]+/) +
    "?ManageSessions" +
    document.location.href.match(/&_scid=[^&]+/) +
    "&Action=CloseAllCursors&Done=saw.dll%3fSessions'></iframe>");
    </script>
    Voila - whenever you click on the dashboard or refresh it, the cursor cache will be cleared.
    Please refer the below links for more information on this.
    What Is Presentation Services Cache In Fact?
    http://prolynxuk.com/blog/?p=496
    how to seed n clear cache of obiee
    http://obiee101.blogspot.in/2008/03/obiee-manage-cache-part-1.html
    How to clear the cache daily automatically
    http://obiee10grevisited.blogspot.in/2012/02/cache-in-obiee.html
    Award points if this is useful.
    Thanks,
    Satya

  • Cursor cache - Time: what is this time?

    Administration -> Manage Sessions -> Cursor Cache -> Time.
    I have a question about this time.
    I ran a report and viewed its log through Administration -> Manage Sessions -> Cursor Cache -> View Log. This report had no previous cache entries, because I had cleared them all.
    The time shown for this report under Administration -> Manage Sessions -> Cursor Cache -> Time says 18 seconds.
    I am sure that when I clicked on the tab containing this report, it took less than 4 seconds for the page to load with the report on it.
    So I am not sure what this time actually is. When I look into the log for this particular report, one of the lines at the end says:
    [2012-03-09T15:50:04.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-33] [] [ecid: d01cd216d41a2bc8:bf26dbb:13549056e05:-8000-00000000005b8cad] [tid: 44ded940] [requestid: 7ee0096] [sessionid: 7ee0000] [username: -2327690837] -------------------- Logical Query Summary Stats: Elapsed time 23, Response time 18, Compilation time 1 (seconds) [[
    But the report surely returned its results in less than 18 seconds, so what does this time indicate?

  • Cursor Cache

    Hi All,
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    I will not be able to share the query due to company policy.
    The OEM plan shows a merge join Cartesian for the query. I know the plan is not correct, as the query has an incorrect cardinality estimate. I have a SQL profile set on this query.
    OEM shows:
    Data Source : Cursor Cache
    Additional Information : 'SYS_SQL_PROFXXXXXX' (X is some number)
    Here is what is happening:
    1. The table involved in the merge join is purged daily (EOD, i.e. 12 AM), so at that point it has no rows.
    2. Around 4 AM a process populates this table and then uses it in a query; the query plan has a merge join Cartesian (MJC), and it finishes quickly because the number of rows is very small.
    3. Around 6 AM the same process is triggered again. This time the table has a huge number of rows, the query picks up the same MJC plan, and it runs for hours because of the incorrect cardinality. When I run the SQL advisor on the query again, it shows an optimized plan; I kill the process and re-run it, and it works fine (the query completes within 3 seconds).
    My guess is that it still picks up the earlier merge-join plan, from the time the number of rows was small, out of the cursor cache, and OEM also shows the data source as Cursor Cache. Can we invalidate the session cache if this is the case?
    Please help: how can we handle this?

    I think you are addressing a common problem in data warehouses: there are staging tables, sometimes empty, sometimes with millions of rows, so the statistics may not be realistic. What is the result of the following query?
    select num_rows, last_analyzed from dba_tables where table_name = '<your_table>';
    If this is the problem, you should consider one of the following strategies:
    1) Analyze the table when it is "full" and make sure that no ANALYZE TABLE (or gather_schema_stats) ever runs over this table again. This strategy works fine if the table is populated with similar data every day, but you may need to change the gather_schema_stats job schedule; you should be aware of when and how the statistics are updated.
    2) Populate the table, then run gather_table_stats on it, wait for the gather_table_stats to complete, and only then trigger the 6 AM process; you may need to schedule things earlier than 6 AM to leave time for the statistics gathering.
    I hope this helps
    Regards,
    Alfonso
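
    A sketch of the second strategy, with hypothetical owner and table names (SCOTT / JG_STAGING are placeholders):

    -- Gather statistics right after the load, then lock them so a scheduled
    -- maintenance job cannot overwrite them while the table is empty.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SCOTT',
        tabname          => 'JG_STAGING',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'JG_STAGING');
    END;
    /

    Locking the statistics is optional; if you do lock them, a later gather on the same table needs an explicit DBMS_STATS.UNLOCK_TABLE_STATS first.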

  • Lgwr and cursor

    Please, I would appreciate your help with these two questions:
    1. Can I add more LGWR processes, like DBWR? If yes, why; if not, why not?
    2. Regarding cursors: when a session issues a select statement, e.g.
    select * from hr.employees
    if the employees table blocks are in the buffer cache they are used; otherwise the server process copies the blocks from the data file into the buffer cache. Is that right so far?
    Now let's say the same session updates the employees records. Are the same blocks moved to the shared pool so they can be shared with other sessions, and is it only then that these data blocks are called a cursor, moved to the shared pool and shared by other sessions according to the cursor_sharing parameter setting?
    Clarification is really appreciated.
    regards

    Maoro wrote:
    1- can i add more lgwr process like dbwr ? if yes why if no why
    If I remember correctly, there is a Metalink note that talks about adding more slaves to LGWR. It is indeed correct that you can't add more LGWR processes to the system. The reason, IMHO, is that LGWR doesn't need to scan a lot of data. Normally the redo log buffer is tiny compared to the other caches in the SGA. In addition, the log buffer algorithm pushes data out of this buffer much faster than the other caches; the three-second timeout makes LGWR write even when no other event triggers it. So there is really no need for more than one LGWR. Oracle has made a couple of changes in the latches, though, to make the redo buffer work better.
    You may not know it, but there are other enhancements in redo and undo management to make them work better. From 10g onwards there is the concept of private redo and in-memory undo, targeted at making things less contended on the standard caches.
    2- regarding cursors: when a session issues a select statement ex:select * from hr.employees
    is the employees table blocks are in the buffer cache they are used otherwise the server process copy the blocks
    from the data file into the buffer cache..is that right so far ?
    lets say the same session is updating the employees records , are the same blocks moved to the shared pool to satisfy sharing
    the same blocks with other session, and by then only these data blocks are called a cursor and moved to the shared pool and shared
    by other sessions using cursor_sharing parameter setting
    >
    Your first statement is correct.
    Again, it is correct to say that buffer cache data does not go into the shared pool, and that is how things have been so far. I have given a link above; read it, as it shows that Oracle has probably made some changes and may now use shared pool buffers for data buffers as well. I haven't researched it myself, but the note is from Tanel Poder, and if he says something, it is not without basis.
    If you have some other questions about how things work, feel free to post.
    HTH
    Aman....
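
    To make the terminology concrete: a "cursor" here is the parsed, shareable representation of a SQL statement in the library cache (shared pool), not the data blocks, which stay in the buffer cache. A hedged sketch of how to look at the shared cursors for a statement:

    -- Parent/child cursors currently cached in the library cache
    SELECT sql_id, child_number, executions, parse_calls, plan_hash_value
      FROM v$sql
     WHERE sql_text LIKE 'select * from hr.employees%';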

  • Safari 5.1.7 resumes last session and freezes immediately

    Hello,
    Upon the start of Safari, it resumes the last session and immediately freezes, showing the spinning rainbow and failing to load pages. None of the toolbar options can be accessed. I can only force quit. Restarting the computer accomplishes nothing.
    I am running Mac OS 10.7.4 with Safari 5.1.7
    Thanks.

    Open System Preferences > General
    Deselect:  Restore windows when quitting and re-opening apps
    Now restart your Mac, launch Safari.
    From the Safari menu bar top of your screen, click Safari > Empty Cache
    See if that made a difference...

  • V$session and gV$session

    Can anyone explain v$session and gv$session?

    The G prefix in GV$ means GLOBAL. The best way to find out the difference between v$session and gv$session is to look in v$fixed_view_definition.
    V$SESSION (definition abbreviated):
    select SADDR,
           SID,
           SERIAL#,
           AUDSID,
           ...
      from GV$SESSION
     where inst_id = USERENV('Instance')

    GV$SESSION:
    select s.inst_id,
           s.addr,
           s.indx,
           s.ksuseser,
           s.ksuudses,
           s.ksusepro,
           s.ksuudlui,
           s.ksuudlna,
           s.ksuudoct,
           s.ksusesow,
           decode(s.ksusetrn, hextoraw('00'), null, s.ksusetrn),
           decode(s.ksqpswat, hextoraw('00'), null, s.ksqpswat),
           decode(bitand(s.ksuseidl, 11),
                  1,
                  'ACTIVE',
                  0,
                  decode(bitand(s.ksuseflg, 4096), 0, 'INACTIVE', 'CACHED'),
                  2,
                  'SNIPED',
                  3,
                  'SNIPED',
                  'KILLED'),
           decode(s.ksspatyp, 1, 'DEDICATED', 2, 'SHARED', 3, 'PSEUDO', 'NONE'),
           s.ksuudsid,
           s.ksuudsna,
           s.ksuseunm,
           s.ksusepid,
           s.ksusemnm,
           s.ksusetid,
           s.ksusepnm,
           decode(bitand(s.ksuseflg, 19),
                  17,
                  'BACKGROUND',
                  1,
                  'USER',
                  2,
                  'RECURSIVE',
           s.ksusesql,
           s.ksusesqh,
           s.ksusesqi,
           decode(s.ksusesch, 65535, to_number(null), s.ksusesch),
           s.ksusepsq,
           s.ksusepha,
           s.ksusepsi,
           decode(s.ksusepch, 65535, to_number(null), s.ksusepch),
           decode(s.ksusepeo, 0, to_number(null), s.ksusepeo),
           decode(s.ksusepeo, 0, to_number(null), s.ksusepes),
           decode(s.ksusepco, 0, to_number(null), s.ksusepco),
           decode(s.ksusepco, 0, to_number(null), s.ksusepcs),
           s.ksuseapp,
           s.ksuseaph,
           s.ksuseact,
           s.ksuseach,
           s.ksusecli,
           s.ksusefix,
           s.ksuseobj,
           s.ksusefil,
           s.ksuseblk,
           s.ksuseslt,
           s.ksuseltm,
           s.ksusectm,
           decode(bitand(s.ksusepxopt, 12), 0, 'NO', 'YES'),
           decode(s.ksuseft,
                  2,
                  'SESSION',
                  4,
                  'SELECT',
                  8,
                  'TRANSACTIONAL',
                  'NONE'),
           decode(s.ksusefm, 1, 'BASIC', 2, 'PRECONNECT', 4, 'PREPARSE', 'NONE'),
           decode(s.ksusefs, 1, 'YES', 'NO'),
           s.ksusegrp,
           decode(bitand(s.ksusepxopt, 4),
                  4,
                  'ENABLED',
                  decode(bitand(s.ksusepxopt, 8), 8, 'FORCED', 'DISABLED')),
           decode(bitand(s.ksusepxopt, 2),
                  2,
                  'FORCED',
                  decode(bitand(s.ksusepxopt, 1), 1, 'DISABLED', 'ENABLED')),
           decode(bitand(s.ksusepxopt, 32),
                  32,
                  'FORCED',
                  decode(bitand(s.ksusepxopt, 16), 16, 'DISABLED', 'ENABLED')),
           s.ksusecqd,
           s.ksuseclid,
           decode(s.ksuseblocker,
                  4294967295,
                  'UNKNOWN',
                  4294967294,
                  'UNKNOWN',
                  4294967293,
                  'UNKNOWN',
                  4294967292,
                  'NO HOLDER',
                  4294967291,
                  'NOT IN WAIT',
                  'VALID'),
           decode(s.ksuseblocker,
                  4294967295,
                  to_number(null),
                  4294967294,
                  to_number(null),
                  4294967293,
                  to_number(null),
                  4294967292,
                  to_number(null),
                  4294967291,
                  to_number(null),
                  bitand(s.ksuseblocker, 2147418112) / 65536),
           decode(s.ksuseblocker,
                  4294967295,
                  to_number(null),
                  4294967294,
                  to_number(null),
                  4294967293,
                  to_number(null),
                  4294967292,
                  to_number(null),
                  4294967291,
                  to_number(null),
                  bitand(s.ksuseblocker, 65535)),
           s.ksuseseq,
           s.ksuseopc,
           e.kslednam,
           e.ksledp1,
           s.ksusep1,
           s.ksusep1r,
           e.ksledp2,
           s.ksusep2,
           s.ksusep2r,
           e.ksledp3,
           s.ksusep3,
           s.ksusep3r,
           e.ksledclassid,
           e.ksledclass#,
           e.ksledclass,
           decode(s.ksusetim,
                  0,
                  0,
                  -1,
                  -1,
                  -2,
                  -2,
                  decode(round(s.ksusetim / 10000),
                         0,
                         -1,
                         round(s.ksusetim / 10000))),
           s.ksusewtm,
           decode(s.ksusetim,
                  0,
                  'WAITING',
                  -2,
                  'WAITED UNKNOWN TIME',
                  -1,
                  'WAITED SHORT TIME',
                  decode(round(s.ksusetim / 10000),
                         0,
                         'WAITED SHORT TIME',
                         'WAITED KNOWN TIME')),
           s.ksusesvc,
           decode(bitand(s.ksuseflg2, 32), 32, 'ENABLED', 'DISABLED'),
           decode(bitand(s.ksuseflg2, 64), 64, 'TRUE', 'FALSE'),
           decode(bitand(s.ksuseflg2, 128), 128, 'TRUE', 'FALSE')
      from x$ksuse s, x$ksled e
    where bitand(s.ksspaflg, 1) != 0
       and bitand(s.ksuseflg, 1) != 0
       and s.ksuseopc = e.indx

    Alexander Anokhin
    http://alexanderanokhin.wordpress.com/
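
    The full definitions can also be pulled straight from the dictionary, for example:

    -- Full text of the fixed-view definitions quoted above
    SELECT view_name, view_definition
      FROM v$fixed_view_definition
     WHERE view_name IN ('V$SESSION', 'GV$SESSION');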

  • Toplink session and UnitOfWork synchronization problem

    Dear forum readers,
    I am not sure I fully understand the way TopLink deals with caching. It seems to me that I am getting some pretty scary results, which I am not sure how to interpret or how to work around.
    The following code snippet is part of a unit test:
    >>>>>>>>>>>> snip >>>>>>>>>>>>>>>
    1 public void test2() {
    2
    3 UnitOfWork uow = (UnitOfWork) SessionManager.getSessionManager().getSession().getUnitOfWork();
    4 Justitiabele justitiabele = findJustitiabele("findById", Justitiabele.class, new Long(551));
    5 ((JustitiabeleIdentiteit) justitiabele.getJustitiabeleIdentiteiten().iterator().next()).setMeisjesnaam("Kettner10");
    6 Justitiabele tmp = (Justitiabele) uow.registerObject(justitiabele);
    7 ((JustitiabeleIdentiteit) tmp.getJustitiabeleIdentiteiten().iterator().next()).setMeisjesnaam("Kettner10");
    8 uow.commitAndResume();
    9 }
    10
    11 public Justitiabele findJustitiabele(String queryName, Class objectClass, Object param) {
    12      SessionWrapper toplinkSessionWrapper = getSession();
    13      return (Justitiabele) toplinkSessionWrapper.getClientSession().executeQuery(queryName, objectClass, param);
    14 }
    >>>>>>>>>>>>>>>> snip <<<<<<<<<<<<<<<<
    I query a particular object (line 4). Then I make some changes to that object (line 5). Because the object is not registered in the UnitOfWork, these changes shouldn't be persisted. So far so good. To achieve persistence I now register the object and make the same modifications to the TopLink clone, expecting them to be persisted in the database after the commit.
    Contrary to my expectations, the changes were not persisted!
    Deleting line 5 (the modifications made before registering the object) leads to the desired result.
    Somehow the queried object seems to be a direct reference to the (client-)session cache. So when the object is registered in the UnitOfWork, the (already modified) backup clone is copied from the session cache into the UnitOfWork. If the same changes are then made to the working clone, there are no differences between the backup and working clones, and no changes are made in the database.
    It gets even better: I tried to query the object again (before line 6, even with a different UnitOfWork) before modifying it, in order to retrieve its original state, but again I was only able to find the modified object.
    If the queried object is indeed a reference to some cache, I cannot understand why that cache is not read-only!
    Am I doing something wrong?
    Is there a way to work around this problem?
    What are the consequences for transaction handling? What about isolation, when clients can see each other's changes in a kind of writable shared session?
    I am trying to work around the problem by registering every object that is queried from the database in the UnitOfWork right after it is queried. This seems to me the only solution, though it is contrary to what the TopLink developer's guide says, namely that only objects which will be modified should be registered, for performance reasons.
    I would be grateful to any help in understanding and working around this problem.
    Martin
    PS: Here's the log i got by running the test.:
    STDOUT >>>>>>>>>>>>>>>>>>>>>>>>>>>>
    C:\devtools\jdev\905\jdk\bin\javaw.exe -ojvm -classpath C:\ToplinkDemo\ToplinkDomein\classes;C:\ToplinkDemo\ToplinkDomein\classes\META-INF\ToplinkDomein;C:\devtools\jdev\905\toplink\jlib\source.jar;C:\devtools\jdev\905\lib\xmlparserv2.jar;C:\devtools\jdev\905\lib\xmlcomp.jar;C:\devtools\jdev\905\jdbc\lib\classes12.jar;C:\devtools\jdev\905\jdbc\lib\nls_charset12.jar;C:\devtools\jdev\905\toplink\jlib\toplink.jar org.dji.br.bl.domein.TestMain
    ServerSession(91)--Connection(92)--TopLink, version: OracleAS TopLink - 10g (9.0.4) (Build 031126)
    ServerSession(91)--Connection(92)--connecting session: djisession
    ServerSession(91)--Connection(92)--connecting(DatabaseLogin(
         platform=>Oracle9Platform
         user name=> "dji"
         datasource URL=> "jdbc:oracle:thin:@S-ORACLE01:1521:djipoc"
    ServerSession(91)--Connection(92)--Connected: jdbc:oracle:thin:@S-ORACLE01:1521:djipoc
         User: DJI
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
         Driver: Oracle JDBC driver Version: 9.0.1.5.0
    ServerSession(91)--Connection(101)--TopLink, version: OracleAS TopLink - 10g (9.0.4) (Build 031126)
    ServerSession(91)--Connection(101)--connecting session: djisession
    ServerSession(91)--Connection(101)--connecting(DatabaseLogin(
         platform=>Oracle9Platform
         user name=> "dji"
         datasource URL=> "jdbc:oracle:thin:@S-ORACLE01:1521:djipoc"
    ServerSession(91)--Connection(101)--Connected: jdbc:oracle:thin:@S-ORACLE01:1521:djipoc
         User: DJI
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
         Driver: Oracle JDBC driver Version: 9.0.1.5.0
    ServerSession(91)--Connection(103)--TopLink, version: OracleAS TopLink - 10g (9.0.4) (Build 031126)
    ServerSession(91)--Connection(103)--connecting session: djisession
    ServerSession(91)--Connection(103)--connecting(DatabaseLogin(
         platform=>Oracle9Platform
         user name=> "dji"
         datasource URL=> "jdbc:oracle:thin:@S-ORACLE01:1521:djipoc"
    ServerSession(91)--Connection(103)--Connected: jdbc:oracle:thin:@S-ORACLE01:1521:djipoc
         User: DJI
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
         Driver: Oracle JDBC driver Version: 9.0.1.5.0
    ServerSession(91)--Connection(105)--TopLink, version: OracleAS TopLink - 10g (9.0.4) (Build 031126)
    ServerSession(91)--Connection(105)--connecting session: djisession
    ServerSession(91)--Connection(105)--connecting(DatabaseLogin(
         platform=>Oracle9Platform
         user name=> "dji"
         datasource URL=> "jdbc:oracle:thin:@S-ORACLE01:1521:djipoc"
    ServerSession(91)--Connection(105)--Connected: jdbc:oracle:thin:@S-ORACLE01:1521:djipoc
         User: DJI
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
         Driver: Oracle JDBC driver Version: 9.0.1.5.0
    ServerSession(91)--Connection(107)--TopLink, version: OracleAS TopLink - 10g (9.0.4) (Build 031126)
    ServerSession(91)--Connection(107)--connecting session: djisession
    ServerSession(91)--Connection(107)--connecting(DatabaseLogin(
         platform=>Oracle9Platform
         user name=> "dji"
         datasource URL=> "jdbc:oracle:thin:@S-ORACLE01:1521:djipoc"
    ServerSession(91)--Connection(107)--Connected: jdbc:oracle:thin:@S-ORACLE01:1521:djipoc
         User: DJI
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
         Driver: Oracle JDBC driver Version: 9.0.1.5.0
    ServerSession(91)--Connection(109)--TopLink, version: OracleAS TopLink - 10g (9.0.4) (Build 031126)
    ServerSession(91)--Connection(109)--connecting session: djisession
    ServerSession(91)--Connection(109)--connecting(DatabaseLogin(
         platform=>Oracle9Platform
         user name=> "dji"
         datasource URL=> "jdbc:oracle:thin:@S-ORACLE01:1521:djipoc"
    ServerSession(91)--Connection(109)--Connected: jdbc:oracle:thin:@S-ORACLE01:1521:djipoc
         User: DJI
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
         Driver: Oracle JDBC driver Version: 9.0.1.5.0
    ServerSession(91)--Connection(111)--TopLink, version: OracleAS TopLink - 10g (9.0.4) (Build 031126)
    ServerSession(91)--Connection(111)--connecting session: djisession
    ServerSession(91)--Connection(111)--connecting(DatabaseLogin(
         platform=>Oracle9Platform
         user name=> "dji"
         datasource URL=> "jdbc:oracle:thin:@S-ORACLE01:1521:djipoc"
    ServerSession(91)--Connection(111)--Connected: jdbc:oracle:thin:@S-ORACLE01:1521:djipoc
         User: DJI
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
         Driver: Oracle JDBC driver Version: 9.0.1.5.0
    ServerSession(91)--sequencing connected, state is ForcedToUseWriteAccessor_State
    ServerSession(91)--client acquired
    ClientSession(114)--acquire unit of work: 113
    ClientSession(114)--Execute query ReadObjectQuery(org.dji.br.bl.domein.justitiabele.Justitiabele)
    ServerSession(91)--Connection(101)--SELECT DJI_NUMMER FROM DJI.JUSTITIABELEN WHERE (DJI_NUMMER = 551)
    ServerSession(91)--Execute query ReadAllQuery(org.dji.br.bl.domein.justitiabele.JustitiabeleIdentiteit)
    ServerSession(91)--Connection(92)--SELECT INDICATIE_NONAMER, ACHTERNAAM, BRN_CODE, MEISJESNAAM, ID, ROEPNAAM, GEBOORTEPLAATS_BUITENLAND, TITEL_BUITENLAND, VOORNAAM, VOORLETTERS, JBE_DJI_NUMMER, DATUM_INGANG, DATUM_EINDE FROM DJI.JUSTITIABELEIDENTITEITEN WHERE (JBE_DJI_NUMMER = 551)
    ServerSession(91)--Execute query ReadObjectQuery(org.dji.br.bl.domein.justitiabele.Justitiabele)
    UnitOfWork(113)--Register the object org.dji.br.bl.domein.justitiabele.Justitiabele@82
    UnitOfWork(113)--Register the existing object org.dji.br.bl.domein.justitiabele.JustitiabeleIdentiteit@84
    UnitOfWork(113)--Register the existing object org.dji.br.bl.domein.justitiabele.Justitiabele@82
    UnitOfWork(113)--begin unit of work commit
    ClientSession(114)--Connection(103)--begin transaction
    UnitOfWork(113)--Execute query WriteObjectQuery(org.dji.br.bl.domein.justitiabele.Justitiabele@83)
    UnitOfWork(113)--Execute query WriteObjectQuery(org.dji.br.bl.domein.justitiabele.JustitiabeleIdentiteit@85)
    ClientSession(114)--Connection(103)--commit transaction
    UnitOfWork(113)--end unit of work commit
    UnitOfWork(113)--resume unit of work
    Process exited with exit code 0.

    Martin,
    The object returned from any query on the session is the object from the shared cache. Any changes made to it will change the shared cache.
    You must acquire a UnitOfWork and register the cached object into it in order to get an isolated copy that can be modified within a transactional context (the UnitOfWork) without other threads seeing these transient changes. The typical approach is to read through the session and register the objects involved in a change before making any modifications.
    There is a UnitOfWork paper available on the TopLink technical information page that may be useful to you:
    http://www.oracle.com/technology/products/ias/toplink/technical/index.html
    Doug

  • Error reading data from static cursor cache.

    Hi,
    Does anyone know what causes the error below? It just started happening out of the blue. I'm running Apache Tomcat 4.1 on a Windows 2000 server with a MS SQL Server 2000 database.
    Error:
    java.sql.SQLException: [Microsoft][SQLServer JDBC Driver]Error reading data from static cursor cache.
    Thanks,
    TR

    hi,
    I had a similar sort of error, something along the lines of "error setting up static cursor cache", using the SQL Server JDBC drivers on Win2K. I deleted the file entries in the TEMP folder at c:\documents and settings\<user>\Local Settings\TEMP and everything was fine after that. I'm not sure what the exact issue is (probably something like the maximum folder size being reached). I ran the FileMon utility from www.sysinternals.com and it reported a DISK_FULL error on a temporary file being read by the process. To cut a long story short, everything is now fine.
    cheers,
    dara
