Unique ID generator?

Is there a way to give an object a unique ID when inserting it into the document via an extension? I can use a JavaScript random number generator, but I didn't know if the Dreamweaver API has a built-in function. Thanks.

Hi
I don't know if this is correct, but it looks as though it may be from the Spry API.
PZ

Similar Messages

  • Unique ID during upload through WebADI integrator

    Hi All,
    Is there any unique ID generated for each upload through a custom Integrator (similar to fnd_global.conc_request_id for concurrent programs)?
    Thanks,
    Sumanth

    JE,
Normally we use row ID or integration ID for this; we populate this value in the external ID field (for the records created manually) so that we can do bulk updates. My personal experience is that row IDs are more reliable than integration IDs, and it has always worked for me :) (my technical team also prefers to use the row ID for WS as this value cannot be changed). What kind of linking problem did your people face by using row ID?
If you are looking for some other unique single-value identifier, I don't think any other than row ID, integration ID and external unique ID exists. Else you need to go with the “On Demand predefined fields” option (First Name, Last Name, Email, Work Phone # for contact records).

  • How to have a unique requestId for a Page for multiple users

    Hi,
Can someone please tell me how to get a unique RequestId whenever a page is visited. I need a unique RequestId generated for each user when they are on certain specific
pages. Whenever a user refreshes a page or comes back to it by clicking the back button, I want a unique requestId as well.
Can I use sessionRequestId or something on the HttpServletRequest object that always gives me a unique ID?
    Thanks

    Unique for how long?
    Eternally unique ?
    unique until the server restarts?
The most simplistic approach would be to have a static/singleton counter. It wouldn't work perfectly on high-volume sites, but should work well enough for smaller ones. Including a timestamp in the key makes it fairly unique.
    This would handle both initial requests and refreshes of the page.
    There is nothing you can do about users hitting the back button. If the page gets served from the client cache, the server knows nothing about it, and can not issue a unique id.
    Why the requirement? What are you trying to accomplish by doing this?
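The counter-plus-timestamp idea described above can be sketched as follows (a minimal sketch; the class and method names are illustrative, not from the thread):

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the singleton-counter approach: a timestamp plus an
// incrementing counter gives IDs that are unique within one server
// instance, and "fairly unique" across restarts thanks to the timestamp.
public final class RequestIdGenerator {
    private static final AtomicLong COUNTER = new AtomicLong();

    private RequestIdGenerator() {}

    public static String nextId() {
        // the counter disambiguates requests that arrive in the same millisecond
        return System.currentTimeMillis() + "-" + COUNTER.incrementAndGet();
    }
}
```

As the reply notes, this only covers server-side requests; a page served from the client cache never reaches the generator.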

  • Entity bean with another class as primary key generator class

    Hi All,
I have a CMP entity bean in which I generate the primary key using my own class, known as a unique ID generator.
I use it in the ejbCreate method in the following manner:
public Long ejbCreate(HospitalData hospitalData) throws CreateException {
          Long myid = new Long(UniqueIdGenerator.getId());
          System.out.println("My id generated ====== " + myid);
          this.hospitalid = myid;
          System.out.println("My id generated ====== " + this.hospitalid);
          System.out.println("Came in ejbCreate method of HospitalEJB");
          this.hospitalname = hospitalData.getHospitalname();
          return null; // CMP entity beans return null from ejbCreate
}
Can you tell me how to map this primary key in my ejb-jar.xml and jbosscmp-jdbc.xml?
Any help would be appreciated.
    Thanks
    Sameer

    "Bhamu" <[email protected]> wrote in message
    news:9keuo4$[email protected]..
I am trying to develop an entity bean with a composite key using a Primary Key Class. When I use the findByPrimaryKey
method of the bean it throws an exception as follows:
at com.netscape.server.ejb.SQLPersistenceManager.find(Unknown Source)
I notice that you are using the reference CMP plugin. I'm also willing to bet that you may have created your own primary key class
with a single field, rather than using the field's type as a primitive
primary key class. If you do this, then SQLPersistenceManager will break.
    It is badly written, and has some stupid assumptions. The one I remember
    the most is that if you only have one primary key field, then
    SQLPersistenceManager assumes you have used a primitive type such as
    java.lang.String or java.lang.Integer, to represent it, rather than creating
your own <Entity>Pk.java file.
    SQLPersistenceManager works for toy examples, but in general I would say it
    is broken and unusable. Either use BMP, or splash out the money for Coco
    Base from Thought Inc. Currently the only CMP plugin for iPlanet App server,
    other than the reference implementation, that I know of.

  • How to return the newly generated sequence id for an INSERT statement

A record is to be inserted into a table with a sequence for the primary key. The newly inserted sequence value is to be returned on successful insertion. Is it possible to do all this in a single statement (say executeUpdate or any other) using java.sql.*?
E.g.: A student record is to be inserted into the STUDENT table. There is a sequence (named Student_ID_SEQ) on the primary key Student_ID. Student_ID_SEQ.nextval will generate the new sequence ID, which will be provided as input to the SQL statement (say statement.executeUpdate) along with the other student attribute values. On insertion the created sequence ID should be returned, and all this should happen in a single statement (a single call to the database). Stored procedures can accomplish this, but is this feasible without the use of stored procedures?
    Thanks.

a better approach is to generate the auto key on the database side, not on the application side.
That's his problem - since the database is supplying the key for the new record, his application which executed the SQL has no way to identify the record that was just added. I just create the key on the app server and accept the likelihood of overlap (which is extremely small).
    Here is a more technical explanation:
Table Person {
   ID,
   Name,
   Phone Number,
   Age
}
The field ID is an autonumber, and all other fields are not unique.
    Now, when this code executes:
PreparedStatement pst = conn.prepareStatement("Insert Into Person (Name, Phone Number, Age) Values (?, ?, ?)");
pst.setString(1, "John");
pst.setString(2, "405-444-5555");
pst.setInt(3, 44);
pst.executeUpdate();
How can the app determine the ID of the person just added, since no query is possible which is guaranteed to select just the record that was inserted?
Since I am generally against Stored Procedures, I would develop a way to ensure that your keys were unique and generate them inside the app server.
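A middle ground worth noting: JDBC 3.0's getGeneratedKeys lets the database generate the key while the driver hands it back on the same insert. Here is a hedged sketch against the hypothetical Person table above (the class name is invented for illustration, the column is renamed PhoneNumber since SQL identifiers can't contain bare spaces, and driver support varies):

```java
import java.sql.*;

// Sketch: insert a row and retrieve the DB-generated key in one round trip
// via JDBC's getGeneratedKeys (JDBC 3.0+). Names are illustrative.
public class InsertWithGeneratedKey {
    public static long insertPerson(Connection conn, String name,
                                    String phone, int age) throws SQLException {
        String sql = "INSERT INTO Person (Name, PhoneNumber, Age) VALUES (?, ?, ?)";
        try (PreparedStatement pst =
                 conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            pst.setString(1, name);
            pst.setString(2, phone);
            pst.setInt(3, age);
            pst.executeUpdate();
            try (ResultSet keys = pst.getGeneratedKeys()) {
                if (keys.next()) {
                    return keys.getLong(1); // the auto-generated ID
                }
                throw new SQLException("No generated key returned");
            }
        }
    }
}
```

On Oracle specifically, an `INSERT ... RETURNING` clause is another single-call option, but getGeneratedKeys is the portable java.sql route.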

  • Unique records in ODS

    Hello
I put all of the InfoObjects available in the InfoSource and my ODS still aggregates the data. How do I make the records unique for the ODS? I mean, is there any possibility to add a unique identifier, generated/populated by the Update Rules?
    Kooyot.

Hi Kooyot,
Under ODS settings you have a checkbox for Unique records.
Enable that checkbox and check whether it brings in unique records or not.
If this doesn't work, you have to go for a unique identifier in the Update rules.
Regards,
Vishwa.

  • Javascript API and server-generated URLs

    I'm creating a Shopify site for a client and part of the site is a complex Adobe Edge Animate animation. Shopify uses server-generated paths to files. My animation uses images and audio.
It looks like the animation takes a base path set in the Publish settings, then sets everything else itself in the JavaScript runtime and associated files. With Shopify, I end up with image URLs like this one: "cdn//path//image.gif?234324". I don't have an absolute path with an absolute file name.
    Does anyone know of a way I can put these specific unique server-generated paths to the files needed for my animation in the files for my animation? Basically, can I customize and control the entire URL to all the files needed for my animation?
    I see how I can put any URL for assets in the edge.js file. My bigger problem, I think, is being able to customize the URLs for the edge.js and edgeActions.js files.
    If anyone has any ideas, I'd really, really appreciate it!

    I am getting close on this!  I'm hoping someone can help me figure out the last step.
    I am using an iframe and sandbox method to load the swf video player like this:
    <iframe id="playerFrame" src="player.html" sandboxRoot="http://localhost/air/" documentRoot="app:/"></iframe>
    So now the JavaScript API for the player actually works in player.html.   The BIG problem is that the videos are downloaded to the app-storage directory.  Well, based on AIR security, the iframe (non-application sandbox) content cannot access the application storage directory.  So now only videos that are in the app:/ location will load.  Unfortunately, this application downloads video from a central server and places them in the application storage directory.  Security also will not allow me to download to the app directory.  So I am in an endless circle!
    I can't use the API if I am in the application sandbox because there is no domain (sandboxRoot) available.
    The API works in the non-application sandbox, but I can't access the downloadable content.
    What am I missing here?  Surely, people have needed to interact with a SWF file using JavaScript and load dynamic content at the same time.

  • Custom Key JPA Generator

    Hi all,
    Using CE 7.1 is possible to define a GENERATOR for a Custom Key on CAF Business Object?
    Best regards
    Isaías Barroso

    Hi,
    CAF does not have any key generator.
    What you can do is, you can create a utility DC of java type. Create a Key Generator class that uses SAP's unique ID generator classes.
    com.sap.guid.GUIDGeneratorFactory
    com.sap.guid.IGUIDGenerator
    Method to get the key will look like
public String getUUID() {
        return this.generator.createGUID().toHexString();
}
Now use this class in your CAF project to generate unique keys.
    Hope this helps,
    Ashutosh
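If SAP's GUID classes aren't available, the standard library's java.util.UUID is a portable stand-in for a hex key generator. This is an illustrative sketch, not the com.sap.guid API:

```java
import java.util.UUID;

// Portable alternative to SAP's GUID generator classes: java.util.UUID
// produces a random 128-bit identifier. Stripping the dashes yields a
// 32-character hex string, similar in shape to toHexString() above.
public final class KeyGenerator {
    private KeyGenerator() {}

    public static String nextKey() {
        return UUID.randomUUID().toString().replace("-", "");
    }
}
```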

  • In the privacy policy, it states that percentages and durations of books read are being collected to ensure that publishers can have a metered price model, prices depending on how the book was read. Give me an example of a company with such a price model?

In your privacy policy, you state that the percentages and durations of books read are being collected to ensure that publishers can choose a metered price model, with prices depending on the duration for which the book was read.
Can you give me an example of a company with such a price model? Is the information being collected even where the companies have not asked for it, even when metered price models are not being used?
Here is an extract from the privacy policy:
    What information does Adobe Digital Editions collect and how is it used?
    The following information may be collected when an eBook with DRM is opened in Adobe Digital Editions software. If an eBook does not have any DRM associated with it, then no information is collected.
    User GUID: The User GUID is a unique value assigned in place of your User ID and is used to authenticate you.
    Device GUID: The Device GUID is a unique value generated to identify your device. It is used to ensure that the eBook may be viewed on your device and that the number of devices permitted by the license is not exceeded.
    Certified App ID: This ID represents the application that is being used to view the eBook, in this case Adobe Digital Editions. It is necessary to ensure that only a certified application may display an eBook. This helps to minimize piracy and theft of eBooks.
    Device IP (Internet Protocol): This identifies the country you are located in when you purchase an eBook.  It is used by eBook providers for the enablement of localized pricing models. Only the country identifier of the Device IP is stored.
    Duration for Which the Book was Read: This information may be collected to facilitate limited or metered pricing models entered into between eBook providers, such as publishers and distributors. These models are based on how long a reader has read an eBook. For example, you may borrow an eBook for a period of 30 days. While some publishers and distributors may charge libraries and resellers for 30 days from the date of the download, others may follow a metered pricing model and charge them for the actual time you read the eBook.
    Percentage of the eBook Read: The percentage of the eBook read may be collected to allow eBook providers such as publishers to implement subscription pricing models where they charge based on the percentage of the eBook read.
    Information provided by eBook providers relating to the eBook you have purchased: The following information is provided by the eBook provider to enable the delivery of the eBook to your device:Date of eBook purchase/download
    Distributor ID and Adobe Content Server Operator URL
    Metadata of the eBook, such as title, author, language, publisher list price, ISBN number
    How is the information transmitted?
    The data is sent periodically to Adobe via a secure transmission using HTTPS.
    How is the information used?
    Adobe uses the information collected about the eBook you have opened in Adobe Digital Editions software to ensure it is being viewed in accordance with the type of DRM license that accompanies that eBook. The type of license is determined by the eBook provider. For more information on how each piece of data is used, please see above.


  • Error with Links if using x3 primary keys

    Hi Folks:
    Here is the error code I'm receiving:
    ORA-01422: exact fetch returns more than requested number of rows
    Unable to fetch row.
    Background: I am using Application Express 3.2
    All of the pages I have created that rely on x2 primary keys (first_name, last_name) work fine.
    I have a table that has x3 primary keys: Table is called "time_off_awards". The x3 primary keys are: last_name, first_name, approval_date.
    I created a report that works properly and lists the awards for each person (each person can receive more than one award-on different dates).
    I also created an edit link that works properly IF *(only if)* each individual has only one award. If the individual has more than one award then I get the error above. When I set up the link I used all three keys. The x3 PK's should uniquely identify each row, but if the same last name/first name appear more than once (that is if the person has more than one award) I get the error. I thought at first maybe it was just not reading the 3rd key/part of the link (approval_date), but it shows properly if you move your cursor over the edit link.
    Here is a link to a screen pic of how I have my link set in Apex:
    [http://www.wczone.com/link_settings.gif]
    Here is a link to a pic of the report with some info:
    [http://www.wczone.com/report_link.gif]
    If needed, here is my table info:
    CREATE TABLE PERSONNEL.TIME_OFF_AWARDS (
    LAST_NAME VARCHAR2(40) NOT NULL,
    FIRST_NAME VARCHAR2(25) NOT NULL,
    APPROVAL_DATE DATE NOT NULL,
    HOURS_OFF NUMBER(3),
    CITATION VARCHAR2(1500),
    /* Keys */
    PRIMARY KEY (LAST_NAME, FIRST_NAME, APPROVAL_DATE),
    /* Foreign keys */
    CONSTRAINT TOA_PERSONNEL
    FOREIGN KEY (LAST_NAME, FIRST_NAME)
REFERENCES PERSONNEL.MARC_PERSONNEL(LAST_NAME, FIRST_NAME))
TABLESPACE PERSONNEL;
    Thanks for any help, I've tried looking at a couple of Apex books, but they didn't help much.
    Matt
    Edited by: user10495310 on Mar 4, 2009 8:21 AM

    Thank you everyone for the help and information you gave to me.
    Your ideas and advice helped me to think through the issues involved.
The way I actually found to work around this issue was a little different.
What I did was the following (which may only be usable with empty tables; if it's possible to create a new column with a sequence and trigger on a table that already contains data, it should work there also):
    1. I removed the current PK's that were currently set.
    2. I added a single, unique PK (that used a sequence and trigger to automatically increment) to the table as was suggested in this thread and other APEX forum threads.
    3. I changed the link on the report so that it used the new PK, and also changed the PK used on the forms (under Processes - both the page rendering and page processing processes).
    The Difference:
4. Next I changed the table (not by using APEX, but directly) from using the automatically generated ID as the PK back to using the compound PK (x3 keys). I then added a constraint to make sure that the automatically generated column was unique. So now I have the compound PK that my supervisor wants us to use, and I'm able to use a unique, automatically generated key for APEX to use.
    I found also that if you already have a column that uses a unique/auto-generated key you can still use it with APEX without switching keys around.
1. I added the new column to the SQL in the report's source section so that the new column was searched (and then used 'hidden' so it wouldn't be displayed on the report users would see).
2. You can still add the unique key under the processes on the form that is being linked to, under the Primary Key tabs. If it's not a PK it won't show up in the pop-up to the right of "Item Containing Primary Key Column Value", but it can be entered manually (i.e. p23_AUTO_ID) and it will work fine. You would also need to edit your form so that the auto ID being passed from the report is part of the form (but hidden if desired).

  • When Retention policy is set as REDUNDANCY

    DB version: 10.2.0.4
    Just trying to understand the concept of Redundancy.
    If i set my Retention Policy set to REDUNDANCY like
CONFIGURE RETENTION POLICY TO REDUNDANCY 4;
and all the backup files are stored in one location like '/u04/rmanbkp/', then multiple copies (4 of them) of the same datafile with different names (unique names generated using the %U setting) will be created here. Right?
The next day all these datafile copies will be obsolete. Right?
A Recovery Window retention policy is more widely used than Redundancy. Right?

"on the 5th day, the first (out of 4 copies) copy of backup becomes obsolete" - provided that you do exactly one backup every day. If you do 6 backups a day -- e.g. every 4 hours -- you'd be obsoleting a backup less than 24 hours old!
So, if you are a DBA new to a site, first ask: "How many backups do we do? What is the RETENTION policy?"
    Hemant K Chitale

  • How to uncheck "Add as a new version to existing files" inside the "Add a document" dialog

    I am working on a publishing site collection using the enterprise wiki template. Currently when users want to insert an image inside the rich text editor, they will be prompted with the following:-
And if the user inserts a picture that already exists, it will replace the existing one, which might cause the picture to be displayed inside a Wiki page it does not belong to!
    So is there a way to do any of the following:-
    Give the new picture a unique auto generated name?
    To un-check the   “Add as a new version to existing files” by default?
    Or to always prevent replacing images, as this can cause many conflicts !!
Can anyone advise, please?

You can do it using the below script:
http://webcache.googleusercontent.com/search?q=cache:51tmEQHanZoJ:vegardstromsoy.blogspot.com/2011/05/jquery-to-override-sharepoint-ootb.html+&cd=4&hl=en&ct=clnk&gl=in
$(document).ready(function() {
    if (document.title == "Upload Document") {
        $("input[id$='OverwriteSingle']").attr("checked", false);
        $("input[id$='OverwriteMultiple']").attr("checked", false);
    }
});
http://hansiandy.wordpress.com/2010/10/19/sharepoint-20072010-tips-uncheck-add-as-a-new-version-to-existing-files-checkbox-on-upload-aspx-in-moss-2007sharepoint-2010/
    If this helped you resolve your issue, please mark it Answered
but where should I add the following jQuery:
$(document).ready(function() {
    if (document.title == "Upload Document") {
        $("input[id$='OverwriteSingle']").attr("checked", false);
        $("input[id$='OverwriteMultiple']").attr("checked", false);
    }
});
inside the master page or inside the upload.aspx page?

  • My ugly statspack

    Hi there, if you have time - please have a look at my stats pack below and let me know any comments you have.
    I have spotted the following,
There is a huge amount of rolling back: 72%!
I can't do anything about this as it's the front-end application. I have brought this up with the vendor.
The hard parsing is too high:
15% of parses seem to be hard parses.
The Execute to Parse value is negative:
SQL statements are being parsed but not executed (Execute to Parse % = 100 × (1 − parses/executes); here 100 × (1 − 89.38/71.35) ≈ −25.3, matching the report).
The soft parse value is way too low. It should be as close to 100 as possible.
db file sequential read is very high/expensive. Why would this be?
    DB Name DB Id Instance Inst Num Release OPS Host
    P04 1508017556 p04 1 8.1.7.2.0 NO his
    Snap Id Snap Time Sessions
    Begin Snap: 343 11-Feb-05 11:04:46 1,255
    End Snap: 351 11-Feb-05 11:21:28 1,255
    Elapsed: 16.70 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 4000 log_buffer: 163840
    db_block_size: 4096 shared_pool_size: 105M
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 5,144.91 4,566.16
    Logical reads: 26,851.31 23,830.84
    Block changes: 36.34 32.25
    Physical reads: 1,179.03 1,046.40
    Physical writes: 40.40 35.86
    User calls: 141.68 125.74
    Parses: 89.38 79.33
    Hard parses: 17.61 15.63
    Sorts: 16.87 14.97
    Logons: 1.24 1.10
    Executes: 71.35 63.32
    Transactions: 1.13
    % Blocks changed per Read: 0.14 Recursive Call %: 68.80
    Rollback per transaction %: 72.90 Rows per Sort: 45.03
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 95.61 In-memory Sort %: 99.53
    Library Hit %: 92.06 Soft Parse %: 80.30
    Execute to Parse %: -25.28 Latch Hit %: 99.95
    Parse CPU to Parse Elapsd %: % Non-Parse CPU:
    Shared Pool Statistics Begin End
    Memory Usage %: 75.93 77.33
    % SQL with executions>1: 75.74 74.74
    % Memory for SQL w/exec>1: 50.11 54.41
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~ Wait % Total
    Event Waits Time (cs) Wt Time
    db file sequential read 493,594 0 .00
    db file scattered read 27,151 0 .00
    latch free 13,607 0 .00
    SQL*Net more data to client 10,357 0 .00
    direct path read 7,635 0 .00
    Wait Events for DB: P04 Instance: p04 Snaps: 343 -351
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (cs) (ms) /txn
    db file sequential read 493,594 0 0 0 437.2
    db file scattered read 27,151 0 0 0 24.0
    latch free 13,607 8,938 0 0 12.1
    SQL*Net more data to client 10,357 0 0 0 9.2
    direct path read 7,635 0 0 0 6.8
    file open 3,339 0 0 0 3.0
    direct path write 1,397 0 0 0 1.2
    log file parallel write 838 0 0 0 0.7
    db file parallel write 651 0 0 0 0.6
    log file sync 600 1 0 0 0.5
    control file parallel write 324 0 0 0 0.3
    control file sequential read 141 0 0 0 0.1
    buffer busy waits 47 0 0 0 0.0
    SQL*Net break/reset to clien 38 0 0 0 0.0
    refresh controlfile command 14 0 0 0 0.0
    enqueue 13 0 0 0 0.0
    LGWR wait for redo copy 7 4 0 0 0.0
    SQL*Net message to client 134,065 0 0 0 118.7
    SQL*Net message from client 134,020 0 0 0 118.7
    SQL*Net more data from clien 1,891 0 0 0 1.7
    Background Wait Events for DB: P04 Instance: p04 Snaps: 343 -351
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (cs) (ms) /txn
    log file parallel write 837 0 0 0 0.7
    db file parallel write 651 0 0 0 0.6
    control file parallel write 324 0 0 0 0.3
    db file scattered read 76 0 0 0 0.1
    db file sequential read 49 0 0 0 0.0
    control file sequential read 42 0 0 0 0.0
    latch free 39 39 0 0 0.0
    LGWR wait for redo copy 7 4 0 0 0.0
    rdbms ipc message 5,154 953 0 0 4.6
    pmon timer 399 308 0 0 0.4
    smon timer 4 4 0 0 0.0
    SQL ordered by Gets for DB: P04 Instance: p04 Snaps: 343 -351
    -> End Buffer Gets Threshold: 10000
    -> Note that resources reported for PL/SQL includes the resources used by
    all SQL statements called within the PL/SQL code. As individual SQL
    statements are also reported, it is possible and valid for the summed
    total % to exceed 100
    Buffer Gets Executions Gets per Exec % Total Hash Value
    7,478,028 1 7,478,028.0 27.8 2174624692
    SELECT distinct "RESOURCE_USAGE"."CRN", "RESOURCE
    _USAGE"."DATE_MOVED_IN", "RESOURCE_USAGE"."AGRMNT_T
    YPE_CODE", "HOMELESS_CASE"."CASE_STATUS",
    "DECISION"."DATE_OF_DECISION", "DECISION"."DECI
    SION_CODE", "PERSON"."TITLE", "PERSON"
    777,467 1 777,467.0 2.9 2743905745
    SELECT TRIGGER_TABLE.NUM1
    , ltrim(rtrim(to_char(TRIGGER_TABLE
    .NUM2, '99,990.00')))
    , RNT_PATCH.ARREARS_OFF_NAME
    , ltrim(rtr
    im(initcap(PERSON.TITLE) || ' ' || initcap(PERSON.FORENAME) || '
    ' || initcap(PERSON.MIDDLE_INIT) || ' ' || initcap(PERSON.PERSO
    N_SURNAME))) as person_name
    ,RNT_PATCH.ARREARS_OFF_NAME
    ,RNT_P
    703,637 1 703,637.0 2.6 1491351501
    select a.tablespace_name, a.bytes "total", (a.bytes - nvl
    (b.free,0)) "used", nvl(b.free,0) "free", round(nv
    l(b.free,0)/a.bytes*100) "%free" from (select sum(bytes) bytes,
    tablespace_name from dba_data_files group by tablespace_name) a
    , (select sum(bytes) free, tablespace_name from dba_free_space
    575,166 1 575,166.0 2.1 3381284524
    SELECT DISTINCT "BULL"."RNT_ACCOUNT"."ACCOUNT_NO",
    "BULL"."RNT_PROPERTY"."A_B_WEEK", "BULL"."RNT_PRO
    PERTY"."AREA_CODE", "BULL"."RNT_PROPERTY"."PATCH_CO
    DE", "BULL"."PROPERTY"."PROP_SUB_NUM",
    "BULL"."PROPERTY"."PROP_NAME", "BULL"."RNT_AREA"."
    236,552 47 5,033.0 0.9 1785190534
    SELECT count(*),max(wl_entry.total_points)
    FROM shortlist_index
    shortlist_type,
    wl_entry,
    wl_entry_status
    WHERE shor
    tlist_index.area=:sArea
    AND shortlist_index.bedsize=:sSize
    A
    ND nvl(shortlist_index.dwelling_type_code,'X')=nvl(NULL,'X')
    A
    ND shortlist_type.shortlist=:sSLCode
    AND shortlist_index.wl_co
    116,045 1 116,045.0 0.4 2415945105
    BEGIN STATSPACK.SNAP(i_snap_level=>5, i_modify_parameter=>'true'
    ); END;
    110,614 1 110,614.0 0.4 625421128
    INSERT INTO STATS$SQLTEXT ( HASH_VALUE,TEXT_SUBSET,PIECE,SQL_TEX
    T,ADDRESS,COMMAND_TYPE,LAST_SNAP_ID ) SELECT ST1.HASH_VALUE,SS.
    TEXT_SUBSET,ST1.PIECE,ST1.SQL_TEXT,ST1.ADDRESS,ST1.COMMAND_TYPE,
    SS.SNAP_ID FROM V$SQLTEXT ST1,STATS$SQL_SUMMARY SS WHERE SS.S
    NAP_ID = :b1 AND SS.DBID = :b2 AND SS.INSTANCE_NUMBER = :b3 A
    73,900 1,221 60.5 0.3 3013728279
    select privilege#,level from sysauth$ connect by grantee#=prior
    privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
    1) and privilege#>0
    SQL ordered by Gets for DB: P04 Instance: p04 Snaps: 343 -351
    -> End Buffer Gets Threshold: 10000
    -> Note that resources reported for PL/SQL includes the resources used by
    all SQL statements called within the PL/SQL code. As individual SQL
    statements are also reported, it is possible and valid for the summed
    total % to exceed 100
    Buffer Gets Executions Gets per Exec % Total Hash Value
    69,636 3 23,212.0 0.3 3928236554
    SELECT "REP_ADHOC_CONTRACTS"."CONTRACT_REFERENCE" ,
    "REP_ADHOC_CONTRACTS"."CONTRACT_DESCRIPTION" , "REP_A
    DHOC_CONTRACTS"."MSR_NUMBER" , "REP_ADHOC_CONTRACTS"."
    MSR_VERSION" , "REP_ADHOC_CONTRACTS"."REPAIRS_AREA_COD
    E" , "REP_ADHOC_CONTRACTS"."DEFAULT_PRIORITY_CODE" ,
    53,446 144 371.2 0.2 1242614849
    SELECT "USER_DETAILS"."USER_ID", "USER_DETAILS"."USER_NAME" FROM
    "USER_DETAILS" ORDER BY "USER_DETAILS"."USER_ID"
    40,560 25 1,622.4 0.2 3416838880
    SELECT distinct(REP_NONADHOC_JOBS_VIEW.CONTRACT_REFERENCE),
    REP_NONADHOC_JOBS_VIEW.CONTRACT_DESCRIPTION,
    REP_NONADHOC_JOBS_VIEW.PRIORITY_CODE,
    REP_NONA
    DHOC_JOBS_VIEW.CONTRACT_STATUS,
    REP_NONADHOC_JOBS_V
    IEW.CONTRACT_STATUS_DATE,
    REP_NONADHOC_JOBS_VIEW.PL
    39,684 1 39,684.0 0.1 847511376
    SELECT "LSC_CH_UNIT"."PARENT_HUN",
    "LSC_CH_UNIT"."HUN",
    "LSC_CH_UNIT"."UPRN" ,
    "LSC_CH_UNIT"."HIERARCHY_UNIT_COD
    E",
    "LSC_CH_UNIT"."DATETIME_CREATED",
    "LSC_CH_UNIT"."CREATED
    _BY",
    "LSC_CH_UNIT"."MGMT_UNIT_NAME" ,
    substr(DECODE(p.PRO
    P_SUB_NUM,NULL, '', p.PROP_SUB_NUM || ', ') ||
    DECODE(p.
    39,577 1 39,577.0 0.1 2389167611
    SELECT "LSC_CH_UNIT"."PARENT_HUN",
    "LSC_CH_UNIT"."HUN",
    "LSC_CH_UNIT"."UPRN" ,
    "LSC_CH_UNIT"."HIERARCHY_UNIT_COD
    E",
    "LSC_CH_UNIT"."DATETIME_CREATED",
    "LSC_CH_UNIT"."CREATED
    _BY",
    "LSC_CH_UNIT"."MGMT_UNIT_NAME" ,
    substr(DECODE(p.PRO
    P_SUB_NUM,NULL, '', p.PROP_SUB_NUM || ', ') ||
    DECODE(p.
    37,699 3 12,566.3 0.1 3061673399
    SELECT distinct(REP_NONADHOC_JOBS_VIEW.CONTRACT_REFERENCE),
    REP_NONADHOC_JOBS_VIEW.CONTRACT_DESCRIPTION,
    REP_NONADHOC_JOBS_VIEW.PRIORITY_CODE,
    REP_NONA
    DHOC_JOBS_VIEW.CONTRACT_STATUS,
    REP_NONADHOC_JOBS_V
    IEW.CONTRACT_STATUS_DATE,
    REP_NONADHOC_JOBS_VIEW.PL
    34,680 2 17,340.0 0.1 3015476189
    SELECT distinct(REP_NONADHOC_JOBS_VIEW.CONTRACT_REFERENCE),
    REP_NONADHOC_JOBS_VIEW.CONTRACT_DESCRIPTION,
    SQL ordered by Reads for DB: P04 Instance: p04 Snaps: 343 -351
    -> End Disk Reads Threshold: 1000
    Physical Reads Executions Reads per Exec % Total Hash Value
    145,750 1 145,750.0 12.3 2174624692
    SELECT distinct "RESOURCE_USAGE"."CRN", "RESOURCE
    _USAGE"."DATE_MOVED_IN", "RESOURCE_USAGE"."AGRMNT_T
    YPE_CODE", "HOMELESS_CASE"."CASE_STATUS",
    "DECISION"."DATE_OF_DECISION", "DECISION"."DECI
    SION_CODE", "PERSON"."TITLE", "PERSON"
    22,181 1 22,181.0 1.9 3758504226
    select p1.pin , p1.title , initcap(p1.forename) , p1.middle_ini
    t , initcap(p1.person_surname) , p1.date_of_birth ,' ' address ,
    p1.disabled from person p1 where not exists ( select 1 fro
    m rnt_occupants o where o.pin = p1.pin ) AND P1.PERSON_SURNAME
    LIKE '%%' AND TO_CHAR(P1.DATE_OF_BIRTH,'dd/mm/yyyy') = '22/02/
    21,591 1 21,591.0 1.8 134893840
    select 0 as logid ,'ohms_full_lsc_account' as thetable, count(*
    ) as sourcecount from lsc_account
    21,586 1 21,586.0 1.8 1271590434
    select 0 as logid ,'ohms_full_lsc_account' as thetable, count(*
    ) as sourcecount from lsc_account charge_group_est
    17,514 47 372.6 1.5 1785190534
    SELECT count(*),max(wl_entry.total_points)
    FROM shortlist_index
    shortlist_type,
    wl_entry,
    wl_entry_status
    WHERE shor
    tlist_index.area=:sArea
    AND shortlist_index.bedsize=:sSize
    A
    ND nvl(shortlist_index.dwelling_type_code,'X')=nvl(NULL,'X')
    A
    ND shortlist_type.shortlist=:sSLCode
    AND shortlist_index.wl_co
    13,470 1 13,470.0 1.1 2415945105
    BEGIN STATSPACK.SNAP(i_snap_level=>5, i_modify_parameter=>'true'
    ); END;
    10,259 1 10,259.0 0.9 3381284524
    SELECT DISTINCT "BULL"."RNT_ACCOUNT"."ACCOUNT_NO",
    "BULL"."RNT_PROPERTY"."A_B_WEEK", "BULL"."RNT_PRO
    PERTY"."AREA_CODE", "BULL"."RNT_PROPERTY"."PATCH_CO
    DE", "BULL"."PROPERTY"."PROP_SUB_NUM",
    "BULL"."PROPERTY"."PROP_NAME", "BULL"."RNT_AREA"."
    10,145 1 10,145.0 0.9 1078526263
    select * from "BULL"."LSC_CH_UNIT_INHERIT" where datetime_creat
    ed > (sysdate - 32)
    9,406 1 9,406.0 0.8 734206759
    select 0 as logid ,'ohms_inc_lsc_ch_unit_inherit' as thetable,
    count(*) as sourcecount from lsc_ch_unit_inherit
    9,404 105 89.6 0.8 3201672093
    SELECT count ( *) FROM RNT_PROPERTY
    7,053 1 7,053.0 0.6 3337740287
    INSERT INTO STATS$SQL_STATISTICS ( SNAP_ID,DBID,INSTANCE_NUMBER,
    TOTAL_SQL,TOTAL_SQL_MEM,SINGLE_USE_SQL,SINGLE_USE_SQL_MEM ) SEL
    ECT :b1,:b2,:b3,COUNT(1),SUM(SHARABLE_MEM),SUM(DECODE(EXECUTIONS
    ,1,1,0)),SUM(DECODE(EXECUTIONS,1,SHARABLE_MEM,0)) FROM V$SQLXS
    5,410 1 5,410.0 0.5 3436085714
    SELECT "PROPERTY"."UPRN",
    decode ( "PROPERTY"."P
    ROP_SUB_NUM" ,
    null ,'', "PROPERTY"."PROP_SUB_NUM" || ',' ) ||
    decode ( prop_name , null , '', prop_name || ',') ||
    decode
    ( prop_num , null , '', prop_num || ',') ||
    decode ( property.
    street_name , null , '', property.street_name || ',') ||
    decod
    5,253 1 5,253.0 0.4 3874720143
    INSERT INTO STATS$SQL_SUMMARY ( SNAP_ID,DBID,INSTANCE_NUMBER,TEX
    T_SUBSET,SHARABLE_MEM,SORTS,MODULE,LOADED_VERSIONS,EXECUTIONS,LO
    ADS,INVALIDATIONS,PARSE_CALLS,DISK_READS,BUFFER_GETS,ROWS_PROCES
    SED,ADDRESS,HASH_VALUE,VERSION_COUNT ) SELECT :b1,:b2,:b3,SUBST
    R(SQL_TEXT,1,31),SHARABLE_MEM,SORTS,MODULE,LOADED_VERSIONS,EXECU
    4,915 1 4,915.0 0.4 1152921379
    SELECT "LSC_CH_UNIT"."UPRN",
    Replace( Replace( RT
    RIM( LTRIM( ( NVL ("PROPERTY"."PROP_SUB_NUM" ,
    '') || ', '
    || NVL ("PROPERTY"."PROP_NAME" , '') || ', ' ||
    NVL ( "PROP
    ERTY"."PROP_NUM" , '') || ', ' ||
    NVL ("PROPERTY"."STREET_N
    AME" , '') || ', ' ||
    NVL ("PROPERTY"."ADDR_LINE2" , '') ||
    4,915 1 4,915.0 0.4 4174133807
    SELECT "PROPERTY"."UPRN",
    decode ( "PROPERTY"."P
    ROP_SUB_NUM" ,
    null ,'', "PROPERTY"."PROP_SUB_NUM" || ',' ) ||
    decode ( prop_name , null , '', prop_name || ',') ||
    decode
    ( prop_num , null , '', prop_num || ',') ||
    decode ( property.
    street_name , null , '', property.street_name || ',') ||
    decod
    4,913 1 4,913.0 0.4 3449421355
    SELECT "LSC_CH_UNIT"."UPRN",
    Replace( Replace( RT
    RIM( LTRIM( ( NVL ("PROPERTY"."PROP_SUB_NUM" ,
    '') || ', '
    || NVL ("PROPERTY"."PROP_NAME" , '') || ', ' ||
    NVL ( "PROP
    ERTY"."PROP_NUM" , '') || ', ' ||
    NVL ("PROPERTY"."STREET_N
    AME" , '') || ', ' ||
    NVL ("PROPERTY"."ADDR_LINE2" , '') ||
    4,883 1 4,883.0 0.4 2265599884
    SELECT "LSC_CH_UNIT"."UPRN",
    Replace( Replace( RT
    RIM( LTRIM( ( NVL ("PROPERTY"."PROP_SUB_NUM" ,
    '') || ', '
    || NVL ("PROPERTY"."PROP_NAME" , '') || ', ' ||
    NVL ( "PROP
    SQL ordered by Executions for DB: P04 Instance: p04 Snaps: 343 -351
    -> End Executions Threshold: 100
    Executions Rows Processed Rows per Exec Hash Value
    4,214 441 0.1 955191413
    select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$,
    spare1 from obj$ where owner#=:1 and name=:2 and namespace=:3 an
    d(remoteowner=:4 or remoteowner is null and :4 is null)and(linkn
    ame=:5 or linkname is null and :5 is null)and(subname=:6 or subn
    ame is null and :6 is null)
    4,032 4,032 1.0 2091761008
    select condition from cdef$ where rowid=:1
    2,728 2,974 1.1 2085632044
    select intcol#,nvl(pos#,0),col# from ccol$ where con#=:1
    1,351 1,007 0.7 2024737912
    SELECT TI_ACCOUNT_NO FROM RNT_TI_ACCOUNT WHERE ACCOUNT_NO = :
    b1
    1,328 1,639 1.2 117117207
    SELECT P.TITLE || ' ' || P.FORENAME || ' ' || RTRIM(P.PERSON_
    SURNAME) FULL_NAME FROM PERSON P,RNT_OCCUPANTS O WHERE O.UPR
    N = :b1 AND O.PIN = P.PIN AND O.PARTY = 'Y' AND O.END_DATE_OF
    _OCCUPANCY IS NULL ORDER BY DECODE(UPPER(P.TITLE),'MR',1,2)
    1,221 19,964 16.4 3013728279
    select privilege#,level from sysauth$ connect by grantee#=prior
    privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
    1) and privilege#>0
    1,193 881 0.7 1121470926
    SELECT * FROM SYS.SESSION_ROLES WHERE ROLE = 'DBA'
    1,146 0 0.0 4032977774
    ALTER SESSION SET NLS_LANGUAGE= 'ENGLISH' NLS_TERRITORY= 'UNITED
    KINGDOM' NLS_CURRENCY= '£' NLS_ISO_CURRENCY= 'UNITED KINGDOM' N
    LS_NUMERIC_CHARACTERS= '.,' NLS_CALENDAR= 'GREGORIAN' NLS_DATE_F
    ORMAT= 'DD-MON-RR' NLS_DATE_LANGUAGE= 'ENGLISH' NLS_SORT= 'BINA
    RY' TIME_ZONE= '+00:00' NLS_DUAL_CURRENCY = '¿' NLS_TIME_FORMAT
    993 993 1.0 1645188330
    SELECT SYSDATE FROM DUAL
    987 987 1.0 214061835
    SELECT NVL(SUM(B.TOTAL_INDEBTEDNESS),0) FROM RNT_TI_ACCOUNT A,
    RNT_ACCOUNT_TOTALS B WHERE A.TI_ACCOUNT_NO = :b1 AND A.ACCOUNT
    NO = B.ACCOUNTNO
    720 720 1.0 1966425544
    select text from view$ where rowid=:1
    600 598 1.0 4059714361
    select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#
    ,iniexts,NVL(lists,65535),NVL(groups,65535),cachehint,hwmincr, N
    VL(spare1,0) from seg$ where ts#=:1 and file#=:2 and block#=:3
    554 5,356 9.7 395844583
    select name,intcol#,segcol#,type#,length,nvl(precision#,0),decod
    e(type#,2,nvl(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180
    ,scale,181,scale,182,scale,183,scale,231,scale,0),null$,fixedsto
    rage,nvl(deflength,0),default$,rowid,col#,property, charsetid,ch
    arsetform,spare1,spare2 from col$ where obj#=:1 order by intcol#
    544 1,138 2.1 4195740643
    select pos#,intcol#,col#,spare1 from icol$ where obj#=:1
    535 0 0.0 935016769
    ALTER SESSION SET NLS_DATE_FORMAT="DD-MON-RR"
    495 545 1.1 199702406
    select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags,
    i.property,i.pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i
    .distkey, i.lblkkey,i.dblkkey,i.clufac,i.cols,i.analyzetime,i.sa
    mplesize,i.dataobj#, nvl(i.degree,1),nvl(i.instances,1),i.rowcnt
    ,mod(i.pctthres$,256),i.indmethod#,i.trunccnt,nvl(c.unicols,0),n
    491 2,728 5.6 1536916657
    select con#,type#,condlength,intcols,robj#,rcon#,match#,refact,n
    vl(enabled,0),rowid,cols,nvl(defer,0),mtime,nvl(spare1,0) from c
    def$ where obj#=:1
    461 459 1.0 189272129
    select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.su
    bname,o.dataobj#,o.flags from obj$ o where o.obj#=:1
    448 97 0.2 114078687
    select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where rob
    j#=:1
    442 0 0.0 4114899968
    SELECT "OHMS_BROADCAST_UMSG"."MESSAGE_ID",
    "OHMS_BROADCAST
    _UMSG"."MESSAGE_DATE",
    "OHMS_BROADCAST_UMSG"."BROAD
    CAST_UNTIL",
    "OHMS_BROADCAST_UMSG"."RECIPIENT",
    "OHMS_BROADCAST_UMSG"."SENDER",
    "OHMS_BRO
    ADCAST_UMSG"."BROADCASTED",
    "OHMS_BROADCAST_MSG"."M
    419 3,113 7.4 1004464078
    select grantee#,privilege#,nvl(col#,0),max(nvl(option$,0)) from
    objauth$ where obj#=:1 group by grantee#,privilege#,nvl(col#,0)
    order by grantee#
    406 0 0.0 4261939565
    select col#, grantee#, privilege#,max(nvl(option$,0)) from objau
    th$ where obj#=:1 and col# is not null group by privilege#, col#
    , grantee# order by col#, grantee#
    395 -431 -1.1 3873590224
    INSERT INTO "SHORTLIST_INDEX" ( "AREA", "BEDSIZE", "WLRN", "WL_C
    ODE", "DWELLING_TYPE_CODE", "ELIG_TYPE", "SPECIAL_ELIG" ) VALUES
    ( :1, :2, :3, :4, :5, :6, :7 )
    365 365 1.0 2055820954
    select nvl ( trim ( view_secure_person_notes ) , 'N' ) from ohms
    Instance Activity Stats for DB: P04 Instance: p04 Snaps: 343 -351
    Statistic Total per Second per Trans
    CR blocks created 1,272 1.3 1.1
    DBWR buffers scanned 451,257 450.4 399.7
    DBWR checkpoint buffers written 0 0.0 0.0
    DBWR checkpoints 0 0.0 0.0
    DBWR free buffers found 440,530 439.7 390.2
    DBWR lru scans 751 0.8 0.7
    DBWR make free requests 751 0.8 0.7
    DBWR summed scan depth 451,257 450.4 399.7
    DBWR transaction table writes 4 0.0 0.0
    DBWR undo block writes 1,069 1.1 1.0
    SQL*Net roundtrips to/from client 130,711 130.5 115.8
    background checkpoints completed 0 0.0 0.0
    background checkpoints started 0 0.0 0.0
    background timeouts 989 1.0 0.9
    branch node splits 0 0.0 0.0
    buffer is not pinned count 17,880,509 17,844.8 15,837.5
    buffer is pinned count 1,826,236 1,822.6 1,617.6
    bytes received via SQL*Net from c 39,833,802 39,754.3 35,282.4
    bytes sent via SQL*Net to client 44,535,273 44,446.4 39,446.7
    calls to get snapshot scn: kcmgss 75,996 75.8 67.3
    calls to kcmgas 687 0.7 0.6
    calls to kcmgcs 208 0.2 0.2
    cleanouts only - consistent read 2,659 2.7 2.4
    cluster key scan block gets 2,820,733 2,815.1 2,498.4
    cluster key scans 8,889 8.9 7.9
    commit cleanout failures: block l 778 0.8 0.7
    commit cleanout failures: buffer 11 0.0 0.0
    commit cleanout failures: cannot 0 0.0 0.0
    commit cleanouts 2,992 3.0 2.7
    commit cleanouts successfully com 2,203 2.2 2.0
    consistent changes 3,219 3.2 2.9
    consistent gets 26,838,637 26,785.1 23,772.0
    current blocks converted for CR
    cursor authentications 15,764 15.7 14.0
    data blocks consistent reads - un 3,159 3.2 2.8
    db block changes 36,412 36.3 32.3
    db block gets 66,383 66.3 58.8
    deferred (CURRENT) block cleanout 875 0.9 0.8
    dirty buffers inspected 929 0.9 0.8
    enqueue conversions 304 0.3 0.3
    enqueue releases 4,752 4.7 4.2
    enqueue requests 4,775 4.8 4.2
    enqueue timeouts 10 0.0 0.0
    execute count 71,489 71.4 63.3
    free buffer inspected 7,785 7.8 6.9
    free buffer requested 1,155,014 1,152.7 1,023.0
    hot buffers moved to head of LRU 70,348 70.2 62.3
    immediate (CR) block cleanout app 2,659 2.7 2.4
    immediate (CURRENT) block cleanou 750 0.8 0.7
    index fast full scans (full) 4 0.0 0.0
    leaf node splits 68 0.1 0.1
    logons cumulative 1,238 1.2 1.1
    logons current
    messages received 4,774 4.8 4.2
    messages sent 4,774 4.8 4.2
    no buffer to keep pinned count 8,586,056 8,568.9 7,605.0
    no work - consistent read gets 10,061,376 10,041.3 8,911.8
    opened cursors cumulative 30,036 30.0 26.6
    opened cursors current
    parse count (hard) 17,644 17.6 15.6
    parse count (total) 89,558 89.4 79.3
    physical reads 1,181,389 1,179.0 1,046.4
    physical reads direct 28,870 28.8 25.6
    physical writes 40,482 40.4 35.9
    physical writes direct 29,077 29.0 25.8
    physical writes non checkpoint 40,415 40.3 35.8
    pinned buffers inspected 243 0.2 0.2
    prefetched blocks 629,572 628.3 557.6
    prefetched blocks aged out before 108 0.1 0.1
    recursive calls 313,097 312.5 277.3
    redo blocks written 10,833 10.8 9.6
    redo buffer allocation retries 0 0.0 0.0
    redo entries 19,913 19.9 17.6
    redo log space requests 0 0.0 0.0
    redo size 5,155,196 5,144.9 4,566.2
    redo synch writes 565 0.6 0.5
    redo wastage 216,232 215.8 191.5
    redo writes 836 0.8 0.7
    rollback changes - undo records a 2,827 2.8 2.5
    rows fetched via callback 2,749,955 2,744.5 2,435.7
    session logical reads 26,905,014 26,851.3 23,830.8
    session pga memory 124,228,384 123,980.4 110,034.0
    session pga memory max 124,700,400 124,451.5 110,452.1
    session uga memory 8,609,408 8,592.2 7,625.7
    session uga memory max 41,183,948 41,101.7 36,478.3
    sorts (disk) 79 0.1 0.1
    sorts (memory) 16,821 16.8 14.9
    sorts (rows) 761,067 759.6 674.1
    summed dirty queue length 949 1.0 0.8
    switch current to new buffer
    table fetch by rowid 6,249,832 6,237.4 5,535.7
    table fetch continued row 81,864 81.7 72.5
    table scan blocks gotten 1,247,154 1,244.7 1,104.7
    table scan rows gotten 27,251,960 27,197.6 24,138.1
    table scans (long tables) 3,660 3.7 3.2
    table scans (short tables) 4,719 4.7 4.2
    total file opens 3,353 3.4 3.0
    transaction rollbacks 175 0.2 0.2
    transaction tables consistent rea 7 0.0 0.0
    transaction tables consistent rea 59 0.1 0.1
    user calls 141,961 141.7 125.7
    user commits 306 0.3 0.3
    user rollbacks 823 0.8 0.7
    write clones created in foregroun 15 0.0 0.0
    Tablespace IO Stats for DB: P04 Instance: p04 Snaps: 343 -351
    ->ordered by IOs (Reads + Writes) desc

    Mark:
    I second John's comments about the execute-to-parse ratio; that is almost certainly another thing you want to bring up with the vendor.
    A couple of other things I noticed. Your buffer cache seems to be only 16 MB, so a larger buffer cache may help. If you are really tight for memory, you could probably trim some from elsewhere in the SGA, which is large in relation to the buffer cache.
    Most of your expensive SQL (by buffer gets and physical reads) seems to be executed only once over the snapshot period, which might indicate that the period is atypical. If these statements are coming from the application, and not from reports being run, I would look at the use of bind variables in the application. The high hard-parse count also suggests that a lot of unique SQL is being generated in the database. Depending on the nature of the application and the activity over the reporting period this may be valid, but I would really look at the use of bind variables.
    The most frequently executed SQL (by number of executions) is primarily recursive SQL, issued by Oracle on your behalf. A lot of it queries the data dictionary tables that define the objects in the database; that is the kind of SQL I would expect to see with heavy hard parsing, or if someone is making a lot of use of the DESCRIBE method of whatever database interface the application uses.
    Without the timing information it is impossible to say how long the waits for db file sequential read are, but sequential reads, despite the name, are actually single-block reads associated with index scans and table access by rowid. The scattered reads, which also look a little high, are the result of full table scans.
    Before running your next set of snapshots, as a DBA user do:
    ALTER SYSTEM SET timed_statistics = TRUE;
    and set timed_statistics = TRUE in your init.ora file as well.
    John

  • Migrating Reporting Services to new Server - Subscriptions are not transfering

    Hello,
    I have an instance of SQL Server 2008R2 running on Windows Server 2008.  It is setup to be a reporting server.  There are many subscriptions that are scheduled and run on this server.  We are wanting to move to Windows Server 2012 and SQL Server
    2012.  So, we have built out a new VM and I have exported from the current server the ReportServer and ReportServerTempDB and have imported them to the new server.  I have resolved the one Orphaned user that happened and went to look for the subscriptions
    so that I could disable them so they wouldn't run.  I could not find any.
    select * from msdb.dbo.sysjobs where enabled = 1 and category_id = 100.
    no rows...
    I had read from other posts to let it sit for a few days and they will appear.  I have waited 2 weeks.
    So, what am I missing?  I would prefer to do a clean install and migrate the data over rather than upgrading the OS and SQL.
    Thanks

    Hi Sql Dude,
    Per my understanding, you can't find any information related to the subscriptions in the sysjobs table after the migration, right?
    This issue can be caused by many factors. Please check the details below:
    First, check whether you can see all of the subscriptions in Report Manager and whether you can create new subscriptions. The ReportServer database used by SSRS to store subscriptions keeps a record of the subscription owner (as well as audit fields) tracking the user accounts that created or modified each subscription.
    If you can't see the subscriptions in Report Manager but can create new ones, the cause may be that the subscriptions were created on the original non-domain server under a Local User account. Once the instance was migrated to a new server on the domain, that Local User was no longer available. Every user with access to the ReportServer database gets an entry in the Users table and a unique GUID.
    To work around the issue, you can run a SQL UPDATE to change the OwnerID and ModifiedByID fields in the Subscriptions table so that they reference the GUID of the equivalent domain user. Tip:
    Change the Owner of SQL Reporting Services Subscription
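    The re-mapping that tip describes can be sketched in miniature. The example below uses Python's stdlib sqlite3 purely to illustrate the join logic; the table and column names mirror the ReportServer Users and Subscriptions tables, but the account names and GUIDs are made up, and any real update should only be run against a backed-up ReportServer database:

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE Users (UserID TEXT PRIMARY KEY, UserName TEXT);
    CREATE TABLE Subscriptions (SubscriptionID TEXT, OwnerID TEXT);
    -- the orphaned local owner, the equivalent domain user, and one subscription
    INSERT INTO Users VALUES ('guid-old', 'OLDBOX\\reportuser');
    INSERT INTO Users VALUES ('guid-new', 'DOMAIN\\reportuser');
    INSERT INTO Subscriptions VALUES ('sub-1', 'guid-old');
    """)

    # Re-point subscriptions owned by the old local account at the GUID
    # of the equivalent domain account (the workaround described above).
    db.execute("""
    UPDATE Subscriptions
       SET OwnerID = (SELECT UserID FROM Users WHERE UserName = 'DOMAIN\\reportuser')
     WHERE OwnerID = (SELECT UserID FROM Users WHERE UserName = 'OLDBOX\\reportuser')
    """)

    owner = db.execute("SELECT OwnerID FROM Subscriptions").fetchone()[0]
    print(owner)  # guid-new
    ```

    The same shape of UPDATE, written in T-SQL against the real Subscriptions and Users tables, is what the linked tip walks through.
    
    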
    If you can see all the subscriptions in Report Manager but can't find any jobs, try editing and re-saving a subscription to see whether that recreates the job, and also check the log files for error messages; the path is like:
    C:\Program Files\Microsoft SQL Server\MSRS11.SQLEXPRESS\Reporting Services\LogFiles
    If the above didn't help, please refer to the similar thread below:
    Can't access SSRS 2008 R2 subscriptions after migration
    If you still have any problems, please feel free to ask.
    Regards,
    Vicky Liu
    TechNet Community Support

  • Output html filename

    friends,
    I use Forms 6i and Reports 6i, and with Forms 6i and IIS I run the forms in a browser.
    I put a button on the form whose code calls RUN_PRODUCT, and I made a virtual directory for the temp folder in which the HTML files of the report output are stored.
    Now, the question: when someone presses the button on the form in the browser, the report output comes back in HTML, and it is exactly the report needed. But the HTML file name is unique and generated by Reports, something like s7p or s7p_1. I want the file name to come from one of the text values of the report: there is an srno field in the report, and I want the HTML file named with the value of that srno field.
    thanks..

    Hello,
    If you want to specify the name of the generated file, you have to use the DESTYPE, DESNAME and DESFORMAT parameters.
    It is possible to change DESNAME in some report triggers (not all!):
    :DESNAME := ....
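    For example, a Before Report trigger along these lines would name the output after the report's serial number (a sketch only, not tested against 6i; P_SRNO is a hypothetical user parameter carrying the srno value, and the path must point at the temp folder behind the virtual directory mentioned in the question):

    ```
    -- Before Report trigger (sketch; parameter name and path are assumptions)
    function BeforeReport return boolean is
    begin
      :DESTYPE   := 'FILE';
      :DESFORMAT := 'HTMLCSS';
      :DESNAME   := 'c:\forms\temp\' || :P_SRNO || '.html';
      return (TRUE);
    end;
    ```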
    Then you will be able to display the file with the WEB.SHOW_DOCUMENT builtin.
    Regards

Maybe you are looking for

  • Payment to a terminated employee

    Is there a way we can pay a employee a certain wage (not basic pay) after he terminates? I understand that we can create an IT267 and run an offcycle as of a check date. Other than that, are there any other options? can this be done in regular payrol

  • How to import custom stationery into choices

    I'm not sure if I'm doing this correctly. We've created a html stationery template and it's on Safari (for convenience I put it on my top sites) When I want to write an email using the custom stationery, I have to open it in Safari, press command I,

  • Customer Master Partner Functions

    Hello, We are looking for standard extractor on Customer Master Partner Functions, which is based on table KNVP. Also extractor on Partner Functions Text from table TPART. I searched in SAP HELP with no result. someone has an idea? Thanks, Maya

  • Inter company Billing is taking a wrong value of the Bill

    Inter company Billing is taking a wrong value of the Bill

  • Changing NetBios Name  & Changing Workgroup

    Hello, I every time i try to change the NetBios name or type in a new Network group leopard dosen't save the changes! can anyone help? Airport > Extra Options > WinMenue