Test & Threshold.

Hi all,
Please go through these test and threshold values. The first value is the Bad condition, the second is the Warning condition, and the third is the Good condition.
For example:
Database Cache Hit Ratio (%): (Bad) <= 90, (Warning) <= 95, (Good) >= 95
Test and Threshold.

Test / Metric                            Bad       Warning   Good
Database Cache Hit Ratio (%)             <= 90     <= 95     >= 95
Dictionary Cache Hit Ratio (%)           <= 85     <= 90     >= 90
Library Cache Hit Ratio (%)              <= 80     <= 85     >= 95
User Quota Free Space (%)                <= 20     < 25      > 25
Row Chaining, Chained (%)                >= 5      >= 2      <= 1
Tablespace Full, Free Space (%)          <= 20     < 25      > 25
Tablespace Status, Status                NA        OFFLINE   ONLINE
Session Idle Time, Idle Time (Mins.)     >= 10     > 5       <= 5
CPU Usage By Session, CPU Usage (%)      > 70      <= 60     < 50
Session IO Per User, Total IO (%)        > 50      >= 40     < 40
Buffer Intensive SQL, GETs Per EXEC (#)  > 100     > 50      < 10
Disk Intensive SQL, READs Per EXEC (#)   > 100     > 50      < 10
Latch Contention, Get Ratio (%)          <= 95     < 99      >= 99
RBS Contention, Hit Ratio (%)            <= 90     < 95      >= 99
Redo Contention, Get Ratio (%)           <= 90     < 95      >= 99
Soft Parse Percent, Soft Parse (%)       <= 80     < 85      >= 90
Memory Sort Percent, Mem Sort (%)        <= 80     <= 85     >= 90
DBWR Checkpoints, Checkpoints (#)        NA        > 5       <= 5
Control File Status, Status              INVALID   NA        NULL
Cluster Extents, Extents (%)             >= 70     > 60      <= 60
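If it helps to read the rows, here is a small Python sketch (mine, not anything from OEM) of how one of these threshold rows evaluates. The operator/limit pairs come straight from the table; note that at exactly 95 the Warning (<= 95) and Good (>= 95) conditions overlap, so the sketch checks the more severe condition first.

```python
import operator

# Map the comparison symbols used in the table to Python operators.
OPS = {"<=": operator.le, "<": operator.lt, ">=": operator.ge, ">": operator.gt}

def classify(value, bad, warning):
    """Return Bad/Warning/Good for a metric value.

    bad and warning are (symbol, limit) pairs from the table; the most
    severe condition is checked first, so a value that satisfies both
    (e.g. exactly 95 for the cache hit ratio) comes out as Warning.
    """
    for label, (sym, limit) in (("Bad", bad), ("Warning", warning)):
        if OPS[sym](value, limit):
            return label
    return "Good"

# Database Cache Hit Ratio (%): Bad <= 90, Warning <= 95, Good >= 95
print(classify(88, ("<=", 90), ("<=", 95)))  # Bad
print(classify(93, ("<=", 90), ("<=", 95)))  # Warning
print(classify(98, ("<=", 90), ("<=", 95)))  # Good
```

The same helper covers rows that run the other way, e.g. Row Chaining (Bad >= 5, Warning >= 2): `classify(4, (">=", 5), (">=", 2))` gives "Warning".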
Please tell me if I am wrong or right.
Thanks in advance.
Vicky

Using OEM 9.2.0.1, I have various events registered, but I noticed that the minimum threshold value for the Broken Jobs event test is 1, whereas all the documentation and help refer to it being 0. I only noticed because we have one broken job and no alert for it.
Has anyone come across this before? I can't find any references to it.
Thanks
Neil

I do not know about "all the documentation and help refers to it being 0" because I have not checked, but the minimum Critical Threshold is 1.
I do not know why anyone would want to set it to 0, because that is equivalent to not setting the Broken Job test at all.

Similar Messages

  • Hold indicator value after threshold is exceeded

    I have a portion of my VI that uses an auto-indexed For Loop to view an array of 10 boolean values indicating whether a test threshold has been exceeded. If a false value is received on any of these 10 channels, I need an indicator to display the current temperature for that channel. The problem I am having is that subsequent false values continue to update the temperature value rather than keeping the temperature reading from when the threshold was first exceeded.
    Looking over the discussion forums, it seems like a shift register is how this is done. Is adding 10 shift registers, letting me look at the last value for each channel, the best way to handle this?
    Solved!
    Go to Solution.
    Attachments:
    indicator_test.vi 74 KB

    Sounds like a job for a Feedback Node.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    indicator_test_BD.png 83 KB
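    For what it's worth, the shift-register idea maps directly onto ordinary code: carry one held value per channel between loop iterations and only latch it on the first failure. A rough Python sketch of that logic (names are mine, not LabVIEW's):

```python
def update_held(held, ok_flags, temps):
    """One loop iteration: latch the temperature at the first threshold failure.

    held     -- per-channel latched temperatures (None until that channel fails);
                this is the value you would carry in a shift register or Feedback Node
    ok_flags -- per-channel booleans; False means the threshold was exceeded
    temps    -- current per-channel temperature readings
    """
    return [h if h is not None else (t if not ok else None)
            for h, ok, t in zip(held, ok_flags, temps)]

held = [None, None]
held = update_held(held, [True, False], [20.0, 80.0])  # channel 1 fails at 80.0
held = update_held(held, [True, False], [20.0, 85.0])  # later failures are ignored
print(held)  # [None, 80.0]
```

    The point of the design is that each channel's held value, once set, is passed through unchanged on every later iteration, which is exactly the behavior the poster is missing.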

  • Is there a way to enable & configure Volume Discounts via the Product Import Spreadsheet? Is there a way to enable more than 2 Quantity Thresholds?


    Hi Michael,
    You can set the thresholds via an import file. The easiest way to do this (and this goes for all importable data: webapps, 301 redirects, and so on) is:
    1. Go into the Admin and create a single item; in your case, create a test product and set the thresholds.
    2. Export that data; in your case, export the product list.
    3. Take a look at what the data looks like in the export file so you can get an idea of the format the import file should use.
    Unfortunately, you cannot set more than 2 thresholds; that is not possible at the moment.
    Thanks,
    Mihai

  • List View Threshold and Relevant Documents

    I have a SharePoint site that contains several libraries, some with InfoPath forms and others with PDFs. On the site's main page I had added the Relevant Documents web part. It is used frequently by certain users to attach hyperlinks in InfoPath forms to PDFs stored in a library of PDFs.
    Recently, one of the site's users received an error message within the Relevant Documents web part saying: "The attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator."
    The strange part was that it was only affecting this one user. So I went to the Central Admin site and increased the list view threshold from 20,000 to 30,000, which solved the problem for now. None of our libraries individually exceeds the 20,000-document view threshold limit; however, I believe all of them combined do, which is what caused the Relevant Documents web part to stop working.
    What I am wondering is whether it is possible to filter which libraries the Relevant Documents list indexes/displays, so that I can go back to the lower view threshold limit and keep server load down. I was not able to find any options in the web part settings for the Relevant Documents web part that would do this. My backup option is to move certain libraries to a new site, which is not ideal; some PDF document libraries cannot be moved because the hyperlinks would all be broken.

    Instead, I used a "greater than" filter, which looks like it worked in pulling only the more recent documents. The problem is that most of the documents are in folders created years ago, so if someone browses to the library, certain folders will be hidden.
    I don't know if setting up private default views will fix this for them and allow me to drop the list view threshold back to 20,000.
    I will have to test this later today.

  • SharePoint Online list view threshold issues: "because it exceeds the list view threshold enforced by the administrator"

    Office 365 SharePoint Online can be problematic when it comes to exceeding the list item threshold (e.g. 5,000).
    Examples of what happens after exceeding the threshold (e.g. 5,000 items):
    You can’t create new forms for the list in SharePoint Designer.
    You may have challenges with metadata fields in the forms (e.g. adding metadata values, editing metadata values, deleting the metadata column from the list).
    Cannot save the list as a template (i.e. you get the threshold error).
    Issue I'd like assistance with: how can I create a custom NewForm in SharePoint Designer
    when the list exceeds the threshold limit, given this is Office 365 SharePoint Online and I don't have access to increase that limit?
    As a control for my testing, I created another list with just a few custom columns with no list items --it worked fine for that list.
    I also tried clearing local AppData cache which didn't solve it. I'd need Central Admin on O365 SharePoint Online to increase the threshold which I don't have access
    to do. Errors received in SharePoint Designer:
    "Could not save the list changes to the server." After getting this, I tried to work around
    the create new forms issue by saving a copy of the original NewForm as NewForm2 and got the root error that I suspected was underlying it all:
    “Server error: the attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator”.
    Any ideas for how to create a new list form in SD?

    Thanks Alex.
    I just found a couple new workarounds instead of using SharePoint Designer:
    Method 1: Add web parts to the form pages on the client side:
    Go to the list and execute one of these actions, depending on which form you want to edit: create a new item (NewForm), edit an item (EditForm), or display an item (DispForm).
    With the form you want to edit displayed, go to the gear icon and click "Edit Page".
    You should now see the web part page show up with "Add a Web Part" as an option.
    Add a Content Editor or Script Editor web part.
    Add your custom code to either one to manipulate the HTML objects using your favorite web languages.
    Method 2: Use InfoPath 2013.
       The InfoPath 2013 route appears to work.

  • Bitlocker with TPM and PIN testing?

    Good day all,
    We are about to deploy 10 Surface Pro 3s running Windows 8.1 Enterprise x64. We have enabled the TPM, enabled "Allow Enhanced PINs for Startup" and "Pre-boot Keyboard", and turned on BitLocker through the GUI, which recommended setting a PIN, which I did.
    Everything seems to work as it should, but how can I be convinced TPM and PIN are working together? I seem to be able to punch many bad passwords into BitLocker without a warning or being asked to reboot, which it does for all other laptops without a TPM.
    1. How many bad password attempts do I get with TPM by default before lockout?
    2. Where is my *.tpm recovery key?
    3. Why when the TPM locks out can I still gain entry by typing in the Bitlocker PIN (not recovery password)
    4. I want TPM to lockout after 5 incorrect attempts.
    To test the TPM, I disabled it in the BIOS, and on the next reboot BitLocker asked for the Recovery Password, which to me proves the BitLocker private encryption keys are safely held in the TPM. Is it safe to presume the TPM is working?
    Here is the output from manage-bde and Get-Tpm:
       Size:                 59.11 GB
        BitLocker Version:    2.0
        Conversion Status:    Fully Encrypted
        Percentage Encrypted: 100.0%
        Encryption Method:    AES 128
        Protection Status:    Protection On
        Lock Status:          Unlocked
        Identification Field: Unknown
        Key Protectors:
            TPM And PIN
            Numerical Password
    TpmPresent          : True
    TpmReady            : True
    ManufacturerId      : 1229346816
    ManufacturerVersion : 5.0
    ManagedAuthLevel    : Full
    OwnerAuth           : u2uAKH0Sr+d98s+oGXLLU8DHUuc=
    OwnerClearDisabled  : True
    AutoProvisioning    : Enabled
    LockedOut           : False
    SelfTest            : {}

    Hi Paddy,
    "1. How many bad password attempts do I get with TPM by default before lockout?"
    It depends on the TPM chip.
    "Some TPM chips may not store failed attempts over time. Other TPM chips may store every failed attempt indefinitely. Therefore, some users may experience increasingly longer delays when they mistype an authorization value that is sent to the TPM."
    "4. I want TPM to lockout after 5 incorrect attempts."
    We can set the group policy "Standard User Individual Lockout Threshold" in this path (check the detailed information of this policy):
    Computer Configuration\Administrative Templates\System\Trusted Platform Module Services\
    Here is a link for reference (see the "About TPM lockout" and "Use Group Policy to manage TPM lockout settings" parts):
    Manage TPM Lockout
    https://technet.microsoft.com/en-us/library/dn466535.aspx
    "2. Where is my *.tpm recovery key?"
    When we set the owner of the TPM, we are given a chance to save the TPM password.
    When the BitLocker recovery key is saved to a file, BitLocker also saves a TPM owner password file (.tpm) with the TPM owner password hash value. We can also save them to AD (check the group policy in the same path as before). Have you tried to save the recovery keys to external media?
    Here are links for reference:
    Reset the TPM Lockout (Check the first part)
    https://technet.microsoft.com/en-us/library/dd851452.aspx?f=255&MSPPError=-2147217396
    Windows Trusted Platform Module Management Step-by-Step Guide(Check "Step 2: Set ownership of the TPM" part )
    https://technet.microsoft.com/pt-pt/library/cc749022%28WS.10%29.aspx?f=255&MSPPError=-2147217396
    "3. Why when the TPM locks out can I still gain entry by typing in the Bitlocker PIN (not recovery password)"
    When the TPM is locked out, it is also possible that the user will enter the correct PIN, but the TPM will respond as if the incorrect PIN was entered for a period of time.
    Check the "When should I reset the TPM lockout" part.
    Reset the TPM Lockout
    https://technet.microsoft.com/en-us/library/dd851452.aspx?f=255&MSPPError=-2147217396
    "Is this safe to presume TPM is working?"
    From the output of the command line, we can see that the TPM is working. It is not recommended to disable the TPM while data is encrypted with it.
    Best regards

  • New iMac 750 GB Hard Drive Fail Smart Drive Test - Glitch or Problem

    Greetings!
    I have a new 24" iMac with a 750 GB hard drive which has had problems out of the box (system crashing several times a day, unable to restart, application crashes, general sluggishness).
    After five re-installs of Leopard (upgrade, clean install, install and erase) in five weeks, I finally used the latest versions of Drive Genius and TechTool Pro (4.6.1). Both claimed hard drive problems. Specifically, TechTool Pro reported a SMART drive failure (see report below, please).
    This seemed to confirm my suspicions that something was flawed with my iMac from the start.
    I have returned the computer for a new hard drive.
    The Apple-approved repair shop says they've installed a new 750 GB hard drive sent by Apple, and when they run TechTool Pro, they get the same SMART drive failure report. The techie claims this is a glitch with either TechTool Pro or the hard drive, but that I have nothing to worry about.
    What's your opinion?
    Many thanks for taking the time to read this.
    SMART
    Saturday, December 29, 2007 1:18:12 AM US/Pacific
    S.M.A.R.T. Self-Checks <Failing!>
    Model: ST3750640AS Q
    Mount Point: /dev/disk0
    Capacity: 698.64 GB
    Writable: Yes
    Ejectable: No
    Removable: No
    Bus: Serial ATA
    Bus Location: Internal
    Revision: 3.BTH
    Serial Number: 5QD2JVNE
    disk0s2: Ma
    disk0s3: Pa
    disk0s4: Peppi
    S.M.A.R.T. stands for Self-Monitoring Analysis and Reporting Technology. This test checks and reports on the status of the S.M.A.R.T. routines built into your drive. These routines monitor important drive parameters as your drive is operating. An examination and analysis of these parameters can aid in the prediction of drive failure. This will allow you to back up your data before your drive fails and the data becomes inaccessible.
    S.M.A.R.T. Self-Checks
    ID   Attribute                                  Normal  Worst  Threshold  Status
    1    Read Raw Error Rate                        100     253    6          Okay
    3    Spin Up Time                               95      92     0          Okay
    4    Start/Stop Count                           100     100    20         Okay
    5    Reallocated Sectors                        100     100    36         Okay
    7    Seek Error Rate                            78      60     30         Okay
    9    Power On Hours                             100     100    0          Okay
    10   Spin Retry Count                           100     100    97         Okay
    12   Power Cycle Count                          100     100    20         Okay
    187  Unknown                                    100     100    0          Okay
    189  Unknown                                    100     100    0          Okay
    190  Unknown                                    41      37     45         Failing!
    194  Temperature                                59      63     0          Okay
    195  HW ECC Recovered                           63      61     0          Okay
    197  Current Pending Sector Count               100     100    0          Okay
    198  Off-Line Scan Uncorrectable Sector Count   100     100    0          Okay
    199  Ultra DMA CRC Error Count (Rate)           200     200    0          Okay
    200  Write Error Count                          100     253    0          Okay
    202  DAM Error Count                            100     253    0          Okay
    S.M.A.R.T. Self-Checks <Failing!>
    Tests Completed
    Threshold levels are exceeded occasionally. You should consider backing up your data from the hard drive. You should continue to check the hard drive for failures.
    S.M.A.R.T. Self-Checks <Failing!>
    -end-
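    As a side note, the pass/fail rule behind reports like this is mechanical: by common convention, a SMART attribute is flagged when its normalized value drops to or below its nonzero threshold (attributes with a threshold of 0 are informational only). A small Python sketch of that rule, run over a few rows copied from the report above:

```python
def failing_attributes(rows):
    """Return attributes whose normalized value is at or below a nonzero threshold.

    rows -- (id, name, normalized_value, worst, threshold) tuples, one per
            SMART attribute, as in the TechTool Pro report above
    """
    return [(attr_id, name) for attr_id, name, value, _worst, thresh in rows
            if thresh > 0 and value <= thresh]

report = [
    (1,   "Read Raw Error Rate", 100, 253, 6),
    (190, "Unknown",              41,  37, 45),  # the one failing attribute
    (194, "Temperature",          59,  63, 0),   # threshold 0: informational only
]
print(failing_attributes(report))  # [(190, 'Unknown')]
```

    This is why a single attribute (here ID 190) can flip the whole drive to "Failing!" while every other row reads "Okay"; what that attribute actually measures is up to the drive manufacturer.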

    Greetings, all!
    This has been my first posting, despite having owned 5 Macs over twelve years - which goes to show you how reliable they are, most of the time.
    I am very impressed with the thoughtful responses, and I really, really appreciate you taking the time to respond.
    Here's an update since my last posting. I picked up my repaired(?) iMac from the Apple-approved repair shop. The technician's report said (referring to the original hard drive, Seagate ST3750640AS Q): "The drive has passed and boots into the OS without issue. Confirmed the smart status of the hard drive shows failed, but this is for an unknown attribute. This may not really indicate the drive is failing, but could be a possibility. Since all of the other tests have passed, I feel it is best to replace the drive under warranty for the customer."
    About the new hard drive, the technician wrote: I have run a third-party smart utility on the (new, replaced) hard drive again and I have found that it is failing with the same error. The Apple test shows the smart status as OK.
    I have researched this error and I have found that this attribute appears to be for temperature. Since both drives showed the exact same error, this attribute can be safely ignored for smart status. I was not able to replicate any issues with this machine other than the smart status failure so I do not think there is any other hardware failing in this machine.
    If the customer continues to have issues, I would recommend reinstalling the OS one more time. If the issue still persists after this, then the customer should bring in the computer when issues are still happening so we can try to determine the cause of the problem better.
    At first, my iMac was performing much better, but as I began to migrate data from backup, I was not able to connect/re-establish my iPhoto library, or re-establish my Apple Mail accounts (both of which I have done countless times in the past without hassle).
    I decided to run Apple's Disk utility which reports it cannot repair the new drive.
    Just prior to my original posting, I had sent a copy of my posting to Micromat (TechTool Pro) asking for their opinion/advice.
    Their response was: The SMART routines are built into the hard drive by the drive manufacturer. They are proprietary and different for each drive manufacturer. TechTool Pro just reads the status of the built-in SMART parameters and reports their status. Basically, a threshold exceeded indicates that the drive has exceeded what the manufacturer thinks are proper operating parameters for it and it may be getting close to failing. For an interpretation of the seriousness of a specific attribute failure you would need to contact the drive manufacturer. A failure is a warning to be sure to keep good backups and consider replacing the drive. If you get a SMART failure on a drive that is under warranty, the drive manufacturer will typically replace the drive.
    Following are two links that might be of interest:
    http://www.ariolic.com/activesmart/docs/glossary.html
    http://www.ariolic.com/activesmart/docs/smart-attribute-meaning.html
    I also took your (Looby) advice and called Seagate. (I also visited their web site, which recommends replacement of their hard drive under these circumstances!)
    Within a couple of minutes, the tech person agreed that this is a serious concern and I should replace the drive.
    Regarding Wiil's post, Apple's Disk Utility says: SMART status verified.
    The SMARTReporter utility reports the SMART status is OK.
    The SMARTUtility application says: SMART status failed. ID: #190, unknown attribute.
    For all other posters, this was not a New Egg purchase, which was not relevant to my post. Thank you just the same.
    I've left a message for the Apple-approved store and will ask for another drive, and that, if possible, it be tested before I bring my iMac in to reduce the inconvenience.
    I'm also wondering if I should switch to a 500 GB hard drive, although that would defeat the purpose of buying the larger drive for all the video I work with ( I have another 2 Terabytes storage with external drives).
    Any further comments welcome - and thank you all again for taking the time!

  • List view threshold and columns manage metadata problems

    Hi
    We have a problem in our company, since we have more files in a library than is set in the List View Threshold.
    I created an index on the column "year" and created a view filter: Year is equal to 2015,
    and I get the famous error: this view cannot be displayed because...
    I haven't found any documented limitations on the managed metadata column type. Is there one?

    Hi,
    As I understand it, you encountered the issue after you created the view filter.
    Every column type has a default limit on how many can be created. You should check that the number of columns you have created is below that default.
    Per my test, I can achieve this without exceeding the list view threshold.
    Check the things below:
    1. Increase the list view threshold, and after that, try again to see whether the same situation occurs.
    2. Create another managed metadata column "year", create an index on it, and create a view filter in the same list to see whether the issue occurs.
    3. Check the ULS log for details about the cause of the issue.
    Best regards,
    Sara Fan

  • Two Threshold Analog to Digital

    I was asked to develop some code that would take in an analog signal, convert it to digital, then perform frequency, duty cycle, and signal integrity testing on it. The built-in NI functions for performing these tasks were insufficient because we needed to be able to detect a single dropout of a cycle. With a real-world signal, I realize there may be noise, and having a single threshold to convert from analog to digital may show transitions that aren't there, so I planned on developing some kind of debounce code.
    Instead, someone mentioned using two thresholds, one for the low and one for the high, and only considering the signal to have transitioned if it goes above the high after going below the low.
    Attached is my attempt at that method. This VI simulates a sine wave with a bunch of noise, then does a single threshold to show how imperfect that can be. Then, using the same signal, it does a two-level threshold, which works much better but has a slight shift in the time domain, and the beginning will contain unknown values because neither transition has occurred at the first sample.
    Any pointers or suggestions to improve my implementation are appreciated. Thanks.
    EDIT: This does use an OpenG function from the Array palette.
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.
    Solved!
    Go to Solution.
    Attachments:
    Test AI to Digital With 2 Levels.vi 72 KB

    Why so many loops when you just need one?
    Attachments:
    Test AI to Digital With 2 Levels.png 76 KB
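    For reference, the two-threshold (hysteresis) comparison described above is only a few lines in text form. A Python sketch of the logic (names are mine; `initial` stands in for the unknown starting state the poster mentions, and the hold-between-thresholds behavior is also where the slight time shift comes from):

```python
def two_level_digitize(samples, low, high, initial=False):
    """Convert analog samples to booleans with hysteresis.

    The output only goes True above `high` and only goes False below `low`;
    between the two thresholds it holds the previous state, which is what
    rejects noise that a single-threshold comparison would turn into
    spurious transitions.
    """
    state, out = initial, []
    for s in samples:
        if s > high:
            state = True
        elif s < low:
            state = False
        out.append(state)
    return out

# Noise around 0.5 stays between the thresholds and is ignored.
print(two_level_digitize([0.0, 0.5, 1.0, 0.5, 0.1, 0.5], low=0.2, high=0.8))
# [False, False, True, True, False, False]
```

    A single pass over the samples with one carried state is all the technique needs, which is why one loop suffices in the LabVIEW version as well.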

  • A Threshold of 50

    We are running ODP.Net 9.2.0.4, IIS 5.0, .Net Framework 1.1, ASP.Net 1.1, IE 6.0, Oracle Server and Client 9.2.0.4, all on Windows 2000 Server service pack 4.
    We have discovered a curious problem when the 50th user since IISRESET (aspnet_wp restart) attempts to access certain functions in the application (not at logon, not at initial database access, only when they reach one of a couple of different functions within the application).
    Test Procedure: the tester opens a new browser session, logs on with a different userid, navigates to a particular point in the application, and executes one of the offending functions. Then the next tester does the same … and so on. 50+ different userids are employed in this test.
    Users 1 – 49 get to and execute the function (which is a cmdExecuteScalar or cmdExecuteReader on different tables). For these first 49, the function executes.
    For the 50th and subsequent users, an error is thrown. In different test cycles we used different userids (so the 50th user was not the same userid from test to test).
    If connection pooling is OFF (false), the error thrown is “ORA-12560 TNS Protocol Adapter Error”.
    If connection pooling is ON (true), the error thrown is “ORA-00942 Table or View does not exist”. This error gets thrown at the 50th distinct user regardless of the min pool size setting.
    Furthermore, once beyond the threshold, if we logout some of the earlier 1 – 49 userids and log back in, they still work. If we log them out and try new userids, the new ones throw the error. The original (1 – 49 pool) can still utilize the application.
    On the database side, we turned-on full blown logging (lsnrctl trace 16) in an attempt to catch the error. The attempt by the 50th and subsequent user never gets to the database. The ORA errors are thrown before the round-trip from aspnet_wp to the database.
    On the web server side, we are about to run another set of tests with ODP.Net tracing on. (I was not aware of this feature on our earlier tests.)
    This problem is also in production. The user base is not that big, but it does accumulate past 50 users sometime during the afternoon. It usually occurs around the same time almost every day, depending on activity. Each time, the customer has to bounce IIS. This is not good.
    Does anyone out there know of some configuration setting in ODP.Net, IIS, the .Net Config files, or elsewhere that could be a suspect here?

    More information.
    We set ODP.Net logging and ran through 50 users until we repeated the error (again). Unfortunately, it was not very illuminating to us; perhaps someone else will find it so.
    These are the lines associated with the "ORA-00942 Table or View not found" error that is thrown when using connection pooling. Remember, the 1st through 49th users since the aspnet_wp restart ran this query with no issue.
    For the 50th (and subsequent) Logon User....
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OracleConnection::Open()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OracleConnection::Open()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OracleCommand::ExecuteReader()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsSqlExecuteReader()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsErrAllocCtx()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsErrAllocCtx(): RetCode=0 Line=185
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsSqlAllocCtx()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsSqlAllocCtx(): RetCode=0 Line=80
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 OpsSqlExecuteNonQuery(): SQL: SELECT Request_Type_ID , Description FROM REQUEST_TYPE WHERE SPO_ID = 'C-130' ORDER BY sort_order ASC
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsSqlExecuteReader(): RetCode=-1 Line=308
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsErrGetOpoCtx()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ERROR) Oracle error code=942
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsErrGetOpoCtx(): RetCode=0 Line=125
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsSqlFreeValCtx()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsSqlFreeValCtx(): RetCode=0 Line=154
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsErrFreeCtx()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsErrFreeCtx(): RetCode=0 Line=212
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsSqlFreeCtx()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsSqlFreeCtx(): RetCode=0 Line=105
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OracleConnection::Close()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsConCloseProxyAuthUserSession()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsConCloseProxyAuthUserSession(): RetCode=0 Line=488
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsConCheckConStatus()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsConCheckConStatus(): RetCode=0 Line=1444
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OracleConnection::Close()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OracleConnection::Dispose()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (ENTRY) OpsConFreeValCtx()
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OpsConFreeValCtx(): RetCode=0 Line=587
    TIME:2004/ 8/27-14:41:48:951 TID: 4c8 (EXIT) OracleConnection::Dispose()
    Again, any help would be greatly appreciated!

  • SSIS package is taking more time on production server than test server

    The same SSIS package was taking 18 hours on production but just 6-7 hours on the test server. When I increased the frequency of the Tlog backups, it dropped to 11 hours, but a difference of 3-4 hours remains.
    I suspected the production server might have online transactions causing this issue; however, this was proved wrong when I shut down the production sites and got the same result.
    Any idea where I am going wrong?
    Santosh Singh

    Thanks for taking time and trying to understand the explanation I had shared, please find comments inline:
    Are they performing server or database maintenance in the same window of time when the packages run. These include backups, index re-organize/rebuilt, partition switching, DBCC checks, statistics update, etc.?
    Comments: I am DBA too for them so I know better what is going on. No such jobs are executing during this window.
    What is the recovery model being used in production Full, Simple, Bulk-logged?
    Comments: Full
    Are the indexes and statistics being maintained in the source system you are extracting data from?
    Comments:Yes
    Have SQL Server configurations been modified such as Max Degree of Parallelism and Parallelism thresholds?
    Comments: Yes it was earlier, then I changed that back to Default while comparing Test environment which has everything default.
    Are there any special trace flags used to start up SQL Server in production?
    Comments: No only Trace 1118 for Tempdb enabled.
    Is SQL Server Auditing, Compression or Encryption being used in production?
    Comments: Only Backup compression has been enabled, no Auditing and no Encryption
    Is there Replication, Mirroring, Log-Shipping, Change Data Capture, Change Tracking being used in production?
    Comments: No replication, mirroring, log shipping, change data capture, or change tracking is being used in production. We had log shipping earlier, but we removed it a long time ago.
    Are there other applications running along with SQL Server in the same Windows Server?
    Comments: No only sql server.
    Has SQL Server max memory changed from the default value? How much RAM is left for Integration Services and Windows? Check if paging is occurring.
    Comments: This is a cluster setup with two instances on each node, so no, max memory is not at the default; I had to adjust it. However, I didn't find high memory usage or memory issues in any of the logs (SQL or Windows). 28 GB is left for Integration Services and Windows, and 50 GB apiece for the two SQL Server instances. Paging looks normal too.
    Are there query hints being used in the SELECT queries?
    Comments: The query we have is bad, but that is our last target. The same query executes in 6 hours on the test server, and why it now takes 11 hours is the question of the hour for me.
    Thanks for all help again.
    Please let me know if I am missing anything.
    Santosh Singh

  • Confused with hardware tests / s.m.a.r.t. failing

    This morning I opened Disk Utility and read with dread the text saying "SMART status failing" on my startup disk. Since then, I have done several backups and decided that I will most probably need to purchase a new disk.
    I also ran some other tests with other applications, which showed different results:
    Diskwarrior - Verified
    Smartutility - failing
    Smartreader - failing
    Apple Hardware test - No trouble found
    Apple advanced hardware test - No trouble found
    Disk Utility 1st time - failing
    Disk Utility - 2nd,3rd time - verified
    For some strange reason, Disk Utility changed its mind after causing me all this panic.
    What do you think I should do? Is there a problem or not? Any other good apps to test the disk with?
    I am in the middle of editing a documentary and don't want to be wondering whether the drive will fail in the next hour.
    thanks for your time

    *Regarding SMARTreporter: the status is nice and green and says verified, but when doing the tests it all seems not so green...*
    Furthermore, SMARTutility says that 5 uncorrectable problems occurred at 3211 hours.
    How can I see the drive's total running hours, what this error was, and why it happened?
    Adding some info in case it helps:
    *Here are the 2 tests from SMARTreporter*
    *SMART Self-test* log structure revision number 1
    Num Test_Description Status Remaining LifeTime(hours) LBAof_firsterror
    # 1 Short offline Completed: read failure 90% 3221 417850920
    # 2 Short offline Completed: read failure 90% 3219 417850920
    # 3 Short offline Completed: read failure 90% 3219 417850920
    # 4 Short offline Completed: read failure 90% 3218 417850920
    *SMART Attributes* Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    1 RawRead_ErrorRate 0x000e 111 096 006 Old_age Always - 146133028
    3 SpinUpTime 0x0003 100 098 000 Pre-fail Always - 0
    4 StartStopCount 0x0032 099 099 020 Old_age Always - 1994
    5 ReallocatedSectorCt 0x0033 100 100 036 Pre-fail Always - 0
    7 SeekErrorRate 0x000f 071 060 030 Pre-fail Always - 12932560296
    9 PowerOnHours 0x0032 097 097 000 Old_age Always - 3221
    10 SpinRetryCount 0x0013 099 083 097 Pre-fail Always Inthepast 0
    12 PowerCycleCount 0x0032 099 099 020 Old_age Always - 1770
    184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
    187 Reported_Uncorrect 0x0032 001 001 000 Old_age Always - 383
    188 Command_Timeout 0x0032 100 099 000 Old_age Always - 17180131345
    189 HighFlyWrites 0x003a 100 100 000 Old_age Always - 0
    190 AirflowTemperatureCel 0x0022 066 052 045 Old_age Always - 34 (Lifetime Min/Max 19/40)
    191 G-SenseErrorRate 0x0032 100 100 000 Old_age Always - 1239
    192 Power-OffRetractCount 0x0032 100 100 000 Old_age Always - 35
    193 LoadCycleCount 0x0032 001 001 000 Old_age Always - 209116
    194 Temperature_Celsius 0x0022 034 048 000 Old_age Always - 34 (Lifetime Min/Max 0/62260)
    195 HardwareECCRecovered 0x001a 049 042 000 Old_age Always - 146133028
    197 CurrentPendingSector 0x0012 100 100 000 Old_age Always - 17179869188
    198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 17179869188
    199 UDMACRC_ErrorCount 0x003e 200 200 000 Old_age Always - 524288
    254 FreeFallSensor 0x0232 001 001 000 Old_age Always - 46
    4 StartStopCount 0x0000 000 000 000 Old_age Offline - 0
    *And here is the report from SMARTutility:*
    Error 318 occured at disk power-on lifetime: 3211 hours
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 FF FF FF 0F
    Error: Uncorrectable Error
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC PoweredUpTime Command/Feature_Name
    25 00 58 FF FF FF 4F 00 02:18:13.329 READ DMA EXT
    25 00 58 FF FF FF 4F 00 02:18:10.391 READ DMA EXT
    EF 02 00 00 00 00 00 00 02:18:10.390 SET FEATURES [Enable write cache]
    EF AA 00 00 00 00 00 00 02:18:10.390 SET FEATURES [Enable read look-ahead]
    EF 03 46 00 00 00 00 00 02:18:10.390 SET FEATURES [Set transfer mode]
    Error 317 occured at disk power-on lifetime: 3211 hours
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 FF FF FF 0F
    Error: Uncorrectable Error
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC PoweredUpTime Command/Feature_Name
    25 00 58 FF FF FF 4F 00 02:18:10.391 READ DMA EXT
    EF 02 00 00 00 00 00 00 02:18:10.390 SET FEATURES [Enable write cache]
    EF AA 00 00 00 00 00 00 02:18:10.390 SET FEATURES [Enable read look-ahead]
    EF 03 46 00 00 00 00 00 02:18:10.390 SET FEATURES [Set transfer mode]
    00 00 00 00 00 00 00 FF 02:18:10.378 NOP [Abort queued commands]
    Error 316 occured at disk power-on lifetime: 3211 hours
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 FF FF FF 0F
    Error: Uncorrectable Error
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC PoweredUpTime Command/Feature_Name
    25 00 58 FF FF FF 4F 00 02:17:37.687 READ DMA EXT
    2F 00 01 10 00 00 00 00 02:17:37.686 READ LOG EXT
    60 00 58 FF FF FF 4F 00 02:17:34.942 READ FPDMA QUEUED
    61 00 10 78 99 5C 40 00 02:17:25.411 WRITE FPDMA QUEUED
    61 00 10 B8 79 56 40 00 02:17:25.410 WRITE FPDMA QUEUED
    Error 315 occured at disk power-on lifetime: 3211 hours
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 FF FF FF 0F
    Error: Unknown
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC PoweredUpTime Command/Feature_Name
    60 00 58 FF FF FF 4F 00 02:17:34.942 READ FPDMA QUEUED
    61 00 10 78 99 5C 40 00 02:17:25.411 WRITE FPDMA QUEUED
    61 00 10 B8 79 56 40 00 02:17:25.410 WRITE FPDMA QUEUED
    61 00 10 C8 1D 51 40 00 02:17:25.406 WRITE FPDMA QUEUED
    61 00 10 88 F1 50 40 00 02:17:25.405 WRITE FPDMA QUEUED
    Error 314 occured at disk power-on lifetime: 3211 hours
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 FF FF FF 0F
    Error: Uncorrectable Error
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC PoweredUpTime Command/Feature_Name
    25 00 58 FF FF FF 4F 00 02:16:54.772 READ DMA EXT
    25 00 58 FF FF FF 4F 00 02:16:51.834 READ DMA EXT
    EF 02 00 00 00 00 00 00 02:16:51.833 SET FEATURES [Enable write cache]
    EF AA 00 00 00 00 00 00 02:16:51.833 SET FEATURES [Enable read look-ahead]
    EF 03 46 00 00 00 00 00 02:16:51.832 SET FEATURES [Set transfer mode]
    Self Test Log Revision: 1
    Self Test Log Count: 4
    Self Test Log:
    Num Test_Description Status Remaining LifeTime(hours) LBAof_firsterror
    # 1 Short offline Completed: read failure 90% 3221 417850920
    # 2 Short offline Completed: read failure 90% 3219 417850920
    # 3 Short offline Completed: read failure 90% 3219 417850920
    # 4 Short offline Completed: read failure 90% 3218 417850920
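    For what it's worth, the pass/fail logic behind those attribute columns is simple: an attribute counts as failing when its normalized VALUE has dropped to its THRESH, and as having failed in the past when WORST has. A minimal sketch over three attributes copied from the listing above:

```python
# Sketch of the SMART pass/fail rule most tools apply to the attribute table.
# The numbers below are copied from the smartctl-style listing in the post.
attrs = [
    # (id, name, value, worst, thresh)
    (5,   "ReallocatedSectorCt", 100, 100, 36),
    (10,  "SpinRetryCount",       99,  83, 97),   # log shows "In_the_past"
    (187, "Reported_Uncorrect",    1,   1,  0),
]

def status(value, worst, thresh):
    if value <= thresh:
        return "FAILING"
    if worst <= thresh:
        return "failed in the past"
    return "ok"

report = {name: status(v, w, t) for _, name, v, w, t in attrs}
print(report)
```

    This matches the listing: SpinRetryCount is the one flagged "In_the_past" (WORST 83 is below its threshold of 97), while Reported_Uncorrect has a threshold of 0 and so never formally "fails" despite its alarming raw count.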

  • How to test if TV-Out (VIVO) is defective?

    Hi everybody!
    How can I test if the VIVO connector (TV-Out) of my G4 ti4200-VIVO is defective?
    I've read the thread "GF4 TV-OUT Guide" and tried everything explained there, but my TV keeps showing a blank screen.
    There's no load detection (the drivers don't detect a TV connected). I've also put my monitor on the DVI connector, but nothing seems to work.
    I've also tried booting my computer with ONLY the TV-out plugged in. Nothing appears on the screen. I've tested the TV connection with another device and it works fine.
    I'm starting to think that something in the card is broken...
    Should the boot-up process appear on the TV when only it is connected?
    Many thanks in advance!!!
    brutrak

    Hi,
    If you only have a TV connected, you should get the boot process appearing on the TV. It might not be readable, as it tends to default to NTSC on start-up, and older PAL TVs might not cope very well, but you should at least get something.
    A rare (but not unknown) problem is that the DC input impedance of the TV set is too high. TV sets (and other video gear, e.g. VCRs) should have a DC impedance (resistance) of about 75 ohms. The "TV-detect" circuitry on most TV-outs does a fairly crude resistance measurement, and if it measures a resistance below a threshold value (usually somewhere in the 100 to 200 ohm range), it decides a TV is connected. So, if you can get hold of a resistance meter (or multimeter), measure the resistance across the inner pin of the lead you plug into the TV (with the lead plugged into the TV but not into the VIVO, measured at the VIVO end); you should get a reading of around 75 ohms.
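    The load-detection logic described here can be sketched as a simple comparison. The 150-ohm cutoff below is an assumed mid-range value, since the real threshold varies from card to card:

```python
def tv_detected(measured_ohms, threshold_ohms=150):
    """Crude model of TV-out load detection: a properly terminated
    75-ohm TV input reads well below the detection threshold."""
    return measured_ohms < threshold_ohms

print(tv_detected(75))    # True  - normal TV input impedance, TV sensed
print(tv_detected(1000))  # False - open/high-impedance input, no TV sensed
```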
    Naturally, you have to connect the TV to the Video OUT, not the IN, to get a picture out.
    Cheers

  • App Module Tester - JBO-33001: Cannot find the configuration file

    I have a custom main function for my application module. Launching the tester is fine:
    launchTester("a.b.c.user_mnt.datamodel", "user_mntServiceLocal");
    but if I comment out that line and add custom code:
    String amDef = "a.b.c.user_mnt.datamodel";
    String config = "user_mntServiceLocal";
    ApplicationModule am = Configuration.createRootApplicationModule(amDef, config);
    I get the following error:
    Diagnostics: (BC4J Bootstrap) Routing diagnostics to standard output
    (use -Djbo.debugoutput=silent to remove)
    [00] Diagnostic Properties: Timing:false Functions:false Linecount:true Threshold:6
    [01] CommonMessageBundle (language base) being initialized
    [02] Configuration.loadFromClassPath() failed
    [03] Stringmanager using default locale: 'en_US'
    [04] oracle.jbo.ConfigException: JBO-33001: Cannot find the configuration file /a/b/c/user_mnt/common/bc4j.xcfg in the classpath
    But the bc4j.xcfg exists in
    /a/b/c/user_mnt/datamodel/common/bc4j.xcfg
    Is there a way to change where JDev is looking for that file?
    Thank you,
    Brian

    A completely new workspace doesn't fix the problem.
    If I move the common folder to where JDev is looking, it runs fine (I get a runtime error, but it runs nonetheless).
    Any help?
    Brian

  • How to judge correct rap in impact test

    Hi, guys,
       I've run into trouble with my impact test project.
       I use a PCI-4472 with a modal hammer connected to channel 0 and an accelerometer connected to channel 1.
       The hammer acts as the trigger for capturing the impact response from the accelerometer.
       My questions are:
       1) I need to make sure the hammer raps only once in each impact action. If it does, I record the related response from the accelerometer; if not, I need to repeat the impact without recording the response. How can I check that?
       2) For this case, do I still need averaging? (I saw that the SVXMPL_impact test(DAQmx).vi computes some averages over the data, but since in my application each impact happens only once, can I skip the averaging function? I know little about averaging theory.)
       3) How many samples should I use?
    Thanks for any help
    Tim

    To 1)
    How do you see a double rap?
    I assume you take a look at the hammer signal and look for more than one peak. Well, do the same in LabVIEW. Hint:
    Waveform Peak Detection VI
    Owning Palette: Waveform Monitoring VIs
    Requires: Full Development System
    Finds the locations, amplitudes, and second derivatives of peaks and valleys in Signal In. Wire data to the Signal In input to determine the polymorphic instance to use, or manually select the instance.
    This VI is similar to the Peak Detector VI.
    If you search for the overall max or min with
    Waveform Min Max VI
    Owning Palette: Analog Waveform VIs and Functions
    Requires: Base Package
    Determines the maximum and minimum values and their associated time values for a waveform.
    you can determine a useful relative threshold.
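    Outside LabVIEW, the same single-rap check can be sketched in a few lines: find the hammer channel's peak, derive a relative threshold from it, and count how many times the signal rises above that threshold (the sample values in the example are made up):

```python
def is_single_rap(hammer_signal, rel_threshold=0.5):
    """Count threshold crossings on the hammer channel; accept the hit
    only if exactly one peak rises above rel_threshold * max."""
    peak = max(hammer_signal)
    thresh = rel_threshold * peak
    raps = 0
    above = False
    for sample in hammer_signal:
        if sample > thresh and not above:
            raps += 1          # rising edge into the above-threshold region
        above = sample > thresh
    return raps == 1

print(is_single_rap([0, 0.1, 4.8, 0.3, 0.1, 0]))       # True  - clean single hit
print(is_single_rap([0, 4.5, 0.2, 0.1, 3.9, 0.2, 0]))  # False - double rap
```

    A double rap shows up as a second excursion above the relative threshold, which is exactly what this edge counter rejects.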
    To 3)
    What is the lowest frequency you want to detect? You should capture at least 1 period; together with pretriggering and windowing, go for 2 periods.
    What is the highest frequency you expect? For impact, choose your sample rate at least 10x, better 100x, higher than that. (Some say choose the highest rate you can get, to avoid phase shifts due to the anti-aliasing filter.)
    Both values will give you the number of samples.
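    Henrik's two rules of thumb combine into a sample count like this; the frequencies in the example call are assumed for illustration, not taken from the post:

```python
def impact_capture_params(f_low_hz, f_high_hz, periods=2, oversample=10):
    """Rule-of-thumb capture settings: sample rate from the highest
    expected frequency, record length from the lowest one of interest."""
    fs = oversample * f_high_hz      # at least 10x (better 100x) the top frequency
    t_capture = periods / f_low_hz   # capture ~2 periods of the lowest frequency
    n_samples = int(fs * t_capture)
    return fs, n_samples

# e.g. 10 Hz lowest mode of interest, 2 kHz highest expected content:
print(impact_capture_params(10, 2000))   # (20000, 4000) -> 20 kS/s, 4000 samples
```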
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
    '˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'
