Database Read/Write Ratio

Hi all,
Could I use the physical reads / physical writes per second figures in a monthly AWR report, or the v$sysstat physical reads / physical writes statistics, to calculate the database read/write percentage?
Or is there a different method to understand the database's behaviour for sizing the I/O workload?
Best Regards

If I had to tune performance, I would take a time interval when performance degrades.
When you take the whole month you put everything in one sack (e.g. daily OLTP transactions and nightly backups), so the statistics may be useless.
Summarizing: look at intervals between selected snapshots (e.g. 6 am till 6 pm).
If, however, you want to calculate for calculation's sake, then you may use v$sysstat, which contains statistics accumulated since DB startup.
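For example, a minimal sketch of that calculation against v$sysstat (statistic names as used in recent Oracle versions; both counters are in database blocks, not bytes):

    -- Rough read/write split since instance startup (a sketch, not from the thread above)
    SELECT ROUND(100 * rd / NULLIF(rd + wr, 0), 1) AS read_pct,
           ROUND(100 * wr / NULLIF(rd + wr, 0), 1) AS write_pct
    FROM  (SELECT SUM(DECODE(name, 'physical reads',  value)) AS rd,
                  SUM(DECODE(name, 'physical writes', value)) AS wr
           FROM   v$sysstat
           WHERE  name IN ('physical reads', 'physical writes'));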

Similar Messages

  • Open standby database read/write

    What's the syntax to open a standby database read/write?
    Any help will be appreciated.
    Thanks

    Technically it's not "open the standby database read/write";
    Activate the standby database using the SQL ALTER DATABASE ACTIVATE STANDBY DATABASE statement.
    This converts the standby database to a primary database, creates a new reset logs branch, and opens the database. See Section 8.5 to learn how the standby database reacts to the new reset logs branch.
    A physical standby can only be opened read-only (with redo apply still running) using the 11g Active Data Guard option; opening it read/write without activating it requires converting it to a snapshot standby, which is also an 11g feature.
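    For reference, a minimal sketch of the activation described above (a one-way step unless Flashback Database or a rebuild is used):

        -- Converts the standby into a primary and opens it read/write
        ALTER DATABASE ACTIVATE STANDBY DATABASE;
        ALTER DATABASE OPEN;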

  • Metrics on database read/write/delete based on size of the table

    Hi,
    Though we have many performance measurement tools, it is sometimes difficult for developers to trace and measure each read; some reads that look fine in the development environment turn out to be show-stoppers in quality environments.
    I am trying to find out whether we can give a rough estimate of the ideal response time of an RFC based on the number of database fetches/writes in the RFC, assuming that the loops, internal table reads, etc. are optimized.
    E.g. if my RFC performs two reads and one insert, I would like to arrive at a figure such as: 200 ms should be the ideal runtime of the RFC.
    I would like to base my calculations on the following parameters:
    - Table Size
    - Key/Index used
    E.g. for a FETCH operation:
        Table Size | Key Used | Total Time
        Up to 1 GB | Primary  | 100 ms
        1 - 5 GB   | Primary  | 200 ms
    Similarly for insert and delete.
    I have the following questions for the forum in relation to the above:
    - Is the above approach good enough for arriving at an approximate metric on the total response time of an RFC?
    - Are there any other alternatives, apart from using the standard SAP tools?
    - How are metrics decided for implementations with Java and .NET frontends?
    Thank you,
    Chaitanya

    Hi There
    Do you mean dba_segments table?
    My boss wants to export 2 big tables and import them into a training environment; each table contains more than 2 million rows.
    I want to know how big (bytes or megabytes) those two tables are on the hard drive, because we are going to run out of space on that server. I am not sure whether the disk space can hold such a big export, so if I can find out how big those 2 tables are, I can decide what to do about the export. For example: I have 200 MB left on my /home directory, which is the only place we can put the export; those 2 tables could be bigger than 400 MB even if I compress the export file.
    Hopefully this time it is clear.
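    A minimal sketch for checking the on-disk size from dba_segments (the owner and table names below are placeholders, not from the thread):

        -- Size on disk of the two tables, in MB
        SELECT owner, segment_name, ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
        FROM   dba_segments
        WHERE  owner = 'SCOTT'                               -- hypothetical owner
        AND    segment_name IN ('BIG_TABLE1', 'BIG_TABLE2')  -- hypothetical table names
        GROUP BY owner, segment_name;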

  • Making a SQL Server database read/write from read-only

    Hey guys,
    I attached AdventureWorks in SQL Server 2008 and it is showing as read-only,
    so please guide me on how to make it read/write or remove the read-only tag from the database.
    Thanks in advance.
    Sujeet, software developer, Kolkata

    Hi,
    Is there an error message while you attach (or restore) the database? If so, please provide it.
    If not, right-click your database, choose Properties -> go to Options -> scroll to the end, then change the Read Only option to False (a T-SQL alternative is sketched after this reply).
    I hope this is helpful.
    Elmozamil Elamir
    MyBlog
    Please mark it as Answered if it answered your question,
    or mark it as Helpful if it helped you to solve your problem.
    Elmozamil Elamir Hamid
    http://elmozamil.blogspot.com
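    The same change can also be made in T-SQL (a minimal sketch; the database name is assumed to match the attached AdventureWorks database):

        -- Take the database out of read-only mode
        ALTER DATABASE AdventureWorks SET READ_WRITE WITH ROLLBACK IMMEDIATE;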

  • Determine database read/write statistics

    From the following (in Oracle documentation)
    DB_WRITER_PROCESSES parameter is useful for systems that modify data heavily. It specifies the initial number of database writer processes for an instance.
    And from the "Deployment Guide for Oracle on Windows using Dell PowerEdge Servers.pdf" in http://www.oracle.com/technology/tech/windows/index.html
    RAID LEVELS: I have heard that, for the disks where the datafiles reside, the following is true:
    If I/O is <= 90% reads, then it is advisable to go for RAID 10. If I/O is > 90% reads, then RAID 5 could be considered.
    I would like to know:
    1. How do we find out whether our database is "read heavy" or "write heavy"? Are there any scripts available, please?
    2. In commercial environments, what sort of RAID levels are normally used for "read heavy" and "write heavy" databases?
    Edited by: sandeshd on Oct 14, 2009 3:11 PM

    We were in a similar situation some weeks ago; we decided to create a logoff trigger to save the byte statistics for a specific schema. You can then work with this data by importing it into Excel or something similar.
    DROP TABLESPACE BYTES_USUARIOS INCLUDING CONTENTS AND DATAFILES;
    CREATE TABLESPACE BYTES_USUARIOS DATAFILE
    '/oradata/oradata/ewok/bytes_usuarios.dbf' SIZE 1024M AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED
    LOGGING
    ONLINE
    PERMANENT
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    BLOCKSIZE 8K
    SEGMENT SPACE MANAGEMENT MANUAL
    FLASHBACK ON;
    +++++++++++
    CREATE USER B1
    IDENTIFIED BY VALUES %password%
    DEFAULT TABLESPACE BYTES_USUARIOS
    TEMPORARY TABLESPACE TEMP
    PROFILE MONITORING_PROFILE
    ACCOUNT UNLOCK;
    -- 1 Role for B1
    GRANT CONNECT TO B1;
    ALTER USER B1 DEFAULT ROLE NONE;
    -- 2 System Privileges for B1
    GRANT CREATE TABLE TO B1;
    GRANT CREATE SESSION TO B1;
    -- 1 Tablespace Quota for B1
    ALTER USER B1 QUOTA UNLIMITED ON BYTES_USUARIOS;
    ++++++++++
    CREATE TABLE b1.BYTES_USUARIOS (
      USERNAME    VARCHAR2(30 BYTE),
      SID         NUMBER,
      SERIAL#     NUMBER,
      MACHINE     VARCHAR2(64 BYTE),
      LOGON_TIME  DATE,
      CLS         VARCHAR2(53 BYTE),
      NAME        VARCHAR2(64 BYTE),
      VALUE       NUMBER
    )
    TABLESPACE BYTES_USUARIOS
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 64K
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      FREELISTS 1
      FREELIST GROUPS 1
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    grant all on b1.bytes_usuarios to system;
    ++++++++++++++++
    grant select on v_$mystat to system;
    grant select on v_$session to system;
    grant select on v_$statname to system;
    DROP TRIGGER SYSTEM.TRG_LOGOFF;
    CREATE OR REPLACE TRIGGER SYSTEM.TRG_LOGOFF
    BEFORE LOGOFF ON DATABASE
    BEGIN
      -- Record the session's sent/received byte statistics (in MB) at logoff
      INSERT INTO b1.bytes_usuarios
        SELECT ss.username,
               ss.sid, ss.serial#, ss.machine, ss.logon_time,
               DECODE (BITAND(  1, class),   1, 'User ', '')            ||
               DECODE (BITAND(  2, class),   2, 'Redo ', '')            ||
               DECODE (BITAND(  4, class),   4, 'Enqueue ', '')         ||
               DECODE (BITAND(  8, class),   8, 'Cache ', '')           ||
               DECODE (BITAND( 16, class),  16, 'Parallel Server ', '') ||
               DECODE (BITAND( 32, class),  32, 'OS ', '')              ||
               DECODE (BITAND( 64, class),  64, 'SQL ', '')             ||
               DECODE (BITAND(128, class), 128, 'Debug ', '') cls,
               name, (value / 1024 / 1024)
        FROM   sys.v_$statname m, sys.v_$mystat s, sys.v_$session ss
        WHERE  m.statistic# = s.statistic#
        AND    (name LIKE '%bytes sent%' OR name LIKE '%bytes received%')
        AND    ss.sid = (SELECT DISTINCT sid FROM sys.v_$mystat);
    END;
    /
    ++++++++++++
    All traffic:
    select username, name, sum(value)
    from b1.bytes_usuarios
    group by username, name
    order by username, name;
    Only bytes sent:
    select username, name, sum(value)
    from b1.bytes_usuarios
    where name like '%sent%'
    group by username, name
    order by username, name;
    Only bytes received:
    select username, name, sum(value)
    from b1.bytes_usuarios
    where name like '%received%'
    group by username, name
    order by username, name;
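    Separately, to answer question 1 more directly, here is a sketch (not from the reply above) that compares physical reads and writes between two AWR snapshots; the snapshot IDs are placeholders and the query assumes a single instance with no restart in between:

        -- Read vs. write block counts between two AWR snapshots (snap_ids 100 and 110 are hypothetical)
        SELECT stat_name,
               MAX(value) - MIN(value) AS delta
        FROM   dba_hist_sysstat
        WHERE  stat_name IN ('physical reads', 'physical writes')
        AND    snap_id   IN (100, 110)
        GROUP BY stat_name;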

  • Access from ABAP to an external MySQL database (read/write)

    Hello!
    We have an external MySQL DB (running on Linux). We need to read this database from our SAP system (running on Linux with an Oracle DB) in order to create a purchase order. After creating this order in our SAP system we need to update the dataset in the MySQL DB (to flag that the order was created successfully).
    How can we create the Connection to the MySQL-DB?
    Thank you.
    Best Regards
    Markus

    Hi Markus!
    Sorry for the delay, the day was well filled.
    For an example of ADBC, as Kennet said, you can use the ADBC_DEMO program.
    Regarding RFC, I advise you to take a look at SAP JCo (SAP Java Connector); this is a SAP middleware component that enables the development of SAP-compatible components and applications in Java. With it you can send whatever you want when interfacing with SAP.
    As I said in my last post, I would advise creating an RFC instead of using Native SQL. I am not sure about the scenario in which you have to develop this solution, but I believe it will be more secure.
    Regards.

  • How to open a "manual" Physical standby database in read/write mode

    Hi,
    I am running Oracle Database 10g Release 10.2.0.3.0 - 64bit Production Standard Edition on Linux version 2.6.9-42.0.8.ELsmp ([email protected]) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-3))
    I've created a physical standby database, but since I am running Standard Edition, I am not using the DataGuard features. I use the rsync utility to copy over the archivelogs to the standby database, and I apply them periodically to the standby database.
    The standby database is started this way :
    startup nomount pfile='/u01/oradata/orcl/initorcl.stdby';
    alter database mount standby database;
    Everything runs perfectly fine; I can do "alter database open read only" and then run SELECTs against tables to confirm that everything is up to date.
    The thing is, if I shutdown immediate the database and then just do a startup:
    shutdown immediate;
    startup;
    The database opens with no error messages, but is still in read-only mode...
    I read that the default behavior for a standby database is to open read-only, which is what I am experiencing, but I would like to know the right way to open it in read/write mode (I understand that after that, my standby will not be a standby anymore and that I will have to recreate it).
    Thanks,
    Mat

    Hello,
    There are features which allow you to open a standby database in read/write mode, but as far as I know they need Enterprise Edition.
    In Enterprise Edition you can use a logical standby database. Moreover, for a physical standby there is a way, using Flashback Database, to roll the database backward and avoid recreating the standby (a sketch follows this reply).
    In Standard Edition I'm afraid that you'll have to recreate your standby database.
    Best regards,
    Jean-Valentin
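    A minimal sketch of that Enterprise Edition flashback approach (all names are hypothetical, Flashback Database must already be enabled, and the SHUTDOWN/STARTUP lines are SQL*Plus commands; it does not apply to the Standard Edition setup in the question):

        -- Before opening the standby read/write
        CREATE RESTORE POINT before_rw_open GUARANTEE FLASHBACK DATABASE;
        ALTER DATABASE ACTIVATE STANDBY DATABASE;
        ALTER DATABASE OPEN;
        -- Later, to turn it back into a physical standby instead of rebuilding it
        SHUTDOWN IMMEDIATE
        STARTUP MOUNT
        FLASHBACK DATABASE TO RESTORE POINT before_rw_open;
        ALTER DATABASE CONVERT TO PHYSICAL STANDBY;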

  • Clone Production Database and Convert into Read Write Mode

    Hi,
    Please help me for below question...
    How can I create a test database from a production database without transporting a backup of the production database to the test server, with the test database using a different directory structure and opened in read/write mode?
    Please find me a solution as early as possible...
    Thanks & Regards
    Akhil

    If you don't want to move the backup from prod to dev, you need to create an RMAN catalog and have access to it from the dev server. After that you will be able to duplicate prod to dev without moving the backup, and the duplicated database will be in read/write mode by default.
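    A minimal RMAN sketch of that kind of duplication (this uses 11g active database duplication over the network; the connect strings, auxiliary instance name and file-name conversion paths are placeholders):

        RMAN> CONNECT TARGET sys@PROD
        RMAN> CONNECT AUXILIARY sys@TEST
        RMAN> DUPLICATE TARGET DATABASE TO TEST
                FROM ACTIVE DATABASE
                DB_FILE_NAME_CONVERT ('/u01/oradata/PROD/', '/u02/oradata/TEST/');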

  • Online read + write from/to ms sql server database

    hi all,
    We're using R/3 4.6C. We want to connect to an MS SQL Server database and read/write data from an ABAP program.
    What's the best (and fastest) way to do this?
    joerg

    I only know of DBCON (database multiconnect): see SAP Notes 178949 and 323151 for more details.
    Message was edited by: max bianchi

  • Can PL/SQL read/write from a database server to another server?

    hi,
    please advise.
    thanks

    What I mean is: can I output a text file to other servers from PL/SQL through the UTL_FILE package?
    No. UTL_FILE reads/writes only on the server where your PL/SQL code is running.
    But maybe you could map a network drive. Haven't tried that.
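    If you do go the mapped-drive route, a minimal sketch (the UNC path is hypothetical, the share must be visible to the database server's OS, and the oracle OS user needs write permission on it):

        CREATE OR REPLACE DIRECTORY remote_out AS '\\otherserver\exports';

        DECLARE
          f UTL_FILE.FILE_TYPE;
        BEGIN
          -- Writes to the share as seen from the database server, not from the client
          f := UTL_FILE.FOPEN('REMOTE_OUT', 'report.txt', 'W', 32767);
          UTL_FILE.PUT_LINE(f, 'hello');
          UTL_FILE.FCLOSE(f);
        END;
        /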

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the primary host and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine and we see no real difference in startup, file copy and processing speed in LoB applications between this setup and a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during a file repair.
    Our Problem
    Our problem is that for some reason all of the VMs stop responding, or respond very slowly, and you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we often see extensive reads from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<someguid and no file extension>}.
    See the image below.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during and after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during daytime, when the backup finished hours ago according to the log files). There are, however, no such extensive writes to the backup file that is created on an external hard drive, and this does not seem to happen during all backups (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase of errors (for example storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
    (I have split the image to save horizontal space).
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in the VMs and hosts, copying and removing large files to VMs, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be that something in our setup cannot handle all the read/write requests? For instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it's okay, but if this is a production environment, it's not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
    > This is the primary host and normally all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NICs dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2; although Windows Server 2012 has a built-in network teaming feature, I have not found an article which states that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
    > I have seen that using MPIO suggests using different subnets; is this a requirement for using MPIO,
    > or is this just a way to make sure that you do not run out of IP addresses?
    What I found is: if it is possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability and performance. Of course you can put these two NICs in separate subnets, but I don't think it is necessary.
    > Why should it be better to not have dedicated wiring for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that and modify cluster configuration, monitor it and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • Powershell use Connection String to query Database and write to Excel

    Right now I have a PowerShell script that uses ODBC to query a SQL Server 2008/2012 database and write to Excel:
    $excel = New-Object -Com Excel.Application
    $excel.Visible = $True
    $wb = $Excel.Workbooks.Add()
    $ws = $wb.Worksheets.Item(1)
    $ws.name = "GUP Download Activity"
    $qt = $ws.QueryTables.Add("ODBC;DSN=$DSN;UID=$username;PWD=$password", $ws.Range("A1"), $SQL_Statement)
    if ($qt.Refresh()) {
        $ws.Activate()
        $ws.Select()
        $excel.Rows.Item(1).HorizontalAlignment = $xlCenter
        $excel.Rows.Item(1).VerticalAlignment = $xlTop
        $excel.Rows.Item("1:1").Font.Name = "Calibri"
        $excel.Rows.Item("1:1").Font.Size = 11
        $excel.Rows.Item("1:1").Font.Bold = $true
        $filename = "D:\Script\Reports\Status_$a.xlsx"
        if (test-path $filename) { rm $filename }
        $wb.SaveAs($filename, $xlOpenXMLWorkbook) #save as an XML Workbook (xlsx)
        $wb.Saved = $True  #flag it as being saved
        $wb.Close()        #close the document
        $Excel.Quit()      #and the instance of Excel
        $wb = $Null        #set all variables that point to Excel objects to null
        $ws = $Null        #makes sure Excel deflates
        $Excel = $Null     #let the air out
    }
    I would like to use a connection string to query the database and write the results to Excel, i.e.:
    $SQL_Statement = "SELECT ..."
    $conn = New-Object System.Data.SqlClient.SqlConnection
    $conn.ConnectionString = "Server=10.10.10.10;Initial Catalog=mydatabase;User Id=$username;Password=$password;"
    $conn.Open()
    $cmd = New-Object System.Data.SqlClient.SqlCommand($SQL_Statement, $conn)
    do {
        try {
            $rdr = $cmd.ExecuteReader()
            while ($rdr.Read()) {
                $sql_output += ,@($rdr.GetValue(0), $rdr.GetValue(1))
            }
            $transactionComplete = $true
        }
        catch {
            $transactionComplete = $false
        }
    } until ($transactionComplete)
    $conn.Close()
    How would I read the columns and data from $sql_output into an Excel worksheet? Where can I find tutorials on this?

    Hi Q.P.Waverly,
    If you mean to export the data in $sql_output to an Excel document, please try to format the output as PSObjects:
    $sql_output = @()
    do {
        try {
            $rdr = $cmd.ExecuteReader()
            while ($rdr.Read()) {
                $sql_output += New-Object PSObject -Property @{ data1 = $rdr.GetValue(0); data2 = $rdr.GetValue(1) }
            }
            $transactionComplete = $true
        }
        catch {
            $transactionComplete = $false
        }
    } until ($transactionComplete)
    $conn.Close()
    Then please try to use the cmdlet "Export-Csv" to export the data to a CSV file that Excel can open, like:
    $sql_output | Export-Csv d:\data.csv
    Or you can export to worksheet like:
    $excel = New-Object -ComObject Excel.Application
    $excel.Visible = $true
    $workbook = $excel.Workbooks.Add()
    $sheet = $workbook.ActiveSheet
    $counter = 0
    $sql_output | ForEach-Object {
        $counter++
        $sheet.Cells.Item($counter, 1) = $_.data1
        $sheet.Cells.Item($counter, 2) = $_.data2
    }
    Refer to:
    PowerShell and Excel: Fast, Safe, and Reliable
    If there is anything else regarding this issue, please feel free to post back.
    Best Regards,
    Anna Wang

  • Help with utl_file (read/write file from local directory)

    Need help reading/writing a file on a local machine from PL/SQL using a 10.2 DB.
    I am trying to read/write a file from a local directory (laptop) without success.
    I have been able to read/write to the database server's directory but can't write to a directory on the local machine.
    The utl_file_dir parameter has been set to * and the DB restarted, but I can't get it to work... Here's the PL/SQL statement:
    out_file := UTL_FILE.FOPEN ( 'C:\PLSQL', 'TEST.TXT', 'W' ,32767);
    Whenever I run it, it continues to write to the C:\PLSQL directory on the database server. I have looked at the DIRECTORY object and created MY_DIR = C:\PLSQL, but it still writes to the DB server.
    Running 10.2 on a remote windows server, running PLSQL using sql*navigator.
    Thanks in advance for your help..

    I don't see how you expect the server to be able to see your laptop across the network, hack into it and start writing files. Even if it could, what if there is more than one laptop with a C: drive? How would it know which one to write to?
    Is there a shared drive on the server you can access via the laptop?

  • Make CA database read-only?

    Is it possible to make Firefox not accept any new Certificate Authorities without user interaction?
    Long story short, I am having a problem where Firefox is adding a CA to its database that is hosing things up. The actual problem is with the cert that is being offered to me, and I am working with that system's owner to fix the cert, but in the meantime I would like to have Firefox never load the CA into its database. This new CA is added without the user being prompted at all. Simply visiting a specific website causes this new CA to be added to the list (but again, not trusted, just added to the list).
    By default, when it is loaded the new cert has no permissions, so it is not trusted, but the problem is that the CA that is added has a duplicate name with another known-good CA in my list, and it causes things to go wacky when there are two with the same name (different signatures, different issuer, the only thing the same is the nickname).
    I know this isn't a problem with Firefox directly. In 99.9999% of cases when it adds an additional CA to the list it doesn't cause a problem at all because it is not trusted and it won't inherently allow the secure connection. But since fixing the real problem with the owner of the website is going to take a long time (weeks/months) I would like to put a band-aid on the symptom so that I can cut my maintenance of this topic down greatly.
    I am running Red Hat Enterprise Linux 5.9 with Firefox 10.0.12 (RHEL distributed Firefox).
    I could go into extreme detail to explain what is causing my problem, but the short question I have is: "Is there a way to make the CA database read-only?"
    I have tried editing the permissions of cert8.db in ~/.mozilla/firefox/*.default/ to be read-only (0400) instead of the current read-write (0600). However, this causes Firefox to have kittens when I try to use anything that reads the CA list, so I had to change it back.
    I have a hacky script to remove the CA using certutil, but since certutil uses the 'nickname' of the cert to decide which one to delete, and both the good cert and the bad cert have the same nickname, I worry I'll blow away the good one and not the bad one. So far it has consistently matched the bad cert, but I don't have enough confidence that it will do that every time to push it out to my users. If I could use certutil -D with something more specific than the nickname (fingerprint, signature value, etc.) I would be OK with that as well.
    I know there are options to restrict user changes to things like proxies and such; is there a similar way to do it with CAs? about:config doesn't appear to show anything that looks like it would do it.
    Can I have it prompt me when it tries to add a CA to the database and allow me to say yes/no?
    I am OK if the change is something that requires manual intervention if we do decide to add another CA to the list. Currently I am having to repair this problem multiple times a day, and new CAs don't come along all that often.
    Unfortunately, I can't simply upgrade to the latest Firefox as software restrictions are in place. I'm open to any ideas you may have.

    It seems unlikely that two unrelated CA or sub-CA certs would have the exact same issuer name. Was the name too generic? If they are actually related, and I assume created by your organization, perhaps you could simply add the "Integration CA 456" certificate to your root store -- assuming you trust it.
    Firefox validates certificate chains by looking up issuers, and in case of duplicates it by default grabs the one with the most recent "Not Before" date. There is a newer algorithm under development that won't stop at the first match but will continue trying intermediate combinations until it finds a match or runs out of options. If you'd like to try it, you need to add the boolean preference "security.use_libpkix_verification" and set its value to true. This pref will not appear in about:config by default, but you can right-click in about:config to add it.

  • Is it possible to read/write to text file without deleting it?

    I know how to read from a text file and how to write to a text file. The problem I have is that I need to use a text file to store data for my application to both read and write. Really, I would like to write two programs: one reads, the other is used to update the text file. This file is a list of verbs. I thought about using databases but I couldn't get them to work. I downloaded MySQL Server 5.0 and installed it. I then downloaded the driver from http://www.mysql.com/products/driver and ran the auto installer. It said everything worked out perfectly, but when I try these lines:
    Class.forName("com.mysql.jdbc.Driver");
    I get a SQLException that says "no suitable driver".
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    (I thought this driver came with the JDK, but I guess not; I just read about it in a Java book.)
    I get a ClassNotFoundException that just says sun.jdbc.odbc.JdbcOdbcDriver.
    So yeah, SQL is pretty much not working. I need a solution to my problem, either using text files or a different type of database. I heard you could use Excel to create a database, but I have no idea how, and I hear Microsoft Access could also do this; however, I don't have Microsoft Access and I don't intend on paying for it. So, here are my questions:
    1st, is there a tutorial on using Excel databases in Java programs?
    (if not)
    2nd, is there a way to read/write/update a text file without deleting it?
    (if not)
    3rd, is there a way to get SQL working? I have Windows Vista, which could be the problem.
    (if not)
    4th, what could I do to store information on the HD for reading and modifying later?
    thanks, lateralus

    A database might be overkill just for a list of words.
    Thoughts:
    - What is the extent of your "file updating"? If you are just appending to the file, opening it in append mode will keep the file from being clobbered.
    - Otherwise, why not create new files instead of editing them? The file names could include a version number or timestamp, allowing the reader to select the newest one.
