Data Loss when a database crashes

Hi Experts,
I was asked the question "how much data is lost when you suddenly pull the plug on an Oracle database?", and my answer was "all the data in the buffers that has not been committed". We know that committed data can be sitting in the redo logs (not yet written to the datafiles) and that the instance will use it for recovery once it is restarted; however, this got me thinking and asking how much uncommitted data is actually sitting in memory that could potentially be lost if the instance goes down all of a sudden.
With the use of sga_target and sga_max_size, the memory allocated to the buffer cache will vary from time to time. So, is it possible to quantify the amount of lost data at all (in bytes, KB, MB, etc.)?
For example, say the SGA is set to 1 GB (sga_max_size=1000M), with a checkpoint every 15 minutes (as we can't predict/know how often the app commits), and assume a basic transaction size for any small to medium sized database. Redo logs are set to 50 MB (even though this doesn't come into play at this stage).
I would be really interested in your thoughts and ideas please.
Thanks

All Oracle Data Manipulation Language (DML) and Data Definition Language (DDL) statements must record an entry in the redo log buffer before they are executed.
The Redo log buffer is:
•     Part of the System Global Area (SGA)
•     Operating in a circular fashion.
•     Size in bytes determined by the LOG_BUFFER init parameter.
Each Oracle instance has only one log writer process (LGWR). The log writer operates in the background and writes all records from the Redo log buffer to the Redo log files.
Well, just to clarify, the log writer writes committed and uncommitted transactions from the redo log buffer to the log files more or less continuously, not just on commit (1 MB of redo in the buffer, the buffer 1/3 full, every 3 seconds, or a commit, whichever comes first, all trigger redo writes); there is a quick sanity check after the list below.
The LGWR process writes:
•     Every 3 seconds.
•     Immediately when a transaction is committed.
•     When the Redo log buffer is 1/3 full.
•     Before the database writer process (DBWR) writes modified buffers to disk (DBWR signals LGWR).
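If you want a rough feel for how much redo could be sitting in the log buffer at any given moment, a quick sanity check is to look at its configured size and whether sessions ever had to wait for space in it. This is only a sketch using the standard dynamic views; nothing here is specific to your settings:

-- Configured size of the redo log buffer, in bytes
SELECT name, value FROM v$parameter WHERE name = 'log_buffer';
-- Total redo generated and whether LGWR ever fell behind
-- (a growing 'redo buffer allocation retries' means sessions waited for buffer space)
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo size', 'redo entries', 'redo buffer allocation retries');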
Crash and instance recovery involves the following:
•     Roll-Forward
The database reapplies all changes (committed and uncommitted) recorded in the online redo log files since the last checkpoint.
•     Roll-Backward
The database rolls back the uncommitted transactions that were applied during the roll-forward.
What also comes into play in the event of a crash is the MTTR (mean time to recover), for which there is an advisory utility as of Oracle 10g. Oracle recommends using the FAST_START_MTTR_TARGET initialization parameter to control how long crash recovery, and therefore startup after an instance failure, is allowed to take.
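As a rough sketch of how the advisory side works (the 60-second target below is just an illustrative value, not a recommendation):

-- Ask the instance to keep estimated crash recovery time around 60 seconds
ALTER SYSTEM SET fast_start_mttr_target = 60 SCOPE = BOTH;
-- Compare the target with the instance's current estimate
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;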
From what I understand, uncommitted transactions will be lost, or more precisely undone, after an instance crash. That's why it is good practice to commit transactions explicitly, unless you plan to roll them back. By the way, every DDL statement, and a normal exit from SQL*Plus, implies an automatic commit.
Edited by: Markus Waldorf on Sep 4, 2010 10:56 AM

Similar Messages

  • Data loss when reading v7 hashes with db 4.7.25

    On a v7 hash created with db_load from db 3.2.9, db_dump from 4.7.25 will corrupt the database file unless either -r or -R is specified. With a v8 btree, 4.7's db_dump -p will output correctly but a db_dump -d a will not. In either case, dumping the v8 btree does not appear to lead to data loss.
    4.7's db_verify on the v7 hash gives an error about an impossible max_buckets setting in the metadata page. It seems to do this because it clobbers the last_pgno setting
    and compares 1, for instance, against 0 instead of 2.
    This patch prevents the clobbering with a 0:
    diff --git a/btree/bt_open.c b/btree/bt_open.c
    index f03652d..d77c7d6 100644
    --- a/btree/bt_open.c
    +++ b/btree/bt_open.c
    @@ -314,7 +314,7 @@ __bam_read_root(dbp, ip, txn, base_pgno, flags)
    t->bt_meta = base_pgno;
    t->bt_root = meta->root;
    - if (PGNO(meta) == PGNO_BASE_MD && !F_ISSET(dbp, DB_AM_RECOVER))
    + if (PGNO(meta) == PGNO_BASE_MD && meta->dbmeta.last_pgno > 0 && !F_ISSET(dbp, DB_AM_RECOVER))
    __memp_set_last_pgno(mpf, meta->dbmeta.last_pgno);
    } else {
    DB_ASSERT(dbp->env,
    diff --git a/hash/hash_open.c b/hash/hash_open.c
    index f5e1d7f..769b583 100644
    --- a/hash/hash_open.c
    +++ b/hash/hash_open.c
    @@ -110,6 +110,7 @@ __ham_open(dbp, ip, txn, name, base_pgno, flags)
    if (F_ISSET(&hcp->hdr->dbmeta, DB_HASH_SUBDB))
    F_SET(dbp, DB_AM_SUBDB);
    if (PGNO(hcp->hdr) == PGNO_BASE_MD &&
    + hcp->hdr->dbmeta.last_pgno > 0 &&
    !F_ISSET(dbp, DB_AM_RECOVER))
    __memp_set_last_pgno(dbp->mpf,
    hcp->hdr->dbmeta.last_pgno);
    1) Is this a correct/good fix?
    2) Why is db_dump writing to database files?

    Hi,
    Do you see the same symptoms if you use the Berkeley DB upgrade utility or API?
    http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/am_upgrade.html
    Regards,
    Alex Gorrod
    Oracle Berkeley DB

  • Can you minimize logging in a SSIS data flow when the database is in a SQL Server 2012 Always On Group?

    We have a file that we load first into a staging database and then into a production database; it contains over 5 million rows. Both databases belong to a SQL Server 2012 AG. We would like to minimize the logging in the staging database but at
    the same time keep the staging database in the AG. I know about fast load and setting the buffer settings in SSIS, but I've read that this doesn't work on replicated tables and I am assuming that applies to the AG as well.
    Are there any articles or someone's personal experiences with this type of scenario that could point us in the right direction and offset some of the initial data load into staging by minimizing logging?
    Thanks,
    Sue

    Hi Sue,
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay might be expected while the job is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Katherine Xiong
    TechNet Community Support

  • HT4972 Data loss when updating OS to 5.1.1

    Hi,
    I have a critical problem, i hope anyone can help me.
    I have updated my iPhone from 4.3.3 to 5.1.1, but when iTunes tried to back up my data it gave me an error, and then it updated the OS to 5.1.1 anyway. When I restarted the iPhone all my data was deleted (contacts, photos, notes, ...). Is there any way to restore my data??
    Thanks in advance.

    Sorry to hear about the problem with the iOS upgrade. Unfortunately there is NO way you can downgrade the iOS on the phone. Try a restart, reset and restore of the phone again; that should fix the issue.

  • How to avoid data loss when an action is perfomed ....

    hi,
    I am using a dynamic tab. Each tab contains a separate JSP page (the JSP page is included for the corresponding tab). Each page can contain more than 25 fields. The problem is, for example, I select some checkbox in the first tab, then go to the second tab and do some insert operations; when I come back to the first tab, the checkbox I had selected or the text I had entered should still be there. If this can be solved by using AJAX, please guide me.
    Tools I am using: JSP, Struts.
    Looking forward to hearing how to solve this problem.

    Hi,
    See to it that when you return to the tab, the form values are set back on the page.
    I mean, if you are using a form bean for your JSP, use the name attribute of the Struts html tag and give the form bean name to that name attribute. I hope this solves your problem.
    Thanks,
    With regards,
    Shekhar

  • Question: Will non committed persistence data loss if my application which is using Kodo/JDO crashes???

    Hi,
    I am very new to JDO and Kodo and I am still learning. I have a user
    specification that requires no data loss when the application crashes. If I
    am developing my application using Kodo for data access layer, when my
    application crashes just because and needs to restart, what happen to all
    the persistence data that have not committed to database??
    Vivian

    I am very new to JDO and Kodo and I am still learning. I have a user
    specification that requires no data loss when the application crashes.
    If I am developing my application using Kodo for data access layer, when
    my application crashes just because and needs to restart, what happen to
    all the persistence data that have not committed to database??
    If an app crashes, all current transactions will be aborted. There is a
    difference between data loss and aborting the current transaction. Data
    loss implies losing some persistent data -- data that resides in the
    database. That won't happen with Kodo.
    You will, however, lose any changes that have not been committed to the
    database yet. This is a good thing. You absolutely DO NOT want an
    unfinished transaction to be recorded, because that could violate the
    integrity of your data. Consider a transaction that decrements from one
    bank account and increments another to implement a funds transfer. You
    certainly wouldn't want to record the decrement unless you are absolutely
    sure the increment would be recorded too!
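    To make that concrete with a plain SQL sketch (the table and amounts are hypothetical, not from Kodo):
    -- Hypothetical funds transfer: both updates commit together or not at all
    UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
    COMMIT;
    -- If the application or instance dies before the COMMIT, recovery rolls both
    -- updates back, so the persistent data stays consistent.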

  • Safest Way to Penetration Test an Oracle DB with Potential Data Loss

    Hi,
    I was wondering what the safest way is to protect Oracle from data loss when running a web application scan. We currently have an external company about to perform a web application scan and they warned us of potential data loss. However, we can't afford much downtime and our storage doesn't support things such as copy-on-write. What would you recommend? Do you think that something like putting the database in read-only mode for the duration of the test (2 hours) and enabling audit on all actions would be sufficient (we could then review the audit to see if any unauthorized calls were made)? Thanks.

    If you are not running live you might consider restoring your database to a point before the test, but you need to have confidence this would work.
    I assume you are running live for the duration of the test.
    Going read-only might invalidate the test, and your application might not be able to run read-only without generating errors.
    Examine and be aware of the flashback technologies available at your database version and which ones might be useful (see the restore point sketch further down). In this context, increasing the undo space/retention target might be helpful, but don't dash off changing things at the last minute.
    Ensure you have checked out how to use LogMiner.
    Consider not continuously updating any standby database you have until the test is complete.
    Ensure your most recent backup is successful, that you have checked your restore procedures, and that you have contingency plans in place.
    In practice the web penetration test may attempt to change a small amount of data in a small number of records, but the agreement probably means they are not liable if they drop a schema in the database!
    If you have to correct data following their test then do so carefully. Doing the wrong thing (especially in a panic) can make a situation worse, especially if you are doing something you are not familiar with. Often it may be better to correct the data loss through the application itself.
    If you do turn on auditing, be aware of what it gives you before you turn it on, and beware of any space implications.
    I notice you have only recently registered on the site; this may mean you don't have much experience with Oracle, and you may be more of a system administrator, for instance. No disrespect in that whatsoever. However, especially if this is the case, remember that in my opinion dashing to change something at the last minute statistically often does more harm than good overall and may be harder to undo.
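    A minimal sketch of the restore point approach, assuming 10g or later, ARCHIVELOG mode and enough flash recovery area space (the restore point name is illustrative):
    -- Before the test: a guaranteed restore point keeps the flashback logs needed
    -- to rewind the whole database to this moment.
    CREATE RESTORE POINT before_pentest GUARANTEE FLASHBACK DATABASE;
    -- If the test damages data (run while the database is MOUNTED):
    --   SHUTDOWN IMMEDIATE;
    --   STARTUP MOUNT;
    --   FLASHBACK DATABASE TO RESTORE POINT before_pentest;
    --   ALTER DATABASE OPEN RESETLOGS;
    -- If everything is fine afterwards, release the space:
    DROP RESTORE POINT before_pentest;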
    Hope this helps.
    bigdelboy
    Edited by: bigdelboy on 28-Mar-2009 01:18
    Edited by: bigdelboy on 28-Mar-2009 01:22

  • The character when upgrade database

    Hi all,
    I have a confusing problem.
    The old Oracle database's character set is UTF8. Now I am upgrading the Oracle DB from 8 to 10gR2, and my question is:
    if I set the character set of the Oracle 10gR2 DB to AL32UTF8, then after importing the dump file from the old DB into the 10gR2 DB, is any data lost, and how badly is it affected?
    Thanks all.

    Both UTF8 and AL32UTF8 support the full character repertoire of Unicode. They only differ in the way of how supplementary characters (characters with Unicode codepoints above U+FFFF) are encoded. Assuming the original UTF8 data is valid, there should be no data loss when moving from UTF8 to AL32UTF8. If you upgrade the database to 10.2.0.4/10.2.0.5 first, you can use DMU to perform this character set migration and ensure any invalid data is detected and handled properly along the process.
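    For what it's worth, you can confirm the character set on both databases before and after the move with a quick query against the standard dictionary view:
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');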

  • Data Loss in DB to DB Transformation in ODI

    Hi,
    I am facing data loss when I try a transformation for a DB-to-DB mapping in ODI.
    I have two tables in two different schemas with the following specifications. In the ODI Designer model I have set the type of PLACE to NUMBER in the target and VARCHAR2 in the source, and done the mapping accordingly. It works successfully when I load the row ('12', 'ani', '12000', '55').
    Now for testing I am giving the rows ('1', 'ani', '12000', '55') and ('2', 'priya', '15000', '65t'), and when I execute the interface it gives the expected error (ORA-01722: invalid number) in the "Insert flow into I$ table" task. My C$ table is populated with the data from the source, but the E$, I$ and target tables are not populated.
    When I then put the rows ('3', 'shubham', '12000', '56') and ('4', 'shan', '12000', '59') in the source, it completes successfully: the rows are deleted from the C$ table and inserted into the target table.
    Now my question is: where have the rows ('1', 'ani', '12000', '55') and ('2', 'priya', '15000', '65t') gone? If they are lost, what is the recoverable table, so that no data loss takes place?
    The codes for source and target tables are as follows:
    source table code:
    CREATE TABLE "DEF"."SOURCE_TEST" (
        "EMP_ID"   NUMBER(9,0),
        "EMP_NAME" VARCHAR2(20 BYTE),
        "SAL"      NUMBER(9,0),
        "PLACE"    VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
        )
    inserted data:
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('1', 'ani', '12000', '55')
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('2', 'priya', '15000', '65t')
    Target table code:
    CREATE TABLE "ABC"."TARGET_TEST" (
        "EMP_ID"     NUMBER(9,0),
        "EMP_NAME"   VARCHAR2(20 BYTE),
        "YEARLY_SAL" NUMBER(9,0),
        "BONUS"      NUMBER(9,0),
        "PLACE"      NUMBER(9,0),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
        )
    Thanks.

    So, first you have data in "DEF"."SOURCE_TEST".
    You then run your interface, and the data is moved into "ABC"."TARGET_TEST" if the interface executes successfully with no errors.
    Correct? - no data loss
    But if you're saying that you need to handle records which are going to cause the "invalid number" error, then you should read up on 'flow' and 'static' control and how to flag errors before loading them. Flow and Static Control allows ODI to identify erroneous records prior to loading - they'll be put in the E$ table for you to deal with later.
    If you haven't already, I'd encourage you to take a look at the documentation on this:
    Implementing Data Quality Control
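    Incidentally, the ORA-01722 itself has nothing to do with ODI; it is just the implicit VARCHAR2-to-NUMBER conversion on PLACE failing, which you can reproduce directly:
    SELECT TO_NUMBER('55')  FROM dual;   -- works
    SELECT TO_NUMBER('65t') FROM dual;   -- raises ORA-01722: invalid number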

  • External Drives for Mac Experiencing Data Loss with Maverick OS -- UPDATED FOR NOVEMBER 6, 2013

    --- Updated November 6, 2013 ---
    On October 30th, 2013 Western Digital informed registered customers of affected products via E-mail regarding reports of Western Digital and other external HDD products experiencing data loss when updating to OS X Mavericks (10.9).  Our investigation to date has found that for a small percentage of customers that have the WD Drive Manager, WD Raid Manager and/or WD SmartWare software applications installed on their Mac, there can be cases of a repartition and reformat of their Direct Attached Storage (DAS) devices without customer acknowledgement which can result in data loss.  
    WD has been tracking this issue closely through our WD Forum and through our Technical Support hotline and the occurrence rate of this event has been very low.  A specific set of conditions and timing sequences between the OS and the WD software utilities has to occur to cause this issue.  Should this event occur, the data on the product can likely be recovered with a third party software utility if the customer stops using the device immediately after the OS X Mavericks (10.9) upgrade.  WD will be issuing updated versions of these software applications that resolve this issue.
    WD strongly urges our customers to uninstall these software applications before updating to OS X Mavericks (10.9), or delay upgrading until we provide an update to the applications.  If you have already upgraded to Mavericks,  WD recommends that you remove these applications and restart your computer.  If you have already upgraded to Mavericks and are experiencing difficulty in accessing your external hard drive,  please do not save anything to the drive, disconnect the drive from your computer, and contact Western Digital Customer Service at http://support.wdc.com/contact/.
    --- Updated November 5, 2013 ---
    There are reports of Western Digital and other external HDD products experiencing data loss when updating to Apple's OS X Mavericks (10.9).  Western Digital is urgently investigating these reports and the possible connection to the WD Drive Manager, WD Raid Manager and WD SmartWare software applications. 
    Until the issue is understood and the cause identified, WD strongly urges our customers to uninstall these software applications on their systems before updating to OS X Mavericks (10.9), or delay upgrading.  If you have already upgraded to Mavericks, WD recommends that you remove these applications and restart your computer. WD has removed these software applications from our web site solely as a precaution as we investigate this issue.
    If you have already upgraded to Mavericks and are experiencing difficulty in accessing your external hard drive, please do not save anything to the drive, disconnect the drive from your computer, and contact Western Digital Customer Service at http://support.wdc.com/country/ for further assistance.
    You can now download the WD Software Uninstaller.  This utility will remove Mac WD SmartWare and WD Drive Manager software.  You can find the uninstaller under any of the Mac Drive Downloads sections such as the My Book Studio below.
    http://support.wdc.com/product/download.asp?groupid=124&sid=214&lang=en

    I agree. After installing Mavericks I was troubleshooting and reinstalling drivers for days. Many apps did not work anymore, although the updates are slowly arriving. In total divergence from the old Apple philosophy, I had to use endless library-cleaning Terminal commands to get a new Canon network printer running again; Canon had already provided the procedures after updating from OS X 10.6 to 10.7. Then, the first time I used the SuperDrive (CD/DVD drive) to try to burn an audio CD, I got baffling error messages (Drive already used..). After this, the RAID 1 status of the two MyBook archives changed to JBOD. The changes from OS X 10.8 to 10.9 I find unnecessary (iBooks could be an app; Maps we already have on other channels). Some changes are even a step back (Calendar graphics), and the much more user-friendly office suite iWork is free but degraded and of limited use!
    MadOverlord wrote:
    I have had multiple cases of data loss on WD drives since upgrading to Mavericks, and I do not use any WD software. I was using a 4-bay PROBOX USB3 enclosure with 4 independent drives, each with 1 volume, no RAID. I have managed to copy large files off the WD drive onto my MBP internal drive using the Finder, and then found that they are not identical. This problem is intermittent, does not generate any Finder errors, and the drives all show 100% health via SMART. The configuration was rock-solid before Mavericks, and has trashed the directories of 4 drives since I upgraded last week. I am attempting to find a solid reproduction case for this, but it is difficult. I have not been able to replicate the issue on another 2-bay USB2 dock that I have (different manufacturer). One thing is clear: only one thing changed, I upgraded to Mavericks.

  • Location of oracle database crash date time

    Hi,
    After a system crash happens and the Oracle database is recovered, where can I find the entry in Oracle that shows the time of the crash?
    I tried the following to get the system crash date and time:
    When I audit a user by logon and the user logs on, Oracle creates an entry in the dba_audit_session view with the logon time in the TIMESTAMP column; but if the system crashes while the user is logged in, no entry is made in the LOGOFF_TIME column of dba_audit_session.
    In the following example the ORCL user is being audited by logon.
    ORCL logs in to the system at 11:36 am and my system crashes.
    When ORCL logs in again after instance recovery, LOGOFF_TIME is blank, and a new entry for ORCL is shown.
    SQL > select username,to_char(timestamp,'dd-mm-yyyy hh:mi:ss'),to_char(logoff_time,'dd-mm-yyyy hh:mi:ss') from dba_audit_session where username like 'TMS' order by timestamp;
    USERNAME TO_CHAR(TIMESTAMP,' TO_CHAR(LOGOFF_TIME
    ORCL 16-10-2012 11:36:16
    ORCL 16-10-2012 11:46:33
    My aim is to get the date & time of database crash.

    Hi;
    As mentioned here, the only way is to check the alert.log. If ASM is in use, you also need to check the ASM alert log and related log files.
    If you have OSWatcher on your system, you can also check which processes were running and what happened on your server.
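    If it helps, the database can also tell you where its alert log lives; the first query is the pre-11g style parameter, and on 11g and later V$DIAG_INFO shows the ADR paths:
    -- Pre-11g: directory containing alert_<SID>.log
    SELECT value FROM v$parameter WHERE name = 'background_dump_dest';
    -- 11g and later: ADR trace and alert locations
    SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Diag Alert');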
    PS: Please don't forget to change the thread status to answered, if possible, when you believe your thread has been answered; it saves other forum users time when they are searching for open, unanswered questions. Thanks for understanding.
    Regards
    Helios

  • ORA-28150 when accessing data from a remote database

    Portal Version: Portal 3.0.9.8.0
    RDBMS Version: 8.1.6.3.0
    OS/Vers. Where Portal is Installed:: Solaris 2.6
    Error Number(s):: ORA-28150
    I have a problem with using a database link to access a table in
    a remote database. So long as the dblink uses explicit logins
    then everything works correctly. When the dblink does not have a
    username then I get the ORA-28150 message. The database link is
    always public. A synonym is created locally that points to a
    synonym in the remote database. I am using the same Oracle user
    in both databases. The Oracle portal lightweight user has this
    same Oracle user as its default schema. The contents of the
    remote table are always visible to sqlplus, both when the link
    has a username and when it doesn't have a username.
    All the databases involved are on the same version of Oracle.
    I'm not sure which Oracle login is being used to access the
    remote database: if my lightweight user has a default schema
    of 'xyz', does Portal then use 'xyz' to access the remote
    database? I would be very grateful for any help or pointers that
    might help to solve this problem.
    James
    To further clarify this, both my local and remote databases
    schemas are owned by the same login.
    The remote table has a public synonym.
    The link is public but uses default rather than explicit logins.
    The local table has a public synonym that points to the remote
    synonym via the database link.
    If I change the link to have an explicit login then everything
    works correctly.
    I can view the data in the remote database with TOAD and with
    sqlplus even when the database link has default login rather
    than explicit login.
    This seems to point to Portal as being the culprit. Can anyone
    tell me whether default logins can be used across database links
    with portal?
    TIA
    James
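    For reference, the difference under discussion is whether the link carries its own credentials; a rough sketch (link names, user and TNS alias are illustrative):
    -- Fixed-user link: always connects to the remote database as APPUSER
    CREATE PUBLIC DATABASE LINK sales_link
      CONNECT TO appuser IDENTIFIED BY secret USING 'SALESDB';
    -- Connected-user link: the calling session's own credentials must be valid remotely
    CREATE PUBLIC DATABASE LINK sales_link2 USING 'SALESDB';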

    832019 wrote:
    One way to do this is by creating a database link and joining the two tables directly. But this option is ruled out. So please suggest me some way of doing this.
    Thus you MUST use two connection strings.
    Thus you are going to be either constructing some intricate SQL dynamically or you are going to be dragging a lot of data over the wire and doing an in memory search.
    Although realistically that is what the database link table would have done as well.
    Might be better to look at moving the table data from one database to the other. Depends on size of course.

  • When is the Next update of IOS..? ****** of with Data Loss issue

    I have been using an iPhone 4 for the past 2 years in India on the Docomo network. Since the update to iOS 5 and 5.0.1 I
    have been facing frequent data loss on the phone!!
    For the time being I can work around it by turning the cellular connection off and on again, which resets the data connection...!!!

    Since you haven't bothered to describe what this mysterious "data connection issue" is, we have no way to confirm or deny your statement that the problem is widespread.
    The fact remains, you're using it on an unsupported carrier.
    If you'd like to try and describe WHAT THE PROBLEM IS instead of getting defensive about it, we might be able to help. Otherwise, your initial question has been answered. No one here can tell you when the next update to iOS will be released.

  • When export mode is full and userid is system, data loss happens.

    Hi!
    I'm running Oracle 7.3.4 on HP-UX. I have a problem when I try a full export of my DB.
    I wrote a PHP program (a kind of ticket management program) and it uses WEBDB's data in Oracle.
    Now, when I export my WEBDB data with userid="WEBDB", the log shows:
    Connected to: Oracle7 Server Release 7.3.4.3.0 - Production
    With the distributed and parallel query options
    PL/SQL Release 2.3.4.3.0 - Production
    Export done in KO16KSC5601 character set
    . about to export WEBDB's tables via Conventional Path ...
    . . exporting table IMSI
    EXP-00008: ORACLE error 8103 encountered
    ORA-08103: object no longer exists
    . . exporting table TEMP 334 rows exported
    When I do a full export of my DB with userid="SYSTEM", the log shows:
    . about to export WEBDB's tables via Conventional Path ...
    . . exporting table IMSI
    EXP-00008: ORACLE error 604 encountered
    ORA-00604: error occurred at recursive SQL level 1
    ORA-08103: object no longer exists
    . . exporting table TEMP 331 rows exported
    The actual number of rows in TEMP is 334. I created the "IMSI" table 2-3 days ago.
    What a terrible row & table loss!
    Please, what do I do? I need to do a full export with no data loss...

    If it's anything like exporting in Release 2, you will have to export using SYS, not another user.
    Hope that helps ;)
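    One quick check worth adding, since ORA-08103 usually means the segment was dropped or truncated while the export was reading it, is whether IMSI still exists and is valid (standard dictionary view):
    SELECT owner, object_name, object_type, status, last_ddl_time
      FROM dba_objects
     WHERE object_name = 'IMSI';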

  • ODBC Data Source Administrator (64-bit) crashes when I try to invoke PostgreSQL ANSI(x64) or Unicode(x64) drivers

    ODBC Data Source Administrator (64-bit) crashes with the following operation:
    1. Open 'ODBC Data Source Administrator (64-bit).
    2. Choose 'User DSN' tab
    3. Click 'Add' on right of windows. 'Create New Data Source' window opens
    4. Select driver 'PostgreSQL ANSI(x64)'
    5. Click 'Finish'
    6. I then get the message 'ODBC Administrator has stopped working', with the only option being 'Close program'. A generic, unhelpful message is given: "A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available."
    Using the 32-bit ODBC Data Source Administrator and the 32-bit driver there is no problem. I also had no problem with this under Windows 7 using the 64-bit drivers. What might cause this? Is there a way to debug and determine whether the driver is at fault or whether it's the ODBC Data Source Administrator?
    Further information:
    This Windows 8 Enterprise 64 bit installation was an upgrade from Windows 7 Enterprise 64 bit.
    The PostgreSQL ANSI(x64) driver can be installed from: http://www.postgresql.org/ftp/odbc/versions/msi/
    I am using version 'psqlodbc_09_01_0200-x64'
    Please request any further information that might be useful.
    Regards,
    Tom.
    NB: Reposted from the Windows 8 Hardware Compatibility forum. It's not really a hardware problem.

    Hi,
    This type of issue is more related to the SQL product. You may want to post it on the SQL forum.
    http://social.msdn.microsoft.com/Forums/en/category/sqlserver
    Kim Zhou
    TechNet Community Support
