Standby Database Goes Corrupt
Hi,
I'm new to Data Guard. I've installed Oracle 10g Data Guard on the Solaris platform.
While doing some operations I noticed that the control file of the standby database had become corrupted. I immediately took a backup of it (by renaming it), created a new control file from the primary database, and placed it at the same location with the same name.
When I restarted the standby database, it gave me a "datafile is old" error; I understand this is because the SCN changed between the new control file and the datafiles.
Currently there is nothing in either database; both (primary and standby) are blank and will go into production in a few days.
My question is: how do I synchronize them and get my standby database back into working condition?
Note: >> If required, I can shut down the primary database.
>> The database is running in archivelog mode, but the standby was corrupted a while back and I don't know the exact date of the corruption.
While searching, I found the following suggestion on a site:
Create a standby control file from the primary database and place it on the standby.
Copy all datafiles, index files, and redo log files from the primary to the standby, and then try to start it. Now please suggest: will these steps get my standby database working? If not, please provide the correct steps.
PS: Both my databases are currently empty.
Thanks a lot for your help.
Mate, I suggest you do the following:
1. Shut down the primary database and copy all its files to the standby database location.
2. STARTUP MOUNT the primary database and create the standby control file:
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/<location>/standby.ctl';
3. Copy it to the control file locations of your standby database, delete the old control files there, and copy the standby control file in as control01.ctl, control02.ctl, and control03.ctl.
4. Then STARTUP NOMOUNT your standby and mount it:
ALTER DATABASE MOUNT STANDBY DATABASE;
5. Get the connectivity between your databases working by connecting to each database from the other one, i.e. from the SQL prompt on the primary, connect to the standby; from the SQL prompt on the standby, connect to the primary.
6. If that is fine, open up the primary:
ALTER DATABASE OPEN;
7. On the standby database, start managed recovery:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
and it should go.
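The rebuild described above can be sketched as one sequence (the control file path is a placeholder; adjust names and locations for your environment):

```sql
-- On the primary, mounted, create a standby control file
-- ('/u01/standby.ctl' is a placeholder path):
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/u01/standby.ctl';

-- After copying datafiles, redo logs, and the standby control file
-- to the standby host, on the standby instance:
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;

-- Back on the primary:
ALTER DATABASE OPEN;

-- On the standby, start managed recovery:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```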
Similar Messages
-
Data block corrupted on standby database (logical corruption)
Hi all,
we are getting the error below on our DR site; it is a manually maintained physical standby database.
The following error has occurred:
ORA-01578: ORACLE data block corrupted (file # 3, block # 3236947)
ORA-01110: data file 3: '/bkp/oradata/orcl_raw_cadata01'
ORA-26040: Data block was loaded using the NOLOGGING option
I checked in the primary database and found that there are some objects which are not being logged to the redo logfiles:
SQL> select table_name, index_name, logging from dba_indexes where logging = 'NO';
TABLE_NAME           INDEX_NAME                 LOG
MENU_MENUS           NUX_MENU_MENUS_01          NO
MENU_USER_MENUS      MENU_USER_MENUS_X          NO
OM_CITY              IDM_OM_CITY_CITY_NAME      NO
OM_EMPLOYER          EMPLR_CODE_PK              NO
OM_EMPLOYER          IDM_EMPLR_EMPLR_NAME       NO
OM_STUDENT_HEAD      OM_STUDENT_HEAD_HEAD_UK01  NO
OT_DAK_ENTRY_DETL    DED_SYS_ID_PK              NO
OT_DAK_ENTRY_HEAD    DEH_SYS_ID_PK              NO
OT_DAK_ENTRY_HEAD    IDM_DEH_DT_APPL_REGION     NO
OT_DAK_ENTRY_HEAD    IDM_DEH_REGION_CODE        NO
OT_DAK_REFUNDS_DETL  DRD_SYS_ID_PK              NO
OT_MEM_FEE_COL_DETL  IDM_MFCD_MFCH_SYS_ID       NO
OM_STUDENT_HEAD      IDM_STUD_COURSE            NO
13 rows selected.
So the main problem is in the OM_EMPLOYER table. If I drop the indexes on that table, recreate them with the LOGGING clause, and then apply the archived logs to the DR site, will the problem be resolved?
Please suggest.
Hi,
First, how did you confirm that it was that index only? Can you post the output of:
SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents WHERE file_id = 3 and 3236947 between block_id
AND block_id + blocks - 1;
This query can take time; if you are sure it is the index, don't run it.
Secondly, when you drop and recreate the index, the operation will be logged in the redo logfile. That information will also be logged in the archivelog file, as it is a replica of the redo logfile. Then, when you apply that archive log manually on the DR site, it will drop the index and recreate it using the same SQL.
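For what it's worth, a hedged sketch of the fix on the primary, assuming the dba_extents query confirms one of the NOLOGGING indexes listed above (the index name is taken from that listing):

```sql
-- Rebuild the affected index with LOGGING so the rebuild generates
-- full redo and can be applied on the standby via the archived logs:
ALTER INDEX IDM_EMPLR_EMPLR_NAME REBUILD LOGGING;

-- To prevent recurrence across all objects, consider forcing redo
-- generation database-wide on the primary:
ALTER DATABASE FORCE LOGGING;
```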
HTH
Anand -
Standby database datafile corruption.
Hi:
I am getting the following errors on my standby db. Could you please tell me how to handle the situation? I am hesitant to try anything, as it might end up corrupting the datafile. Please advise.
MRP0: Background Media Recovery terminated with error 1237
Fri Nov 30 09:15:25 2007
Errors in file /u01/prod/oraprod/proddb/9.2.0/admin/PROD_hunter/bdump/prod2_mrp0_19630.trc:
ORA-01237: cannot extend datafile 392
ORA-01110: data file 392: '/u03/prod/oraprod/proddata/a_txn_data01.dbf'
ORA-19502: write error on file "/u03/prod/oraprod/proddata/a_txn_data01.dbf", blockno 424577 (blocksize=8192)
ORA-27072: skgfdisp: I/O error
Linux Error: 4: Interrupted system call
Additional information: 424576
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
MRP0: Background Media Recovery process shutdown
Check this error:
ORA-01237: cannot extend datafile string
Cause: An operating system error occurred during the resize.
Action: Fix the cause of the operating system error and retry the command.
Is there any DDL statement on your primary resizing that datafile? Do you have enough space on the standby server? -
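A quick hedged check on the standby for the file the MRP complained about (file 392 from the trace above); free space on the mount point itself has to be checked at the OS level, e.g. with df -k:

```sql
-- On the mounted standby, compare the file's current size with what
-- recovery is trying to extend it to (v$datafile works when mounted):
SELECT file#, name, bytes/1024/1024 AS size_mb
FROM   v$datafile
WHERE  file# = 392;
```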
How can I create a Standby Database going from AIX to Solaris
Hello All,
I need to create a read-only replica of my Oracle 9i database on AIX, on a 9i or greater database on Solaris. The replica must receive the changes to the master database on an interval. The standard Data Guard solution does not support mixed operating systems.
Is there another way to create the replica? I would prefer not to create triggers on the master tables.
Your input is appreciated.
Thanks
Hi,
If you just want replication, try a materialized view. I hope this will work.
Regards
Jomo -
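A minimal sketch of the materialized view approach, assuming a refreshable link from the Solaris replica back to the AIX master (object names, credentials, and the 15-minute interval are all placeholders):

```sql
-- On the Solaris side: a link to the master, and a view that
-- refreshes on a fixed interval:
CREATE DATABASE LINK master_link
  CONNECT TO repl_user IDENTIFIED BY repl_pass
  USING 'MASTER_TNS';

-- COMPLETE refresh needs no setup on the master; FAST refresh would
-- additionally require a materialized view log on the master table:
CREATE MATERIALIZED VIEW orders_mv
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 15/1440
  AS SELECT * FROM orders@master_link;
```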
The password file is getting corrupted after rebooting the standby database
Hi,
The password file is getting corrupted after rebooting the standby database.
Since the databases were then out of sync, I had to copy the pwfile from the primary to the standby to bring them back in sync.
(But the pwfile does not get corrupted every time I reboot the standby.)
Could somebody please explain the reason for the pwfile on the standby database getting corrupted while rebooting ?
Env: Oracle 11g on Windows 7
Thanks in advance
Moderator Action:
This thread was originally posted to the Oracle/Sun Servers HARDWARE forum.
This is definitely not an issue related to any hardware components.
... it's now moved to Database General Questions, hopefully for closer topic alignment. -
I am new to the Data Guard process and i wanted to know the following:
I have a 9i primary DB that ships to a logical standby database. If the logical standby database goes down for whatever reason (e.g. hardware failure) and becomes unreachable, what can I configure to alert me that this has occurred?
Is it something on the primary DB that alerts me that the redo logs are perhaps not being processed? Does one use the Dead Connection Detection parameter SQLNET.EXPIRE_TIME? How do I identify that the primary-to-standby connection is down among so many user connections?
Many Thanks
AM
You will get an error for the archive log destination that points to your standby.
In 10g or 11g using SYNC or ASYNC transport, LGWR reports the issue, and the archiver process reports the same, e.g.:
LGWR: Error 1041 disconnecting from destination LOG_ARCHIVE_DEST_2 standby host 'testsb'
So for the archive destination of your standby, look for errors to that destination and then alert on them.
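A hedged monitoring query on the primary, assuming dest_id 2 is the standby destination as in the LGWR message above:

```sql
-- Anything other than VALID in STATUS, or a non-empty ERROR, is
-- worth alerting on:
SELECT dest_id, dest_name, status, error
FROM   v$archive_dest
WHERE  dest_id = 2;
```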
Hi.
I needed to activate some physical standby databases at the DR site to test whether the applications work correctly. Everything ran correctly; I have only one problem:
When Data Guard was synchronized, the sequences matched:
Ex.
Prod database
select max(sequence#) from v$log_history; Max sequence: 115
Standby database
select max(sequence#) from v$log_history; Max sequence: 114
But the OEM Data Guard monitoring page shows me the sequence the database advanced to during the tests, not the sequence the standby database has actually reached.
How can I solve this issue?
Thanks
Hi Hemant,
I followed the steps of "12.6 Using a Physical Standby Database for Read/Write Testing and Reporting" (Oracle Data Guard Concepts and Administration guide); the guide has a note to first create a guaranteed restore point, then activate the standby database, and after the test restore to that point.
All the steps ran correctly, but when I check the Data Guard status pages in OEM, the *'Last received log'* column has the last sequences generated during the test and the *'Last applied log'* column has a zero. However, if I run the following SQL query on the standby database:
sql> select max(sequence#) from v$log_history;
it shows the correct sequence from before I ran the steps to synchronize with the primary database.
I cleaned the agents on both servers; same data.
After some days, the databases were in sync.
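For reference, the guide's read/write test flow can be sketched roughly as follows (the restore point name is a placeholder, and log shipping from the primary should be deferred during the test):

```sql
-- On the standby, with managed recovery cancelled:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
-- open read/write and run the application tests ...

-- After testing, wind the database back and make it a standby again:
STARTUP MOUNT FORCE;
FLASHBACK DATABASE TO RESTORE POINT before_test;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
STARTUP MOUNT FORCE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```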
Datafile corrupted On standby database
One of the datafiles in the standby database got corrupted. When backing it up with RMAN I got an ORA-15966 error.
I found the corrupted datafile, but I do not have any backups to recover from.
Is there any way to recover from the block corruption?
Hello,
What is your Oracle version? OS? The following link shows how to fix it; you can also search the Oracle documentation for your version.
http://www.mpi-inf.mpg.de/departments/d5/teaching/ss05/is05/oracle/server.920/a96521/repair.htm#11355
Regards -
Block corruption on Standby database
Oracle 10g R2 64-bit on Solaris 10, installed on two database servers, a Sun M5000 and a Sun V890.
The primary and physical standby databases are configured in Max Performance ASYNC mode; log shipping is OK and archive logs are applying.
I opened the standby database in read-only mode; a couple of SQL statements ran successfully, but a few SQL statements throw an error message. Here is the log message:
SQL> select count(1) from inventory_stock;
select count(1) from inventory_stock
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 12, block # 28109)
ORA-01110: data file 12: '/backup1/np13/data/invindx01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
However, there is no error message recorded in the alert log files related to block corruption. Please suggest.
select file#, UNRECOVERABLE_CHANGE#, UNRECOVERABLE_TIME
from V$DATAFILE
where UNRECOVERABLE_TIME is not null
FILE# UNRECOVERABLE_CHANGE# UNRECOVER
4 9.7333E+12 12-SEP-10
5 9.7333E+12 12-SEP-10
6 9.7333E+12 12-SEP-10
7 9.7333E+12 12-SEP-10
9 9.7333E+12 12-SEP-10
12 9.7333E+12 13-SEP-10
13 9.7333E+12 13-SEP-10
14 9.7327E+12 01-SEP-10
15 9.7333E+12 13-SEP-10
17 9.7333E+12 13-SEP-10
22 9.7333E+12 13-SEP-10
23 9.7333E+12 13-SEP-10
24 9.7333E+12 13-SEP-10
32 9.7333E+12 13-SEP-10
33 9.7333E+12 13-SEP-10
34 9.7333E+12 13-SEP-10
35 9.7333E+12 13-SEP-10
41 9.7324E+12 25-AUG-10
42 9.7333E+12 13-SEP-10
43 9.7333E+12 13-SEP-10
44 9.7333E+12 13-SEP-10
45 9.7333E+12 13-SEP-10
57 9.7332E+12 11-SEP-10
60 9.7333E+12 13-SEP-10
62 9.7333E+12 12-SEP-10
63 9.7333E+12 13-SEP-10
65 9.7333E+12 13-SEP-10
66 9.7333E+12 13-SEP-10
68 9.7333E+12 13-SEP-10
70 9.7333E+12 13-SEP-10
71 9.7333E+12 12-SEP-10
73 9.7333E+12 12-SEP-10
74 9.7333E+12 13-SEP-10
75 9.7333E+12 12-SEP-10
77 9.7324E+12 25-AUG-10
79 9.7333E+12 13-SEP-10
83 9.7333E+12 13-SEP-10
84 9.7333E+12 13-SEP-10
86 9.7333E+12 13-SEP-10
87 9.7333E+12 12-SEP-10
89 9.7333E+12 12-SEP-10 -
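Since ORA-26040 blocks were never written to redo, they cannot be recovered by applying logs; the usual fix is to refresh the affected datafiles on the standby from the primary. A hedged RMAN sketch for one of the files listed above (paths are placeholders):

```sql
-- On the primary: image copy of an affected file, e.g. file 12:
BACKUP AS COPY DATAFILE 12 FORMAT '/backup1/df12.cpy';

-- Ship the copy to the standby host; then on the standby, with
-- managed recovery cancelled:
CATALOG DATAFILECOPY '/backup1/df12.cpy';
RESTORE DATAFILE 12;

-- Restart managed recovery to roll the file forward, and consider
-- ALTER DATABASE FORCE LOGGING on the primary to stop recurrence.
```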
Corrupt Block and Standby Database
Guys,
I created a standby database recently. I then discovered a corrupt block on my primary; I assume the corruption is also on the standby, since the files were copied. If I repair the corrupt block on the primary, how do I move the correction to the standby? Do I have to recreate it?
DB version is 9iR2
Delton
Hi Delton,
How do you plan to repair the corrupt block ?
* Drop and re-create the object
* Restore from backup
In both cases, changes are replicated to the standby database, so nothing to worry about. As Sybrand has mentioned, make sure the changes are done with LOGGING option.
Regards
Asif Momen
http://momendba.blogspot.com -
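Either way the repair generates redo that managed recovery replays on the standby; two hedged 9iR2 sketches (object, file, and block numbers are hypothetical):

```sql
-- Option 1: drop and re-create the affected object with LOGGING:
DROP INDEX bad_idx;
CREATE INDEX bad_idx ON some_table (some_col) LOGGING;

-- Option 2: RMAN block-level restore from a backup on the primary:
-- RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 1234;
```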
DBV-0200,block already marked corrupted on physical standby database
Dear all,
We are facing the problem *'DBV-00200: block already marked corrupt'* on the standby database, on all index datafiles. We see this error when running the dbv utility against an Oracle 10.2.0.3 database.
This error is not found on our production database; it is only on the standby.
We cannot find the root cause of this error, so could anybody tell me the cause of it and a solution?
Thanks
Kiran Rane
Hi Ravi,
I checked all indexes on our primary database; some indexes are in LOGGING mode and more than half are in NOLOGGING mode, but I have some doubt about index LOGGING and NOLOGGING modes.
When our primary database was running on the 9.2.0.8 version, this kind of error was not observed, but after upgrading to 10.2.0.3 we are getting it. So does this version have a bug, or is a patch available to avoid this error? Please tell me the exact reason behind it.
Thanks
Kiran Rane. -
Creating standby database using data guard.
Current Env:
Oracle 11.2.0.1
ASM
Size 1.7T and growing.
I'm rebuilding a standby database and need to use RMAN because of a few factors. In the past, I did a file copy, created a standby control file, configured the init.ora, and it always worked.
We had a database with the standby built, but someone issued a flashback and corrupted the standby database.
Because of the size (1.7T and growing) and because we are now using ASM, my research only shows building the standby through RMAN using the DUPLICATE DATABASE command.
I would like to copy a cold backup to an external drive and ship it to the standby site, where I'll do the restore. This is because the time and bandwidth required to rebuild the standby over the network would interfere with operations while the files are being copied.
So what would be the high level steps.
1) get the cold backup
2) create the standby control file
3) ship the data via corrier
4) restore the database
5) and this is where I'm not sure: recover the standby control file - but during the restore of the database the "normal" control file will be opened and perhaps do a checkpoint, making the standby control file useless.
6) recover the standby database.
Has anyone accomplished this?
As much specifics will be helpful. This system is operational and needs to be done right the first time.
thanks,
-Rob
Thank you.
1) I'm going to off load the cold backup to an external drive and have a courier take it to the DR site.
Why? We are replicating the SAN over to the DR site, and when SAN replication runs, Oracle becomes non-responsive. Therefore, sending the data over the pipe is not an option for the standby rebuild. Yes, that's the easy way to do it, but this system is operational and critical to operations, so we will not risk saturating the pipe for any period of time. The pipe got saturated once before and it was not pretty.
2) I'm running a test in my lab to make sure I can create the standby database using the cold backup and RMAN.
3) In the past it was easy: I got a cold backup of the datafiles using the same ksh scripts that have been working for 20 years, copied the files over to the standby site, and put it into managed recovery. This technique has been working fine since Oracle 8i days. ASM threw a huge monkey wrench into the ksh script backups, and now I'm forced to use RMAN. Hey, I'm told it's an okay product, but when it comes to backups I never get fancy; that just makes things more complicated. Okay, I won't complain about ASM anymore; I guess there is an advantage in there somewhere.
-Rob -
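A hedged sketch of the 11.2 RMAN route with a shipped backup, based on the targetless DUPLICATE ... FOR STANDBY BACKUP LOCATION form (paths are placeholders; verify the exact syntax against your 11.2 docs). The standby control file comes from the backup itself, so the concern about the "normal" control file doing a checkpoint does not arise:

```sql
-- On the primary: a backup that includes a standby control file,
-- written to the external drive:
BACKUP DATABASE FORMAT '/ext_drive/db_%U'
  INCLUDE CURRENT CONTROLFILE FOR STANDBY;
BACKUP ARCHIVELOG ALL FORMAT '/ext_drive/arc_%U';

-- Ship the drive; on the standby host, start the auxiliary instance
-- NOMOUNT, then from RMAN connected only to the auxiliary:
-- RMAN> CONNECT AUXILIARY /
DUPLICATE DATABASE FOR STANDBY
  BACKUP LOCATION '/ext_drive'
  NOFILENAMECHECK;
```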
How to delete the foreign archivelogs in a Logical Standby database
How do I remove the foreign archive logs that are being sent to my logical standby database? I have files in the FRA of ASM going back weeks. I thought RMAN would delete them.
I am doing hot backups of the databases to FRA for both databases. Using ASM, FRA, in a Data Guard environment.
I am not backing up anything to tape yet.
The ASM FRA foreign_archivelog directory on the logical standby keeps growing, and nothing gets deleted when I run the following commands every day:
delete expired backup;
delete noprompt force obsolete;
Primary database RMAN settings (Not all of them)
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 9 DAYS;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE DB_UNIQUE_NAME 'WMRTPRD' CONNECT IDENTIFIER 'WMRTPRD_CWY';
CONFIGURE DB_UNIQUE_NAME 'WMRTPRD2' CONNECT IDENTIFIER 'WMRTPRD2_CWY';
CONFIGURE DB_UNIQUE_NAME 'WMRTPRD3' CONNECT IDENTIFIER 'WMRTPRD3_DG';
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
Logical standby database RMAN setting (not all of them)
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 9 DAYS;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
How do I clean up/delete the old ASM foreign_archivelog files?
OK, the default is TRUE, which is what it is set to now,
from DBA_LOGSTDBY_PARAMETERS
LOG_AUTO_DELETE TRUE SYSTEM YES
I am not talking about deleting the archive log files that the logical database itself creates, but the standby archive log files sent to the logical database after they have been applied.
They show up in the alert log as follows, under 'RFS LogMiner: Registered logfile':
RFS[1]: Selected log 4 for thread 1 sequence 159 dbid -86802306 branch 763744382
Thu Jan 12 15:44:57 2012
*RFS LogMiner: Registered logfile [+FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297] to LogM*
iner session id [1]
Thu Jan 12 15:44:58 2012
LOGMINER: Alternate logfile found. Transition to mining archived logfile for session 1 thread 1 sequence 158, +FRA/wmrtprd2/
foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297
LOGMINER: End mining logfile for session 1 thread 1 sequence 158, +FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/threa
d_1_seq_158.322.772386297
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 159, +DG1/wmrtprd2/onlinelog/group_4.284.771760923 -
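With LOG_AUTO_DELETE already TRUE, SQL Apply should purge applied foreign logs itself once its retention target passes; a hedged check and lever (the parameter name is from the 11g DBMS_LOGSTDBY documentation):

```sql
-- On the logical standby: see the current auto-delete settings:
SELECT name, value
FROM   dba_logstdby_parameters
WHERE  name LIKE 'LOG_AUTO%';

-- Shorten how long applied foreign logs are retained before
-- automatic deletion (value is in minutes; default 1440):
EXEC DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DEL_RETENTION_TARGET', '60');
```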
Creation of Standby database on same host
Hi All,
I am using Oracle 9.2.0.1.0 on a Windows XP machine.
*I want to create the standby database for my primary database on the same machine, but the two databases have different instances as well as different database names: 'ASHU' for the primary and 'TEST' for the standby.
I took a backup of all datafiles and copied them to the standby location.
I generated the standby control file as well and copied it to the standby location.
But when I start the standby database, it gives the error below:
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
SQL> alter database mount standby database;
alter database mount standby database
ERROR at line 1:
ORA-01103: database name 'ASHU' in controlfile is not 'TEST'
How to rectify it? Please help me.
(* This is only for learning purposes; that's why I am creating both on the same host.)
SQL> alter database mount standby database;
alter database mount standby database
ERROR at line 1:
ORA-01103: database name 'ASHU' in controlfile is not 'TEST'
How to rectify it? Please help me
>
You do not change the db_name of a physical standby database; it must always have the same name as the primary.
Here is a snippet from my standby initialization file for a 9i installation. Notice the parameter LOCK_NAME_SPACE, which was replaced in 10g by DB_UNIQUE_NAME:
db_file_name_convert=('E:\PRIMA','E:\PHYST')
db_name='PRIMA'
instance_name='PHYST'
service_names='PHYST'
lock_name_space='PHYST'
log_archive_dest_1='LOCATION=E:\PHYST\ARCHIVE'
log_archive_start=TRUE
log_archive_format='%t_%s.arc'
log_file_name_convert=('E:\PRIMA','E:\PHYST')
Kind regards
Uwe
http://uhesse.wordpress.com
Edited by: Uwe Hesse on 27.07.2009 13:36 -
Restored standby database from primary; now no logs are shipped
Hi
We recently had a major network/SAN issue and had to restore our standby database from a backup of the primary. To do this, we restored the database to the standby, created a standby control file on the primary, copied it across to the control file locations, started the standby in recovery mode, and applied/registered the logs manually to bring it back up to speed.
However, no new logs are being shipped across from the primary.
Have we missed a step somewhere?
One thing we've noticed is that there is no RFS process running on the standby:
SQL> SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
PROCESS CLIENT_P SEQUENCE# STATUS
ARCH ARCH 0 CONNECTED
ARCH ARCH 0 CONNECTED
MRP0 N/A 100057 WAIT_FOR_LOG
How do we start this? Or will it only show up once the ARC1 process on the primary is sending files?
The ARC1 process shows at OS level on the primary, but I'm wondering if it's faulty somehow.
There are NO errors in the alert logs on the primary or the standby. There's not even the normal FAL gap-sequence type error; on the standby it just says 'waiting for log' with a number from ages ago. It's as if the primary isn't even talking to the standby. The listener is up and running OK, though...
What else can we check/do?
If we manually copy across files and do an 'alter database register' then they are applied to the standby without issue; there's just no automatic log shipping going on...
Thanks
Ross
Hi all,
Many thanks for all the responses.
The database is 10.2.0.2.0, on AIX 6.
I believe the password files are OK; we've had issues with them previously, and that is always flagged in the alert log on the primary, which is not the case here.
Not set to DEFER on primary; log_archive_dest_2 is set to service="STBY_PHP" optional delay=720 reopen=30 and log_archive_dest_state_2 is set to ENABLE.
I ran those troubleshooting scripts, info from standby:
SQL> @troubleshoot
NAME DISPLAY_VALUE
db_file_name_convert
db_name PHP
db_unique_name PHP
dg_broker_config_file1 /oracle/PHP/102_64/dbs/dr1PHP.dat
dg_broker_config_file2 /oracle/PHP/102_64/dbs/dr2PHP.dat
dg_broker_start FALSE
fal_client STBY_PHP
fal_server PHP
local_listener
log_archive_config
log_archive_dest_2 service=STBY_PHP optional delay=30 reopen=30
log_archive_dest_state_2 DEFER
log_archive_max_processes 2
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest /oracle/PHP/oraarch/PHParch
standby_file_management AUTO
NAME  DB_UNIQUE_NAME  PROTECTION_MODE      DATABASE_ROLE     OPEN_MODE
PHP   PHP             MAXIMUM PERFORMANCE  PHYSICAL STANDBY  MOUNTED
THREAD# MAX(SEQUENCE#)
1 100149
PROCESS STATUS THREAD# SEQUENCE#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
MRP0 WAIT_FOR_LOG 1 100150
NAME VALUE UNIT TIME_COMPUTED
apply finish time day(2) to second(1) interval
apply lag day(2) to second(0) interval
estimated startup time 8 second
standby has been open N
transport lag day(2) to second(0) interval
NAME Size MB Used MB
0 0
On the primary, the script has frozen!! How long should it take? It got as far as this:
SQL> @troubleshoot
NAME DISPLAY_VALUE
db_file_name_convert
db_name PHP
db_unique_name PHP
dg_broker_config_file1 /oracle/PHP/102_64/dbs/dr1PHP.dat
dg_broker_config_file2 /oracle/PHP/102_64/dbs/dr2PHP.dat
dg_broker_start FALSE
fal_client STBY_R1P
fal_server R1P
local_listener
log_archive_config
log_archive_dest_2 service="STBY_PHP" optional delay=720 reopen=30
log_archive_dest_state_2 ENABLE
log_archive_max_processes 2
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest /oracle/PHP/oraarch/PHParch
standby_file_management AUTO
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE SWITCHOVER_STATUS
PHP PHP MAXIMUM PERFORMANCE PRIMARY READ WRITE SESSIONS ACTIVE
THREAD# MAX(SEQUENCE#)
1 100206
NOW - before you say it - :) - yes, I'm aware that fal_client as STBY_R1P and fal_server as R1P look incorrect - they should reference PHP - but it seems it's always been this way! Well, at least for the last 4 years, where it's worked fine; I found an old spfile and it still has R1P set in there...?!?
Any ideas?
Ross
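Given the listings above, one hedged thing to try from the primary: bounce the standby destination to force a fresh transport connection (which should spawn an RFS process on the standby), then watch for errors:

```sql
-- On the primary:
ALTER SYSTEM SET log_archive_dest_state_2 = 'DEFER';
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE';
ALTER SYSTEM SWITCH LOGFILE;

-- Then check transport status and the last error, if any:
SELECT dest_id, status, error
FROM   v$archive_dest
WHERE  dest_id = 2;
```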