Datafiles and control file corrupted
I have oracle 9i installed on windows 2003 server. A couple of days before our windows crashed and the system administrator took the backup of all files and reinstalled windows. After reinstallation of windows I restored the files backed up by system administrator and tried to start the database. I got the following error:
ORA-00227: corrupt block detected in controlfile: (block 1, # blocks 1)
ORA-00202: controlfile: 'D:\ORACLE\ORADATA\ORCL\CONTROL03.CTL'
All the multiplexed copies of control files are also corrupted. We do not have the backup since last few days and the database is in noarchivelog mode. Simple restoration to the old backup would be a great loss. Kindly help me to recover the control file and datafiles.
You could try to re-create the control files, assuming the database itself was closed properly.
However, you should only do this if you know what you are doing.
In any case, back up the database in its current state before attempting anything, in case the recovery attempt makes things worse. Check this backup twice!
In any case: Open a support ticket at Oracle. You will most probably need their help.
In addition, this looks quite bad for your data. You should prepare for data loss (just to be safe; I don't want to scare you).
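For reference, the re-creation route looks roughly like this. This is only a sketch: it assumes a clean shutdown (otherwise RESETLOGS and recovery are needed), and every name and path below is an example, not the poster's actual layout.

```sql
-- Sketch only: recreate a lost/corrupt control file.
-- NORESETLOGS works only if the database was shut down cleanly.
STARTUP NOMOUNT;
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
  LOGFILE
    GROUP 1 'D:\ORACLE\ORADATA\ORCL\REDO01.LOG' SIZE 100M
  DATAFILE
    'D:\ORACLE\ORADATA\ORCL\SYSTEM01.DBF';
ALTER DATABASE OPEN;
```

Every online log group and every datafile of the database must be listed, which is why this should only be attempted with support's guidance.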
Ronny Egner
My Blog: http://blog.ronnyegner-consulting.de
Similar Messages
-
Oracle binary and control files
Hi All,
I want to know whether the Oracle binaries and control files are related in any way.
I have my physical files on a SAN storage and my oracle binary files on a local disk.
If I delete my Oracle binaries and restore them from a backup, will I be able to start my database without any issues, since all my datafiles, control files and redo files are located on the SAN storage?
Oracle binaries and control files are related in one way: the Oracle version is recorded in the control files:
oerr ora 201
00201, 00000, "control file version %s incompatible with ORACLE version %s"
// *Cause: The control file was created by incompatible software.
// *Action: Either restart with a compatible software release or use
// CREATE CONTROLFILE to create a new control file that is
// compatible with this release.
When restoring Oracle binaries on UNIX, also check the setuid bits on the oracle executable, to avoid local connection issues for non-oracle Unix accounts. -
Multiplexing redo logs and control files to a separate diskgroup
General question this one...
I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
+DATA - for datafiles, control files, redo logs
+FRA - for archive logs, flash recovery files, RMAN backups
Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with unnecessary double-write overhead)?
Thoughts?
Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
Something to remember is that redo logs are sequential write, which benefit from a lower RAID overhead (RAID-10, 2 writes per IOP vs RAID-5, 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks sharing data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low volume database would probably not experience any noticeable degraded performance.
In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
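As a sketch of the setup being discussed (diskgroup and path names are assumptions, not taken from any particular system), multiplexing redo and control files across +DATA and a small +ONLINE diskgroup would look like:

```sql
-- Assumed diskgroups +DATA and +ONLINE; sizes are illustrative.
ALTER DATABASE ADD LOGFILE GROUP 4 ('+DATA', '+ONLINE') SIZE 512M;

-- Control file copies in both diskgroups; the new copy must actually be
-- created (e.g. via RMAN restore) before the next restart picks this up.
ALTER SYSTEM SET control_files = '+DATA/orcl/control01.ctl',
                                 '+ONLINE/orcl/control02.ctl' SCOPE=SPFILE;
```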
In my opinion, you can't be too paranoid. :)
Good luck!
K -
Database restore without temp, undo and control files.
Hi All,
You might find this question silly, but I don't know, so I am asking it here.
I have a cold backup of the database. Now I want to create a clone of that database, but I have different paths for the DBFs, so I will create a new control file after restoring the database.
Now, I know that I don't need the control files and tempfiles to be restored. I have 10 undo files in the backup, but on the new clone database I don't need all 10; I want only 5. So can I do the restoration without the undo, temp and control files and add undo and temp later? And if yes, can I add them at mount stage?
This is my first restore, please guide me; it's very urgent.
Nitin Joshi wrote:
If the COLD backup does not include the online redo logs, an ALTER DATABASE OPEN RESETLOGS is required to create these online redo logs. Unfortunately, an OPEN RESETLOGS can only be done after an incomplete recovery or when using a backup control file. Therefore, we do a RECOVER with a CANCEL to simulate an incomplete recovery.
Completely agree with you, Hemant. And the links you've provided I've gone through many times. Excellent description.
I just wanted to know, in the above (OP's) scenario: if he has a complete cold backup (including the online redo logs), does he really need OPEN RESETLOGS or any recovery?
Regards!
No, if you have a cold backup with the online redo log files then I don't think you need to open the database with RESETLOGS. RESETLOGS is only needed after an incomplete recovery, after a recovery using a backup control file, or when you don't have the redo logs.
I completely agree with you that in the given cold backup scenario the undo tablespace would not be part of recovery, and you can
-offline drop the undo tablespace file
-create another undo tablespace and its undo datafile
-point the spfile to the newly created undo tablespace
I think Aman is speaking in the context of restoring and recovering an online database, where the undo tablespace plays a vital role in recovery: the undo blocks roll back the effects of uncommitted transactions previously applied by the rolling-forward phase.
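The three steps above might look like this in SQL*Plus. This is a sketch for the cold-clone scenario only; paths and the tablespace name are placeholders:

```sql
-- 1. Offline drop the old undo datafile (cold clone, file not restored).
ALTER DATABASE DATAFILE 'D:\ORADATA\CLONE\UNDOTBS01.DBF' OFFLINE DROP;

-- 2. Create a fresh undo tablespace with a single datafile.
CREATE UNDO TABLESPACE undotbs2
  DATAFILE 'D:\ORADATA\CLONE\UNDOTBS02.DBF' SIZE 500M;

-- 3. Point the spfile at the new undo tablespace for the next startup.
ALTER SYSTEM SET undo_tablespace = UNDOTBS2 SCOPE=SPFILE;
```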
Khurram -
Where is the location of tablespace file and control file
Hi, all
Where are the tablespace files and control files located? Thanks.
For DataFiles, query DBA_DATA_FILES or V$DATAFILE
For TempFiles, query DBA_TEMP_FILES or V$TEMPFILE
For Online Redo Logs, query v$LOGFILE
For Archived Redo Logs, query v$ARCHIVED_LOG
for Controlfiles, query v$CONTROLFILE
Hemant K Chitale
http://hemantoracledba.blogspot.com -
How to create parameter and control file like filename + date
Hello there
I am trying to create parameter and control file with following command
in SQLPLUS
create pfile='/u03/oradata/WEBDB/backup/initWEBDB.ora' from spfile;
In RMAN
copy current controlfile to '/u03/oradata/WEBDB/backup/cf_longterm.cpy';
how can I put date at the end of filename like
initWEBDB8jan06.ora and cf_longterm8jan06.cpy
Thanks in advance
Lionel
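One way to get a date into the file name from SQL*Plus is a substitution variable; a sketch (the directory is the poster's, the column and variable names are made up):

```sql
-- Build a DDMonYY suffix (e.g. 8Jan06) into a substitution variable.
COLUMN dt NEW_VALUE dt_suffix NOPRINT
SELECT TO_CHAR(SYSDATE, 'FMDDMonYY') dt FROM dual;

-- Use it in the file name; note the double dot: the first terminates
-- the variable, the second is the literal dot before "ora".
CREATE PFILE='/u03/oradata/WEBDB/backup/initWEBDB&dt_suffix..ora' FROM SPFILE;
```

For the RMAN control file copy, BACKUP ... FORMAT supports substitution variables such as %T (date) for backup pieces; for a plain COPY with a dated name, scripting the file name from the operating system shell is the usual fallback.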
How to create redlog and control file at ASM in linux RAC
Hi Experts,
I maintain an Oracle 10g RAC database on ASM, on Red Hat Linux.
I am new, and have some questions.
Normally speaking, Oracle's recommendation for a 10g database is to
create 3 copies of the control file
create at least 2 redo log groups, with mirrored members.
However, I checked and found:
the redo log files are in the FRA (+FLSdisk1) with no mirror
the control file is in the FRA (+FLSDISK1)
the database files are in +DATA1
There are no mirrors for the redo logs.
In EM, I also could not find a place to enter a file name.
We use ASM to hold the database to support RAC.
Do I need to create the redo log files like this:
ALTER DATABASE ADD LOGFILE GROUP 1 ('+FLSdisk1/sale/onlinelog/REDO01.LOG','+FLSdisk1/sale/onlinelog/REDO01_mirror.LOG') SIZE 1000M REUSE;
My boss told me that ASM is reliable.
Do I need to create more directories to arrange the redo log and control files in ASM for a RAC system?
Is the FRA a good place to store the control file and redo log files?
Thanks
JIM
Edited by: user589812 on Jul 3, 2009 3:03 PM
ASM is reliable, but a smart DBA is very careful. If ASM is doing mirroring, this is like RAID doing mirroring. What happens if you accidentally delete one copy? The other one disappears instantly. Not a good idea.
With respect to redo logs you need a minimum of three groups, two members, and one thread per instance. So a 2 node cluster should, at a minimum have 12 physical files.
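Following that rule of thumb, a sketch of adding mirrored groups for a 2-node RAC (diskgroup names taken from the thread, group numbers and sizes illustrative):

```sql
-- Two members per group, spread across two diskgroups, one thread per node;
-- ASM generates the file names within each diskgroup.
ALTER DATABASE ADD LOGFILE THREAD 1
  GROUP 5 ('+FLSdisk1', '+DATA1') SIZE 1000M;
ALTER DATABASE ADD LOGFILE THREAD 2
  GROUP 6 ('+FLSdisk1', '+DATA1') SIZE 1000M;
```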
Not mirroring the redo logs, assuming multiple members, is not as critical. -
Location of Redo log and control files?
Dear all,
I am checking the location of the redo log and control files, and found the redo log files (like log02a.dbf) in the same directory as the datafiles. However, I couldn't find any control files in the datafile directories.
What could be the location of control files?
Amy
select name
from v$controlfile
or
show parameter control_files
Khurram
SQL Loader and control file changes for different users
In the front end of my application I can select a data file and a control file, and load data to the table mentioned in .ctl file. Every user who logs in uses the same .ctl file and so loads onto the same table. Now I want the user to load data onto the table in his own schema. I can get the username of the user currently logged in and i want to insert it into that username.table. So can i copy the contents of the .ctl file into a variable, modify it into username.table in that string and pass that variable as a parameter to the sqlldr command instead of the .ctl file.
Or is there a better way to modify the same control file each time, changing tablename to username.tablename in the .ctl file, before passing it to sqlldr to load the data into the table in the local user's schema?
Thanks and Regards
Thanks for the reply. Users do have their own credentials, but only for the application. All users use a common loader and control file once they log into the application, so irrespective of which user is logged in, he selects the same control file and loads into the same table mentioned in it. I instead want the user to load into the table named in the control file, but in his own schema: username.tablename instead of just the tablename mentioned in the .ctl file.
-
Load data with SQL Loader link field between CSV file and Control File
Hi all,
in a SQL*Loader control file, how do you map fields in the CSV file to columns in the table?
E.g. I want to load records into table TEST (col1, col2, col3) from data in a CSV file, BUT the fields are in different positions. How do I do this?
FILE CSV (with variable position):
test1;prova;pippo;Ferrari;
xx;yy;hello;by;
In the table TEST i want that col1 = 'prova' (xx),
col2 = 'Ferrari' (yy)
col3 = default N
the others data in CSV file are ignored.
so:
load data
infile 'TEST.CSV'
into table TEST
fields terminated by ';'
col1 ?????,
col2 ?????,
col3 CONSTANT "N"
Thanks,
Attilio
With the '?' marks I mean: how can I link col1 with the right column in the CSV file?
Attilio -
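A sketch of the usual answer (assuming the fields line up as described above): declare the unwanted CSV fields as FILLER so they are read from the file but not loaded, and the field names f1/f3 are made up:

```
LOAD DATA
INFILE 'TEST.CSV'
INTO TABLE TEST
FIELDS TERMINATED BY ';'
( f1   FILLER,        -- field 1 ("test1" / "xx"): read but ignored
  col1,               -- field 2 ("prova" / "yy")
  f3   FILLER,        -- field 3 ("pippo" / "hello"): read but ignored
  col2,               -- field 4 ("Ferrari" / "by")
  col3 CONSTANT "N"   -- not read from the file at all
)
```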
Very high log file sequential read and control file sequential read waits?
I have a 10.2.0.4 database with 5 Streams capture processes running to replicate data to another database. However, I am seeing very high
log file sequential read and control file sequential read waits from the capture processes. This is causing slowness, as the database is wasting so much time on these wait events. From the AWR report:
Elapsed: 20.12 (mins)
DB Time: 67.04 (mins)
and From top 5 wait events
Event                          Waits   Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                               1,712                  42.6
log file sequential read      99,909     683         7        17.0               System I/O
log file sync                 49,702     426         9        10.6               Commit
control file sequential read 262,625     384         1         9.6               System I/O
db file sequential read       41,528     378         9         9.4               User I/O
Oracle support hasn't been of much help, other than wasting my 10 days and telling me to try this and try that.
Do you have Streams running in your environment? Are you experiencing these waits? Have you done anything to resolve them?
Thanks
Welcome to the forums.
There is insufficient information in what you have posted to know that your analysis of the situation is correct or anything about your Streams environment.
We don't know what you are replicating. Not size, not volume, not type of capture, not rules, etc.
We don't know the distance over which it is being replicated ... 10 ft. or 10 light years.
We don't have any AWR or ASH data to look at.
etc. etc. etc. If this is what you provided Oracle Support it is no wonder they were unable to help you.
To diagnose this problem, if one exists, requires someone on-site or with a very substantial body of data which you have not provided. The first step is to fill in the answers to all of the obvious first level questions. Then we will likely come back with a second level of questioning.
But when you do ... do not post here. Your questions are not "Database General" they are specific to Streams and there is a Streams forum specifically for them.
Thank you. -
Hi, my database was operating in noarchivelog mode. I have a backup from last night, but all three control files seem to be corrupted. Is there any way I can create a new control file and synchronize it with the rest of the files? If yes, can you please tell me the steps involved in creating a new control file, as I don't have any idea how to do that. Thanks a lot.
Hi,
Set oracle_sid="your sid name"
connect to sqlplus
SQL>conn/as sysdba
start your database in nomount stage
SQL>Startup nomount
Then type the following commands
SQL> CREATE CONTROLFILE REUSE DATABASE "your database name"
MAXLOGFILES 5 --optional
MAXLOGMEMBERS 3 --optional
MAXDATAFILES 14 --optional
MAXINSTANCES 1 --optional
MAXLOGHISTORY 226 --optional
LOGFILE
GROUP 1 'your logfile path' SIZE your logfile size,
GROUP 2 'your logfile path' SIZE your logfile size
DATAFILE
'your datafile path',
'your datafile path'
After that open the database with RESETLOGS
then shutdown the database
SQL>shu
Now multiplex the control file and mention the new paths in the init file.
Then take a complete closed backup (back up your datafiles, control files and logfiles).
Then startup the database
SQL> Startup
Now your database is ready to USE.
Here is an example:
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL"
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 14
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'E:\ORACLE\ORADATA\ORCL\REDO01.LOG' SIZE 100M,
GROUP 2 'E:\ORACLE\ORADATA\ORCL\REDO02.LOG' SIZE 100M,
GROUP 3 'E:\ORACLE\ORADATA\ORCL\REDO03.LOG' SIZE 100M
DATAFILE
'E:\ORACLE\ORADATA\ORCL\UNDOTBS01.DBF',
'E:\ORACLE\ORADATA\ORCL\EXAMPLE01.DBF',
'E:\ORACLE\ORADATA\ORCL\INDX01.DBF',
'E:\ORACLE\ORADATA\ORCL\TOOLS01.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS01.DBF',
'E:\ORACLE\ORADATA\ORCL\OEM_REPOSITORY.DBF',
'E:\ORACLE\ORADATA\ORCL\CWMLITE01.DBF',
'E:\ORACLE\ORADATA\ORCL\DRSYS01.DBF',
'E:\ORACLE\ORADATA\ORCL\ODM01.DBF',
'E:\ORACLE\ORADATA\ORCL\XDB01.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS02.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS03.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS04.DBF';
SQL>ALTER DATABASE OPEN RESETLOGS;
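One step the recipe above leaves implicit: tempfiles are not listed in CREATE CONTROLFILE, so after OPEN RESETLOGS the temp tablespace usually has no tempfile and it needs to be re-added (the path and size here are examples matching the layout above):

```sql
-- Re-add the tempfile after OPEN RESETLOGS; REUSE overwrites an
-- existing file of the same name on disk.
ALTER TABLESPACE temp
  ADD TEMPFILE 'E:\ORACLE\ORADATA\ORCL\TEMP01.DBF' SIZE 500M REUSE;
```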
And one more thing:
To rename the database, change REUSE to SET DATABASE in the CREATE CONTROLFILE script.
Regards
S.Senthil Kumar -
How to recover datafiles when control file autobackup is off
hi friend
I took a hot backup of my database using the following command
rman>backup database;
in my case control file autobackup is off,
and I have lost all my control files and datafiles; only the spfile remains.
I have recovered the control file using the dbms_backup_restore package.
Now I am able to mount the database using the following command
rman> startup mount;
When I run the following command
rman> restore database;
I get the following error:
RMAN-06023: no backup or copy of datafile name found to restore
even though I have a backup of the database.
Can anybody tell me how to recover the datafiles in this case?
thanking you
sohail
hi,
I think you might have a problem here as the error from RMAN is described in the following metalink note
Doc ID: Note:100565.1
You should change your backup script to something like
run {
backup database include current controlfile;
}
do you have any earlier backups of your database?
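To avoid this situation in the future, control file autobackup can also simply be switched on; a sketch:

```sql
-- Run once at the RMAN prompt; afterwards every BACKUP (and structural
-- change to the database) also writes a control file and spfile
-- autobackup, which RESTORE CONTROLFILE FROM AUTOBACKUP can locate.
CONFIGURE CONTROLFILE AUTOBACKUP ON;
```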
regards
Alan -
.pdf and .doc files corrupted on 2 Macs?
On an iMac and a shared Mac Mini, a number of PDF and DOC files have all of a sudden become corrupt. Some of these have not been modified in 2 years but are suddenly corrupt. I've tried Data Rescue and cannot find anything online about multiple files getting corrupted like this. I've repaired permissions and run various Mountain Lion cache-cleaner options to no avail. Has anyone heard of anything like this? It happened all of a sudden on 2 computers. At first I figured the first computer's partition was going bad, but that doesn't explain 2 computers at once.
Dear M.V,
with the same configuration as above, I am now able to open PDFs that are less than 2 MB in size.
Below is the access log:
127.0.0.1 - - [13/Feb/2008:15:04:36 +0530] "GET /pdfcheck.php?file=CampusMap HTTP/1.1" 200 2000000
Below is the error log:
[13/Feb/2008:15:10:49] warning ( 3288): for host 127.0.0.1 trying to GET /pdfcheck.php, finish-response reports: HTTP2228: Response content length mismatch (2000000 bytes with a content length of 2535786)
PHP code:
<?php
if(!isset($_GET['file']))die('LOGGED! no file specified');
$file_path=$_SERVER['DOCUMENT_ROOT'].'/pdfs/'.strip_tags(htmlentities($_GET['file'])).'.pdf';
$file_name = $_GET['file'];
$mm_type="application/pdf";
header("Cache-Control: public, must-revalidate");
header("Pragma: hack");
header("Content-Type: " . $mm_type);
header("Content-Length: " .(string)(filesize($file_path)) );
header('Content-Disposition: inline; filename="'.$file_name.'"');
header("Content-Transfer-Encoding: binary\n");
readfile($file_path);
?>
Thanks
madhu -
Renaming datafiles in control files with database mounted but not open
Hi,
I moved a database (physical files) from one server to another. I need to modify the contents of the control files since the directory structure of the servers are not the same (and I can't change that).
I know I can use ALTER DATABASE BACKUP CONTROLFILE TO TRACE to produce a script that I can than modify and run with the instance started, database mounted but not open, and that will recreate the control files. I don't want to do that.
I was also told I can modify the datafile entries in the control files by starting the instance and mounting, but not opening, the database. Then I can issue (this is the part I need help with) ALTER DATABASE RENAME FILE <file1> TO <file2>. When I tried this, it complained that <file1> was not found. Obviously the command I used is not the right one; what is the right command for what I want to do?
Thanks,
Gabriel
Move all datafiles from one directory to another without recreating the control file:
SYS@DEMO102> select file_name from dba_data_files
2 union
3 select member from v$logfile
4 union
5 select file_name from dba_temp_files
6 union
7 select name from v$controlfile;
FILE_NAME
E:\ORACLE\ORADATA\DEMO102C\CONTROL01.CTL
E:\ORACLE\ORADATA\DEMO102C\CONTROL02.CTL
E:\ORACLE\ORADATA\DEMO102C\CONTROL03.CTL
E:\ORACLE\ORADATA\DEMO102C\EXAMPLE01.DBF
E:\ORACLE\ORADATA\DEMO102C\REDO01.LOG
E:\ORACLE\ORADATA\DEMO102C\REDO02.LOG
E:\ORACLE\ORADATA\DEMO102C\REDO03.LOG
E:\ORACLE\ORADATA\DEMO102C\SYSAUX01.DBF
E:\ORACLE\ORADATA\DEMO102C\SYSTEM\SYSTEM01.DBF
E:\ORACLE\ORADATA\DEMO102C\TBS102_1.DBF
E:\ORACLE\ORADATA\DEMO102C\TBS102_2.DBF
E:\ORACLE\ORADATA\DEMO102C\TEMP01.DBF
E:\ORACLE\ORADATA\DEMO102C\UNDOTBS01.DBF
E:\ORACLE\ORADATA\DEMO102C\USERS01.DBF
14 rows selected.
SYS@DEMO102> create pfile='E:\oracle\admin\DEMO102\pfile\pfile102.ora' from spfile;
File created.
SYS@DEMO102> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
Here, I move all the datafiles mentioned above, and modify my pfile for the new control file directory. Then:
SYS@DEMO102> startup pfile=E:\oracle\admin\DEMO102\pfile\pfile102.ora
ORACLE instance started.
Total System Global Area 272629760 bytes
Fixed Size 1288940 bytes
Variable Size 163579156 bytes
Database Buffers 100663296 bytes
Redo Buffers 7098368 bytes
Database mounted. --Note that we are in mount state
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'E:\ORACLE\ORADATA\DEMO102C\SYSTEM\SYSTEM01.DBF'
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\USERS01.DBF' to 'E:\ORACLE\ORADATA\demo102\USERS01.DBF';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\SYSAUX01.DBF' to 'E:\ORACLE\ORADATA\demo102\SYSAUX01.DBF';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\UNDOTBS01.DBF' to 'E:\ORACLE\ORADATA\demo102\UNDOTBS01.DBF';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\SYSTEM\SYSTEM01.DBF' to 'E:\ORACLE\ORADATA\demo102\SYSTEM\SYSTEM01.DBF';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\EXAMPLE01.DBF' to 'E:\ORACLE\ORADATA\demo102\EXAMPLE01.DBF';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\TBS102_1.DBF' to 'E:\ORACLE\ORADATA\demo102\TBS102_1.DBF';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\TBS102_2.DBF' to 'E:\ORACLE\ORADATA\demo102\TBS102_2.DBF';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\REDO01.LOG' to 'E:\ORACLE\ORADATA\demo102\REDO01.LOG';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\REDO02.LOG' to 'E:\ORACLE\ORADATA\demo102\REDO02.LOG';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\REDO03.LOG' to 'E:\ORACLE\ORADATA\demo102\REDO03.LOG';
Database altered.
SYS@DEMO102> alter database rename file 'E:\ORACLE\ORADATA\DEMO102C\TEMP01.DBF' to 'E:\ORACLE\ORADATA\demo102\TEMP01.DBF';
Database altered.
SYS@DEMO102> alter database open;
Database altered.
SYS@DEMO102> create spfile from pfile='E:\oracle\admin\DEMO102\pfile\pfile102.ora';
File created.
SYS@DEMO102> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS@DEMO102> startup
ORACLE instance started.
Total System Global Area 272629760 bytes
Fixed Size 1288940 bytes
Variable Size 163579156 bytes
Database Buffers 100663296 bytes
Redo Buffers 7098368 bytes
Database mounted.
Database opened.
SYS@DEMO102> select file_name from dba_data_files
2 union
3 select member from v$logfile
4 union
5 select file_name from dba_temp_files
6 union
7 select name from v$controlfile;
FILE_NAME
E:\ORACLE\ORADATA\DEMO102\CONTROL01.CTL
E:\ORACLE\ORADATA\DEMO102\CONTROL02.CTL
E:\ORACLE\ORADATA\DEMO102\CONTROL03.CTL
E:\ORACLE\ORADATA\DEMO102\EXAMPLE01.DBF
E:\ORACLE\ORADATA\DEMO102\REDO01.LOG
E:\ORACLE\ORADATA\DEMO102\REDO02.LOG
E:\ORACLE\ORADATA\DEMO102\REDO03.LOG
E:\ORACLE\ORADATA\DEMO102\SYSAUX01.DBF
E:\ORACLE\ORADATA\DEMO102\SYSTEM\SYSTEM01.DBF
E:\ORACLE\ORADATA\DEMO102\TBS102_1.DBF
E:\ORACLE\ORADATA\DEMO102\TBS102_2.DBF
E:\ORACLE\ORADATA\DEMO102\TEMP01.DBF
E:\ORACLE\ORADATA\DEMO102\UNDOTBS01.DBF
E:\ORACLE\ORADATA\DEMO102\USERS01.DBF
14 rows selected.
SYS@DEMO102>
Nicolas.