Instance fast recovery and redolog file size

Hi,
Could you please explain to me how the size of the redo log files helps an instance recover faster?
Thanks
KSG

Very quickly, I shall try to explain.
The answer lies in the number of dirty buffers needed for recovery. That number is limited by the checkpoint process, which pings DBWR to write some of them to the datafiles, and every log switch triggers a checkpoint. So if the log files are smaller, checkpoints happen more frequently, dirty buffers are written to the datafiles more aggressively, and the time needed for instance recovery is limited. The bigger you make the log files, the longer it takes for a checkpoint to happen, and thus the more time instance recovery would require. That said, undersized log files can also lead to "checkpoint not complete" messages, since DBWR may not be able to keep up with the rate at which checkpoint events are generated.
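To see where you stand, here is a small sketch using the standard dynamic performance views (nothing beyond v$log and v$log_history is assumed):

-- current redo log group sizes
SELECT group#, bytes/1024/1024 AS size_mb FROM v$log;
-- log switches (and hence checkpoints) per hour
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM v$log_history
GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER BY 1;

Also note that from 9i onwards, FAST_START_MTTR_TARGET lets you bound the expected instance recovery time directly instead of relying on the log file size alone.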
HTH
Aman....

Similar Messages

  • Increase redolog file size - Merits and Demerits

    Hi
    Currently we are on Oracle version 9.2.0.7.0 and have redo log files (mirrlog and origlog) of 100 MB.
    Now we are planning to increase the size to 200 MB so that we can reduce the number of archive log files.
    Can you please let me know what the demerits of a bigger redo log file size would be?
    And can you also let me know the step-by-step process for increasing the size of the redo log files?
    Thank you

    > I understand what you are saying but in our situation our backup policy is one time online backup  and one time offline backup in a week.....Online backup is on Thu and Offline backup is on Sunday.......
    >
    > In case of a system crash, if needed, we would need to apply archive log files; if we have a smaller number of archive logs, recovering the database would be faster.......correct me if I am wrong.
    You are wrong.
    Ok, let's see an example:
    You took your backup on Sunday midnight and your DB needs recovery on Wednesday.
    Meanwhile you created, say, 800 MB worth of redo log data per day.
    That sums up to (Monday, Tuesday, Wednesday) 3 x 800 MB = 2400 MB that need to be recovered.
    Going with your current setup (100 MB redo log size), the largest archive log file can be 100 MB, which makes 24 files to restore and recover.
    After changing the redo log size to, say, 200 MB, you only have 12 files to restore and recover.
    But you know what? It's still 2400 MB of data.
    Since you will likely not put every archive log file on its own tape, but rather change the tape each day (just an assumption), or maybe don't use manually operated tapes at all, the little latency overhead of handling tapes doesn't count toward your overall recovery time.
    All in all, you still need to feed the same amount of data to the recovery process.
    Apart from this:
    if you're discussing short recovery times, then you'd never perform just two data backups a week.
    You'd make online backups every day, maybe incremental ones.
    You'd use the flash recovery area.
    An additional thing often overlooked: in many cases the ultimate performance killer in a restore/recovery scenario is not the technology in use.
    It's that when the case arises, the DBA is no longer sure what to do.
    He wonders:
    Where the good backups are.
    How to get them back from the 3rd-party backup tool.
    How to check them.
    Where to get a different storage system because the original one is broken.
    How to figure out what needs recovery.
    How the tools work.
    Ensuring that you always master the theory and the how-to of restore and recovery is how you make it quick and painless (and free of data loss).
    regards,
    Lars
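    For reference, since the question also asked for the steps: a minimal sketch of the usual resize procedure (the group numbers and file paths below are assumptions; adapt them to your own layout and mirror structure):

    -- add new, larger groups
    ALTER DATABASE ADD LOGFILE GROUP 4 ('/oracle/SID/origlogA/redo04.log') SIZE 200M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/oracle/SID/origlogB/redo05.log') SIZE 200M;
    -- switch until no old group is CURRENT or ACTIVE, then checkpoint and verify
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    SELECT group#, status, bytes/1024/1024 AS mb FROM v$log;
    -- drop each old group once it is INACTIVE, then remove its files at OS level
    ALTER DATABASE DROP LOGFILE GROUP 1;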

  • Editing PDF and keeping file size small

    We are having a lot of difficulty keeping the PDF file small after editing. Our PDFs have text and fields and are used in a browser. They need to be updated multiple times a year to reflect changes in text.
    The PDF was created from MS Word 2010 using Acrobat X.
    The Word document had only Arial and Verdana fonts.
    When the PDF was created from Word, the settings were set to "Never Embed" fonts. Up to this point, all seems fine.
    Now, the moment we try to edit text (even add a letter), Acrobat prompts that fonts may be embedded, and the file size increases by 20-30 KB.
    Checking File > Properties > Fonts shows that the font has changed to "Arial, Bold  Type: True Type (CID), Encoding: Identity-H". Before editing, the font type was not CID.
    Under Optimization, there is no embedded font, and we unchecked "subset all embedded fonts" in optimization, then saved the file.
    Still, the file size is higher by 10-20 KB, though only one word was added.
    Questions:
    How do we avoid font type CID getting added automatically by Acrobat? It seems to take more space and we don't need it. We couldn't find a way to remove/replace it.
    How do we keep the file size from increasing much? Every time we edit, with only a few words added, and then optimize, etc., the file size still increases by 10-20 KB.
    thanks
    apjs

    As these PDFs are used as electronic agreement forms and have fields/JavaScript, recreating them from Word requires a lot of work. We do understand that if there are substantial changes then we should recreate from Word, as PDF is not designed for major editing. However, in most cases we are trying to edit a few lines of text, and the file size is still increasing by 10-20 KB.
    We have tried Save As, but so far the optimization option gives us better results in terms of file size reduction.
    We do use common fonts, like Arial, Verdana, and Times New Roman. We understand that we should embed uncommon fonts, but we avoid uncommon fonts since embedding increases the PDF file size.
    CID is coming up even when a common font like Arial is being used.
    Thanks for trying to help us.

  • Problem exporting '.txt' file size 23 KB and '.zip' file size 4 MB

    I am using an APEX 3.0 screen to upload a '.txt' file and a '.zip' file containing images.
    I can successfully export the '.txt' file and the '.zip' file from database table 'TBL_upload_file' to the OS directory on the server as long as the '.txt' file is < 23 KB and the '.zip' file is < 4 MB.
    Processing larger files (35 KB and 6 MB) produces the following error:
    'ORA-21560: argument 2 is null, invalid or out of range'
    Here is my code. I am using the following procedure to export documents from database table 'TBL_upload_file' to the OS directory on the server:
    create or replace procedure "PROC_LOAD_FILES_TO_FLDR_BYTES"
      (pchr_text_file IN VARCHAR2,
       pchr_zip_file  IN VARCHAR2)
    is
      lzipfile    varchar(100);
      lzipname    varchar(100);
      sseq        varchar(1000);
      ldocname    varchar(100);
      lfile       varchar(100);
      -- loaddoc (p_file in number) as
      l_file      UTL_FILE.FILE_TYPE;
      l_buffer    RAW(32000);
      l_amount    NUMBER := 32000;
      l_pos       NUMBER := 1;
      l_blob      BLOB;
      l_blob_len  NUMBER;
      l_file_name varchar(200);
      l_doc_name  varchar(200);
      a_file_name varchar(200);
      end_pos     NUMBER;
    begin
      -- Get the LOB locator for the text file
      SELECT blob_content, doc_name
        INTO l_blob, l_file_name
        FROM tbl_upload_file
       WHERE doc_name = pchr_text_file;
      -- get the length of the blob
      l_blob_len := DBMS_LOB.getlength(l_blob);
      -- save the blob length to determine the end position
      end_pos := l_blob_len;
      -- Open the destination file.
      -- l_file := UTL_FILE.fopen('BLOBS','MyImage.gif','w', 32767);
      l_file := UTL_FILE.fopen('BLOBS', l_file_name, 'WB', 32760); -- use the write-byte option supported in 10g
      IF l_blob_len < 32760 THEN
        -- small enough for a single write
        utl_file.put_raw(l_file, l_blob);
        utl_file.fflush(l_file);
      ELSE
        -- write in pieces: read chunks of the BLOB and write them to the file until complete
        WHILE l_pos < l_blob_len LOOP
          DBMS_LOB.read(l_blob, l_amount, l_pos, l_buffer);
          UTL_FILE.put_raw(l_file, l_buffer);
          utl_file.fflush(l_file); -- flush pending data to the file
          -- set the start position for the next cut
          l_pos := l_pos + l_amount;
          -- shrink the chunk size for the final piece; end_pos tracks the bytes remaining
          end_pos := end_pos - l_amount;
          IF end_pos < 32000 THEN
            l_amount := end_pos;
          END IF;
        END LOOP;
      END IF;
      --- zip file
      -- Get the LOB locator for the zip file
      SELECT blob_content, doc_name
        INTO l_blob, l_doc_name
        FROM tbl_upload_file
       WHERE doc_name = pchr_zip_file;
      l_blob_len := DBMS_LOB.getlength(l_blob);
      -- save the blob length to determine the end position
      end_pos := l_blob_len;
      -- Open the destination file.
      -- l_file := UTL_FILE.fopen('BLOBS','MyImage.gif','w', 32767);
      l_file := UTL_FILE.fopen('BLOBS', l_doc_name, 'WB', 32760); -- use the write-byte option supported in 10g
      IF l_blob_len < 32760 THEN
        -- small enough for a single write
        utl_file.put_raw(l_file, l_blob);
        utl_file.fflush(l_file); -- flush out pending data to the file
      ELSE
        -- write in pieces: read chunks of the BLOB and write them to the file until complete
        l_pos := 1;
        WHILE l_pos < l_blob_len LOOP
          DBMS_LOB.read(l_blob, l_amount, l_pos, l_buffer);
          UTL_FILE.put_raw(l_file, l_buffer);
          UTL_FILE.fflush(l_file); -- flush pending data to the file
          l_pos := l_pos + l_amount;
          -- shrink the chunk size for the final piece; end_pos tracks the bytes remaining
          end_pos := end_pos - l_amount;
          IF end_pos < 32000 THEN
            l_amount := end_pos;
          END IF;
        END LOOP;
      END IF;
      -- Close the file.
      IF UTL_FILE.is_open(l_file) THEN
        UTL_FILE.fclose(l_file);
      END IF;
    exception
      WHEN NO_DATA_FOUND THEN
        RAISE_APPLICATION_ERROR(-20214, 'Screen fields cannot be blank, Proc_Load_Files_To_Fldr_BYTES.');
      WHEN TOO_MANY_ROWS THEN
        RAISE_APPLICATION_ERROR(-20215, 'More than one record exists in the tbl_load_file table, Proc_Load_Files_To_Fldr_BYTES.');
      WHEN OTHERS THEN
        -- Close the file if something goes wrong.
        IF UTL_FILE.is_open(l_file) THEN
          UTL_FILE.fclose(l_file);
        END IF;
        RAISE_APPLICATION_ERROR(-20216, 'Some other error occurred, Proc_Load_Files_To_Fldr_BYTES.');
    end;
    I am new to Oracle.
    Any help to modify this script and resolve this problem will be greatly appreciated.
    Thank you.

    Ask this question in the Apex forums. See Oracle Application Express (APEX)
    Regards Nigel
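    For what it's worth, a likely culprit, offered as a sketch rather than a confirmed diagnosis: DBMS_LOB.read takes the amount as an IN OUT parameter, and the text-file loop drives l_amount down to 0 on its final chunk. Since l_amount is never reset before the zip-file section, the next DBMS_LOB.read can be called with an amount of 0, which raises ORA-21560 (argument 2 out of range). Resetting the chunk size before the second loop should address that:

    -- reset the chunk size before the zip-file loop (l_pos is already reset to 1 there)
    l_amount := 32000;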

  • Change in Oracle Parameters and Log file size

    Hello All,
    We have a scheduled DB check job, and its log showed a few errors and warnings about Oracle parameters that need to be corrected. We have also gone through SAP Note #830576 – Oracle Parameter Configuration to change these parameters accordingly. However, we need a few clarifications on the same.
    1. Can we change these parameters directly in the init<SID>.ora file, or only in the SPFILE? If the former, can we simply edit the file, or do we need to change it using BR*Tools?
    2. We have tried to change a few parameters using the DB26 tcode, but it prompts for maintaining the connection variables in the DBCO tcode. We try to make the change only in the default database, but it still prompts for connection variables.
    We also get a checkpoint error. As per Note 309526, can we create new log files with a size of 100 MB and drop the existing ones? Or are there other considerations we need to follow regarding the size of the log files and creating new ones? Kindly advise. Our environment is as follows.
    OS: Windows 2003 Server
    DB: Oracle 10g
    regards,
    Madhu

    Hi,
    Madhu, we can change Oracle parameters at both levels, i.e., in init<SID>.ora as well as at the SPFILE level.
    1. If you make the changes at the init<SID>.ora level, then you have to generate the SPFILE again, and the database has to be restarted for the parameters to take effect.
        If you make the changes in the SPFILE, then the parameters take effect depending on whether the parameter type is dynamic or static. You also need to regenerate the PFILE, i.e., init<SID>.ora.
    2. If possible, do not change the Oracle parameters using the tcode. I would say it is better, and much easier, to do it via the database.
    3. Well, it's always good to have a larger redo log size. The one thing to keep in mind is that once you change the size of the redo logs, the size of the archive logs also changes, although the number of files will decrease.
    Apart from that there won't be any issues.
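    A minimal sketch of the SQL involved (the parameter names below are only examples; take the actual parameters and values from the SAP note):

    -- static parameter: stage the change in the SPFILE, then restart the database
    ALTER SYSTEM SET log_buffer = 14238720 SCOPE = SPFILE;
    -- dynamic parameter: apply immediately and persist
    ALTER SYSTEM SET db_cache_size = 800M SCOPE = BOTH;
    -- regenerate the PFILE so it stays in sync with the SPFILE
    CREATE PFILE FROM SPFILE;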
    Regards,
    Suhas

  • OCR and Reducing file size

    I have a large document (a book) that I am trying to scan. I will be scanning it chapter by chapter. The book was printed in grayscale, so I don't have a pure BLACK AND WHITE document. I would like to optimize the file size, but I have a few questions about that.
    Currently running:
    Windows 7
    Acrobat Pro X
    Epson GT-S80 High-speed scanner
    1. What is a good typical workflow? I have tried scanning the documents to PDF using the scanner's software and then opening them in Acrobat to OCR them. I have tried using Acrobat's Scan feature with OCR as one of the steps in the scanning process. I have tried letting both programs do their own color-mode detection, where they mix black-and-white and grayscale to reduce the file size, but I have typically told them to stick with grayscale because that gives me the cleanest and clearest document. Does anyone have recommendations for getting a good quality image while mixing black-and-white and grayscale, or should I keep using just grayscale?
    2. I am having some trouble, I think, with the file size. I have a 12-page document that I believe was either scanned at 300 dpi, or scanned at full resolution and then downsampled to 300 dpi because I used ClearScan. I don't remember exactly, but the file is about 2.20 MB in size, which runs about 185 KB per page. I would think there would be a way to get a smaller file.
    3. For text recognition purposes, this document is not ideal because it is a collection of PowerPoint slide sheets (2-3 slides per page), and in some cases there is text on top of images in the slides, which seems very hard to discern.
    4. Once a document has been scanned and OCR has been run on it, I was under the impression that the OCR result is in a separate layer, and that (if Searchable Text is chosen) you basically have a scanned image with another layer of searchable text. Because the OCR'd text is "there somewhere", is it possible to remove the scanned image text and keep just the raw recognized text, similar to if I had created the document in Word and made a PDF from it?
    5. Sort of back to number 1: suppose I am stuck with leaving the scanned image in place and just running OCR; what is the optimal way to reduce the file size of the PDF? I had read that scanning at 600 dpi may help with text recognition. The same article suggested doing the higher-resolution scan and using ClearScan because it would a) recognize the text better and b) convert the text image to actual text and reduce the file size. From there, should I then just run the PDF Optimizer to downsample the images to a certain DPI to further reduce the size?
    Hopefully you all can understand what I am saying and help fill in some gaps.
    Thanks,
    Ian

    Let us know if this tutorial helps you with your workflow: Acrobat X: Taking the guesswork out of scanning to PDF.

  • Exporting Pdf's for the Web, maintaining quality and keeping file size down

    1) I don't have acrobat Pro yet.
    I've been trying to export a small 14-page document for email, and I can't get the file size below 26 MB even when I'm compromising images to an extremely poor quality. I did not have this problem with CS3 previously. Not sure why the file size is so high. I don't want to have to downsample everything in other programs; that would kind of defeat the purpose of proofing out the program in the first place. I'm wondering if the extra file size is related to metadata?

    What preset are you using? How many images, and how large are they? How much vector art?
    The first thing to do is to remove all unused swatches and styles from the document, then do a "save as" to remove the excess undo information, then try again.
    And next time you should ask this type of question in the InDesign forum for your platform. This area is for discussing new features users would like to see. :)
    Peter

  • Projector and dir file size grow 7X unexpectedly - dcr remains the same.

    I'm updating a Shockwave project that is also available for download (so size matters, and I've got a dcr and an exe of the same project). Last year the file sizes were as follows: dir = 933 KB, dcr = 203 KB, exe = 4,857 KB ... all reasonable. I started with last year's file, eliminated numerous redundant scripts and cast members (including an unused font), combined the functionality of scripts that were similar, streamlined the operation of other scripts, updated some internal data, replaced the one and only bmp with a new one for this year (same size), saved and compacted ... and for all my effort to clean up this year's version I get the following: dir = 33.9 MB, dcr = 224 KB, exe = 37.5 MB. The xtras are an obvious suspect, but I went down the list and they're the same as last year, the only outside xtra being POM (which hasn't changed since 2006). I recompiled last year's project and the sizes were still reasonable, so Director isn't broken. Anyone know what's up?
    PS: I made a dummy copy of this year's movie, eliminated everything (all sprites, all cast members, all xtras), saved and compacted, and the dir file is still 33 MB?!?!?
    PPS: Well, I copied everything into a fresh document, spent about an hour making sure scripts not attached to sprites didn't get left behind, re-attached the required xtras, and now I'm back down to a 4.4 MB projector, but I'm not sure why.

    Thanks Mike. The odd thing is the results above were after a "save and compact" (as a matter of habit I always compact). Even with everything stripped out of the file I couldn't get Director to jettison whatever garbage had gotten lodged in. On the plus side, moving the cast and score to a fresh document not only cured the problem but also allowed me to eliminate some legacy xtras that were no longer being used (this project has been updated yearly since 2002).
    PS: Flash's memory and file size audit report is really nice for troubleshooting this kind of problem; it would be nice if Director got the same feature.

  • How do I completely crop a PDF so that the cropped data is removed and the file size is reduced?

    How do I completely crop a PDF so that the cropped data is removed and the total file size is reduced?
    When I use the "Crop" function, the cropped data still remains in the file and there is no reduction in file size. I need a way to truly crop a PDF using Acrobat software.

    When you export, try to get the full file path or else you will have to do a lot of manual searching.
    If you downloaded the picture from Messages, the picture is stored in your User Library/Messages. To make your User Library visible, hold down the Option key while using the Finder "Go To Folder" command. Enter ~/Library/Messages/Attachments.
    If you prefer to make your user library permanently visible, use the Terminal command found below.
    http://osxdaily.com/2011/07/04/show-library-directory-in-mac-os-x-lion/
    You might want to bookmark the command. I had to use it again after I installed 10.8.4. I have also been informed that if you drag the user library to Finder it will remain visible.

  • SHA256 and LARGE file sizes

    Some of the people who digitally sign a PDF add about 32 KB to the file size (AA Pro 9.0.0, hash algorithm SHA1), while others who sign the same PDF add about 3,200 KB to the file size (AA Pro 9.3.4, hash algorithm SHA256).
    We had a 79 KB doc file that was 12 MB in size after 4 signatures using 9.3.4; the same file with 4 signatures was 217 KB using 9.0.0.
    Question 1) Why are the 9.3.4 signatures adding 3 MB to the file for each signature?
    Question 2) Where can we control which hash alg someone uses (get everybody using SHA1)?

    No, they do not. All our signatures are created the same way using the same procedure; the only difference is the Adobe version.

  • Question on recovery of redolog file

    Hi,
    How can I open the database
    when there is a block corruption error in a redo log file?
    There are two archive redo log groups,
    and the database is in noarchivelog mode.
    Thanks in advance

    I think this is a very difficult situation. You cannot proceed with opening your database, as your redo log is corrupt (the current one, I guess :( ). Your database is in noarchivelog mode, which makes it more difficult.
    When you say you have two archive log groups, do you mean redo log groups?
    Are those groups multiplexed?
    In that case you can still recover from this error by dropping the corrupt redo log file and using the valid surviving redo log member.
    If you have only two redo log groups, no multiplexed members, and no archivelog mode, then I suggest you dump the contents of your redo log file and detect up to which point you can still find a valid SCN, so you can execute an incomplete recovery until that SCN.
    The procedure to dump the contents of your redo log file is:
    alter system dump logfile '/xxxx';
    This will create a text file with the contents of your redo log file; once you have it, you can check up to which valid SCN you can get. Next, perform an incomplete recovery until that found SCN.
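    As a sketch of that last step (the SCN below is purely a placeholder for whatever the dump shows as the last valid one, and usable copies of the datafiles are assumed):

    STARTUP MOUNT;
    RECOVER DATABASE UNTIL CHANGE 1234567;
    ALTER DATABASE OPEN RESETLOGS;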
    ~ Madrid.

  • Simple crop and the file size triples

    Good day,
    I'm trying to figure out why a video encoded using AME comes out with a much bigger file size than the original, even if all I'm doing is using it to trim the source file. No change in settings, bit rate, dimensions, etc.
    In the current example I'm taking a WebEx recording and trimming out some dead space at the beginning. I've lowered every setting (video and audio) I can to its lowest potential, and the resulting file looks worse than the original and is triple the file size. Both the source and the result are .mp4.
    Please assist. This is making the files go from email-able to not. Making a video shorter shouldn't make it larger....
    Thank you for your help.

    When encoding to virtually ANY format, you will be able to set the BITRATE. This determines how big or small an output file is. Apparently, the file you are creating is running about 3x the bitrate of the source file. You can adjust the bitrate under the VIDEO tab in AME.
    Not knowing the specs of the source, I can't suggest how to export. Perhaps if you post a screen shot of your export panel in AME, that will help. Note that if you choose "HD 1080p" H.264 preset that will be much larger than say "YouTube 1080p", there are many options.
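    As a rough sanity check (an illustrative calculation, not figures from this thread): output size is approximately bitrate x duration, so 10 minutes at 5 Mbps comes to about (5,000,000 / 8) bytes/s x 600 s = 375 MB. An output running at 3x the source's bitrate will therefore be about 3x the size for the same duration, no matter how much was trimmed.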
    Thank you
    Jeff Pulera
    Safe Harbor Computers

  • How to restore a database with only dbf, ctl, and redolog files

    Hi friends,
    I had an Oracle 11g database, but the server crashed. I was able to rescue only the DBF, CTL, and redo log files; I couldn't rescue the spfile or the password file.
    How can I restore this database on another server? How can I recreate the spfile?
    I hope someone can give me a hand.
    Thanks.

    Thanks Hemant K Chitale.
    And exactly as EdStevens said, I don't have any spfile or init file; I also don't have the alert log. I just have these files:
    CONTROL01.CTL
    CONTROL02.CTL
    CONTROL03.CTL
    REDO01.LOG
    REDO02.LOG
    REDO03.LOG
    SYSAUX01.DBF
    SYSTEM01.DBF
    TEMP01.DBF
    UNDOTBS01.DBF
    USERS01.DBF
    And I just know the original database was actually 10g Standard Edition on Windows Server 2008.
    What would be the steps to recover the database, and how could I create an spfile from the control files, redo logs, and DBF files?
    Best regards.
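    A minimal sketch of one common approach, assuming the rescued files are copied to known paths on the new server (the SID, paths, and parameter values below are placeholders, not details from the original post). First build a minimal pfile by hand:

    # initORCL.ora - hand-built minimal pfile
    db_name=ORCL
    control_files=('C:\oradata\ORCL\CONTROL01.CTL','C:\oradata\ORCL\CONTROL02.CTL','C:\oradata\ORCL\CONTROL03.CTL')
    compatible=10.2.0

    On Windows, create the instance service first (oradim -NEW -SID ORCL), then from SQL*Plus as SYSDBA:

    STARTUP PFILE='C:\oracle\initORCL.ora' MOUNT;
    -- if datafile paths differ from the originals, fix each one with
    -- ALTER DATABASE RENAME FILE '<old path>' TO '<new path>'; then
    ALTER DATABASE OPEN;
    CREATE SPFILE FROM PFILE='C:\oracle\initORCL.ora';
    -- finally, recreate the password file with the orapwd utility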

  • How do I find all databases' log file size and mdf file size?

    Hi experts,
    Could you share a query to find every database's log file size, mdf file size (including ndf files), and total DB size, in MB and GB?
    I have a task to capture the sizes of around 300 DBs.
    The desired output layout:
    DB_Name | Log_file_size | mdf_file_size | Total_db_size | MB | GB
    Thanks,
    Vijay

    Use this, Vijay:
    set nocount on
    Declare @Counter int
    Declare @Sql nvarchar(1000)
    Declare @DB varchar(100)
    Declare @Status varchar(25)
    Declare @CaptureDate datetime
    Set @Status = ''
    Set @Counter = 1
    Set @CaptureDate = getdate()
    Create Table #Size
    (
        SizeId int identity,
        Name varchar(100),
        Size int,
        FileName varchar(1000),
        FileSizeMB numeric(14,4),
        UsedSpaceMB numeric(14,4),
        UnusedSpaceMB numeric(14,4)
    )
    Create Table #DB
    (
        Dbid int identity,
        Name varchar(100)
    )
    Create Table #Status
    (status sql_variant)
    Insert Into #DB
    Select Name
    From Sys.Databases
    While @Counter <= (Select Max(Dbid) From #DB)
    Begin
        Set @DB =
        (
            Select Name
            From #DB
            Where @Counter = Dbid
        )
        Set @Sql = 'SELECT DATABASEPROPERTYEX(''' + @DB + ''', ''Status'')'
        Insert Into #Status
        Exec (@Sql)
        Set @Status = (Select convert(varchar(25), status) From #Status)
        If (@Status) = 'ONLINE'
        Begin
            Set @Sql =
                'Use [' + @DB + ']
                Insert Into #Size (Name, Size, FileName, FileSizeMB, UsedSpaceMB, UnusedSpaceMB)
                Select ''' + @DB + ''', size, FileName,
                    convert(numeric(10,2), round(size/128., 2)),
                    convert(numeric(10,2), round(fileproperty(name, ''SpaceUsed'')/128., 2)),
                    convert(numeric(10,2), round((size - fileproperty(name, ''SpaceUsed''))/128., 2))
                From sysfiles'
            Exec (@Sql)
        End
        Else
        Begin
            Set @Sql =
                'Insert Into #Size (Name, FileName)
                Select ''' + @DB + ''', ''' + @Status + ''''
            Exec (@Sql)
        End
        Delete From #Status
        Set @Counter = @Counter + 1
    End
    Select Name, Size, FileName, FileSizeMB, UsedSpaceMB, UnusedSpaceMB,
           right(rtrim(FileName), 3) as type, @CaptureDate as CaptureDate
    From #Size
    drop table #DB
    drop table #Status
    drop table #Size
    set nocount off
    Andre Porter
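    For comparison, on newer SQL Server versions a single query over sys.master_files gives a similar per-database summary (a sketch; note that size is stored in 8 KB pages):

    SELECT DB_NAME(database_id) AS DB_Name,
           SUM(CASE WHEN type_desc = 'LOG'  THEN size END) * 8 / 1024.0 AS Log_file_size_MB,
           SUM(CASE WHEN type_desc = 'ROWS' THEN size END) * 8 / 1024.0 AS mdf_file_size_MB,
           SUM(size) * 8 / 1024.0 AS Total_db_size_MB,
           SUM(size) * 8 / 1024.0 / 1024.0 AS Total_db_size_GB
      FROM sys.master_files
     GROUP BY database_id;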

  • SWF and FLV file sizes

    Hello,
    I have an flv file and the same flv file inside an swf file. Both files are the same size. So it seems that it doesn't matter whether you import an flv file or an swf file to keep your file size small. Is this correct?
    Thanks,
    Paul

    > I have an flv file and the same flv file in an swf file. Both files are the same size.
    That's just a coincidence.
    There is different baggage/overhead with FLV vs. SWF, so you will usually get some differences ... sometimes the SWF is smaller, other times the FLV. However, the actual video payload is the same in both, and the audio is similar too, and they are the bulk of the file size. So there shouldn't be a huge difference in file size between SWF and FLV.
    Jeckyl
