Invalid Stored Block Length - error during Oracle 12c DB installation on OEL6

Hi Guys,
I got the below error message during the installation of Oracle 12c DB on OEL6:
"Invalid Stored Block Length", and the installation terminated.
All the prerequisite checks passed, and now I am stuck at this step.
Can someone please help me with this?
Thanks,
Sunil

Here is the Error Log
INFO: Extracting files to '/u01/app/oracle/product/12.1.0/db_1'.
INFO: Performing fastcopy operations based on the information in the file 'oracle.server_EE_dirs.lst'.
INFO: Performing fastcopy operations based on the information in the file 'oracle.server_EE_filemap.jar'.
INFO: Performing fastcopy operations based on the information in the file 'racfiles.jar'.
INFO: Performing fastcopy operations based on the information in the file 'oracle.server_EE_exp_1.xml'.
INFO: Performing fastcopy operations based on the information in the file 'oracle.server_EE_1.xml'.
INFO: Performing fastcopy operations based on the information in the file 'setperms1.sh'.
INFO: Number of threads for fast copy :1
INFO: invalid stored block lengths
SEVERE: oracle.sysman.oii.oiif.oiifb.OiifbEndIterateException: invalid stored block lengths
  at oracle.sysman.oii.oiic.OiicInstallAPISession.doOperation(OiicInstallAPISession.java:490)
  at oracle.sysman.oii.oiic.OiicAPIInstaller.doOperation(OiicAPIInstaller.java:1009)
  at oracle.sysman.oii.oiic.OiicAPIInstaller.doOperation(OiicAPIInstaller.java:970)
  at oracle.install.driver.oui.OUISetupDriver.setup(OUISetupDriver.java:358)
  at oracle.install.driver.oui.SetupJob.call(SetupJob.java:315)
  at oracle.install.driver.oui.SetupJob.call(SetupJob.java:49)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  at java.lang.Thread.run(Thread.java:662)
INFO: Update the state machine to STATE_READY
INFO: isSuccessfullInstallation: false
INFO: isSuccessfullRemoteInstallation: true
INFO: Adding ExitStatus FAILURE to the exit status set
INFO: Shutting down OUISetupDriver.JobExecutorThread
SEVERE: [FATAL] invalid stored block lengths
   CAUSE: No additional information available.
   ACTION: Refer to the logs or contact Oracle Support Services
   SUMMARY:
       - invalid stored block lengths.
Refer associated stacktrace #oracle.install.commons.util.exception.DefaultErrorAdvisor:7314
INFO: Advice is ABORT
SEVERE: Unconditional Exit
INFO: Adding ExitStatus FAILURE to the exit status set
INFO: Dispose the current Session instance
INFO: Dispose the install area control object
INFO: Update the state machine to STATE_CLEAN
INFO: Finding the most appropriate exit status for the current application
INFO: Exit Status is -1
INFO: Shutdown Oracle Database 12c Release 1 Installer
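
The "invalid stored block lengths" message is thrown by java.util.zip while the installer extracts its archives, so it usually points at a corrupted or incomplete download of the installation zips rather than at the server itself. A minimal sketch of how the staged files could be re-checked before retrying (the file names, paths and checksum step are examples, not values from this thread):

$ cd /stage/12c
$ md5sum linuxamd64_12c_database_1of2.zip linuxamd64_12c_database_2of2.zip
  (compare against the checksums published on the OTN download page, if provided)
$ unzip -t linuxamd64_12c_database_1of2.zip
$ unzip -t linuxamd64_12c_database_2of2.zip
  ("No errors detected" means the archive itself is intact)

If either test fails, re-download that zip and extract both archives into the same empty staging directory before launching runInstaller again.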

Similar Messages

  • An Internal Error Occurred - Invalid Stored Block Length

    When trying to sync I am getting this on the WinCE client:
    CNS 9025,
    Invalid Stored Block Length.
    Any help?

    Note:395392.1
    Applies to:
    Oracle Lite - Version: 10.2.0.2.0
    This problem can occur on any platform.
    Symptoms
    The mobile user is not able to synchronize and fails with this error: "An internal error has occurred. invalid stored block lengths"
    There was no modification on the mobile client or the Mobile Server.
    Cause
    This error comes from Bug 5508302 "INVALID STORED BLOCK LENGTHS WHEN SYNCRONIZING"
    Solution
    Apply the one-off patch 5508302, which is available on Metalink, on your Mobile Server, following the patch Readme.txt.
    On the Mobile Client where the issue occurs, run the program c:\mobileclient\bin\update.exe to install the patch on the client.
    The other clients will receive the patch automatically via the dmagent.

  • Unzipping gives java.util.zip.ZipException: invalid stored block lengths

    When I try to unzip a file using java.util.zip I get the following exception:
    java.util.zip.ZipException: invalid stored block lengths
    Can anyone offer any suggestions as to why this is happening?

    GZIPInputStream uses mark() and reset(), so the InputStream you pass to the GZIPInputStream constructor must be able to handle these methods.
    The simplest way to achieve this is:
    new GZIPInputStream(new BufferedInputStream(yourInputStream));

  • Incorrect Block Length error when configuring SSL

    Hello, gurus:
    I am messing around with SSL configurations on WebLogic 6.0.2. I have generated a CSR, and placed my non-password-protected private key and CSR files in the /config/[my_test_domain] folder. I have received my test cert from VeriSign, which I have saved to /config/[my_test_domain] as cert.pem. Lastly, I copied off of VeriSign's site an Intermediate CA certificate (or Server Cert Chain), and saved that as ca.pem.
    Now when I attempt to start WebLogic, I am seeing the following Alert messages:
    ==============================================================
    <2001/08/07 12:03:04:JST> <Alert> <WebLogicServer> <There is an inconsistency in the security configuration: weblogic.security.AuthenticationException: Incorrect block length 64 (modulus length 128) possibly incorrect SSLServerCertificateChainFileName set for this server certificate.>
    weblogic.security.AuthenticationException: Incorrect block length 64 (modulus length 128) possibly incorrect SSLServerCertificateChainFileName set for this server certificate
    at weblogic.security.X509.verifySignature(X509.java:251)
    at weblogic.t3.srvr.SSLListenThread.<init>(SSLListenThread.java:440)
    at weblogic.t3.srvr.SSLListenThread.<init>(SSLListenThread.java:297)
    at weblogic.t3.srvr.T3Srvr.initializeListenThreads(T3Srvr.java:942)
    at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:403)
    at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:169)
    at weblogic.Server.main(Server.java:35)
    ==============================================================
    BTW, I am doing all of this on a Japanese (EUC_JP) OS, so I apologize if part
    of the above message is rendered illegible.
    Anyhow, does anyone have any idea as to what is bombing?
    Thanks in advance,
    Brooke

    Can you elaborate on what you did to get the root CA cert from VeriSign's repository page and convert it to DER format using OpenSSL? I've been trying to figure out how to do this for about a week now... I finally got VeriSign support to just email me a root CA cert, but I would like to know what you did. Did you just cut & paste the class 1 root CA from the repository page (http://www.verisign.com/repository/root.html) to a file? Where did you get OpenSSL and what did you do to convert the file to a DER? I looked at the OpenSSL site but I couldn't figure it out.
    Any help on this would be greatly appreciated. I can't believe how much time I have wasted looking into this...
    Kirk Everett
    Brooke wrote:
    "Brooke" <[email protected]> wrote:
    ...Lastly, I copied off of VeriSign's site an Intermediate CA
    certificate (or Server Cert Chain), and saved that as ca.pem.
    ..... And that was the whole problem. After doing more searching of the resources here, I discovered that the Server Certificate Chain File Name needed the Root Server CA cert from VeriSign. The solution was to copy VeriSign's Root Server CA cert from their repository page, and then use OpenSSL to transform that into a .der file. Using this .der file as the Server Certificate Chain File did the trick.

  • ORACLE 12c RAC installation

    Hello ,
    I'm trying to install an Oracle 12c cluster.
    I have a question about the OCR and voting disks:
    which disk is shared and which one is local?
    If both of them are shared, which ASM DG is created by the Grid installer, OCR or Voting? And how do I store both the voting files and the OCR on separate ASM disk groups?
    regards, Dia

    OCR and voting disks are shared.
    You may put them on a separate DG (my preference) or on one DG with the data.
    This may be helpful -- https://docs.oracle.com/database/121/CWSOL/storage.htm#CWSOL0003, https://docs.oracle.com/database/121/CWLIN/storage.htm, Oracle 12c RAC On your laptop Step by Step Implementation Guide 1.0, ORACLE-BASE - Oracle Database 12c Release 1 (12.1) RAC On Oracle Linux 6 Using VirtualBox.
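    If you prefer a dedicated disk group, you can either create it during the Grid installation (the installer creates whichever disk group you name and places the OCR and voting files in it) or add one afterwards and relocate them. A rough sketch only (disk paths and the OCRVOTE name are examples, not from this thread; ocrconfig runs as root):
    SQL> CREATE DISKGROUP OCRVOTE NORMAL REDUNDANCY
      2  DISK '/dev/asm-ocr1', '/dev/asm-ocr2', '/dev/asm-ocr3';
    # ocrconfig -add +OCRVOTE
    # ocrconfig -delete +DATA
    # crsctl replace votedisk +OCRVOTE
    Normal redundancy needs at least three disks so that three voting files can be placed.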

  • Minimum System Requirement for Oracle 12c Database Installation

    Hi,
           Can anybody please tell me the minimum requirements to install Oracle 12c Database on VirtualBox.
    Regards...
    Asit

    Enterprise Edition       6.4 GB
    Standard Edition         6.1 GB
    Standard Edition One     6.1 GB
    These are the pure HDD requirements.
    Then it's 1 GB of space in the /tmp directory.
    Minimum: 1 GB of RAM
    Recommended: 2 GB of RAM or more.
    Swap space requirement for Linux:
    RAM between 1 GB and 2 GB:   1.5 times the size of the RAM
    RAM between 2 GB and 16 GB:  equal to the size of the RAM
    RAM more than 16 GB:         16 GB
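    A quick way to check these numbers on the VirtualBox guest before starting the installer (a sketch; the mount points are examples):
    $ grep -E 'MemTotal|SwapTotal' /proc/meminfo
    $ df -h /tmp /u01
    MemTotal should be at least 1 GB (2 GB recommended), SwapTotal should follow the table above, /tmp needs 1 GB free, and /u01 (or wherever ORACLE_HOME will live) needs the 6.1 to 6.4 GB listed for your edition.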

  • Errors- Oracle 11g release2 installation on Windows7 (32Bit)

    Hi,
    Details:
    Oracle Version: win32_11gR2_database
    O.S. Windows7
    32-bit
    I have downloaded Oracle 11.2 Enterprise Edition from OTN, and while installing I am getting errors like:
    File not found F:\app\SURi\product\11.2.0\dbhome_3\oc4j\j2ee\oc4j_applications\applications\em.ear
    After clicking on the Continue button it shows the same "file not found" error with different file names. Twice I have downloaded the software from OTN and I get the same error while installing.
    Please help me in resolving these issues. Many thanks in advance for your help.
    Thanks,
    Suri

    Hi All,
    Q1) Oracle Version: win32_11gR2_database (32 bit)
    O.S. Windows XP
    I have followed all the above steps to install 11g. While installing I haven't seen any options to enter things manually, I mean not even the database password change and account unlock screens.
    It showed that the installation completed successfully without my interaction at any window, fine.
    But when I try to work with it I do not see a SQL*Plus icon to enter Oracle; I only see SQL Builder, and when I try that it asks for the Java path.
    Is a JDK installation needed for this?
    Q2) Oracle Version: win32_11gR2_database (32 bit)
    O.S. Windows 7 Ultimate 32 bit
    On this OS (where I had already installed the JDK) I am able to enter SQL Builder and it allows me to run queries, but the SQL Builder window looks like a cmd window. Was it installed properly? Here I also do not see a SQL*Plus icon as we generally use to enter Oracle. Here, while installing, I was able to change passwords and unlock accounts.
    Are any environment variable changes needed before installing, or any other changes required? Can someone please suggest a good step-by-step document to install win32_11gR2_database (32 bit) on Windows 7 32-bit and Windows XP?
    Thanks in Advance.
    Surendra

  • ORA-01017: invalid username/password; for Oracle 12c OEM installation

    Hi Experts,
    I am getting the following error during Oracle 12c OEM installation. I have no clue why; the SYS user password is correct in the response/new_install.rsp file.
    [oracle@sdp12 OEM_Packages]$ ./runInstaller -silent -responseFile /oracle/oracle8/OEM_Packages/response/new_install.rsp
    Starting Oracle Universal Installer...
    Checking Temp space: must be greater than 400 MB. Actual 1222 MB Passed
    Checking swap space: must be greater than 150 MB. Actual 4000 MB Passed
    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-02-24_06-22-19PM. Please wait ...[oracle8@sdp38 OEM_Packages]$
    ERROR: ERROR:Exception occurred while connecting to database. Check the connection details of the database you specified and retry.
    ORA-01017: invalid username/password; logon denied
    Unable to connect to the database and validate whether it is a supported database due to one of the following reasons:
    (1) Incorrect credentials
    (2) Listener may be down
    (3) Database may be down
    Check the credentials, the status of the listener and the database, and retry.
    I am able to connect using the same username and password:
    [oracle8@sdp38 response]$ sqlplus sys/sys512@TET1
    SQL*Plus: Release 11.2.0.1.0 Production on Sun Feb 24 18:29:15 2013
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL>
    From response/new_install.rsp file
    DATABASE_HOSTNAME=sdp12
    LISTENER_PORT=1521
    SERVICENAME_OR_SID=TET1
    SYS_PASSWORD=sys512
    SYSMAN_PASSWORD=sysman512
    SYSMAN_CONFIRM_PASSWORD=sysman512
    I have no clue here, any help greatly appreciated
    thanks

    A few things you can check:
    a) Is there a password file? Is the password correct in it?
    b) SYSDBA remote login is disabled, i.e. remote_login_passwordfile is not set to EXCLUSIVE in the spfile or init.ora.
    Set: remote_login_passwordfile=EXCLUSIVE
    Create a password file:
    Unix:
    $ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=sys entries=5
    Windows:
    C:\> orapwd file=%ORACLE_HOME%\database\pwd%ORACLE_SID% password=sys entries=5
    To synchronize the password for SYS for normal connections and connections as SYSDBA, connect as a SYSDBA user and reset the SYS password:
    $ sqlplus "/ as sysdba"
    SQL> ALTER USER SYS IDENTIFIED BY change_on_install;
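    You can confirm both points from SQL*Plus before re-running the OEM installer (a quick sketch):
    SQL> SHOW PARAMETER remote_login_passwordfile
    SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users;
    remote_login_passwordfile should be EXCLUSIVE and SYS should be listed with SYSDBA = TRUE; if the view returns no rows, the password file is missing or empty.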

  • Developer Error: invalid  window block name passed to set_window_position

    Hi
    I had Registered a Form in Oracle apps
    After that i'm trying to open the form,then this message will be displayed
    Developer Error: invalid  window block name passed to set_window_position
    In the PRE-FORM trigger I had written the code as apps.set_window_position('MY block name(MAIN_BLOCK)','MY window(MAIN_WINDOW)').
    After clicking on the OK button of the error message my form does open,
    but my form also does not have any Minimize, Maximize and Close buttons.
    Please tell me how I should resolve that error and also how I can get those buttons for the form.

    I agree with Francios, you should ask your question in the General EBS Discussion forum.
    "in the PreForm Trigger I had written the code as apps.set_window_position('MY block name(MAIN_BLOCK)','MY window(MAIN_WINDOW)')" -- You are using an Oracle E-Business Suite (EBS) framework procedure to set the window position, not the Forms standard Set_Window_Property built-in.
    "and My form is Also not having any Minimize, Maximize and Close Buttons" -- These could be controlled by the APPS.Set_Window_Position procedure, but I'm not sure. It could simply be that you set (or neglected to set) the default properties of your form's window to enable these buttons. More than likely, the APPS procedure is setting them. It's been too long since I worked with the EBS to say for certain. You really should ask your question in the General EBS Discussion forum.
    Craig B-)
    If someone's response is helpful or correct, please mark it accordingly.

  • Oracle giving Block corruption errors when using CDC for sending the data to SQL Server 2012

    Hello Friends,
    We are facing an error while using CDC with Oracle. It is a "Block corruption" error, which indicates some level of data corruption. We ran the RMAN validate command to scan the database for corruption but it returned no errors; however, the
    Alert Log in Oracle still comes up with the following error. Has anyone experienced this error when using Oracle Standard Edition and SQL 2012?
    Trace file e:\app\pulse-ad\diag\rdbms\orcl\orcl\trace\orcl_ora_5992.trc
    Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU                 : 4 - type 8664, 4 Physical Cores
    Process Affinity    : 0x0000000000000000
    Memory (Avail/Total): Ph:6782M/24575M, Ph+PgF:12203M/30844M
    Instance name: orcl
    Redo thread mounted by this instance: 1
    Oracle process number: 151
    Windows thread id: 5992, image: ORACLE.EXE (SHAD)
    *** 2013-12-12 03:04:33.655
    *** SESSION ID:(1281.3832) 2013-12-12 03:04:33.655
    *** CLIENT ID:() 2013-12-12 03:04:33.655
    *** SERVICE NAME:(orcl) 2013-12-12 03:04:33.655
    *** MODULE NAME:(xdbcdcsvc.exe) 2013-12-12 03:04:33.655
    *** ACTION NAME:() 2013-12-12 03:04:33.655
    Lost-write detected for sequence 70856. The lost-write starts occurring in block 11193. The current block being validating is 12930.
    Block dump of the first lost-write block:
    Flag: 0x1 Format: 0x22 Block: 0x00002bb9 Seq: 0x000114bf Beg: 0x94 Cks:0x68ee
    Dump of memory from 0x0000000598D06C00 to 0x0000000598D06E00
    598D06C00 00002201 00002BB9 000114BF 68EE8094  [."...+.........h]
    598D06C10 00085BF1 0023BDA1 000DE19C 000DE19C  [.[....#.........]
    598D06C20 0000000C 00000000 2209160A 5BF10000  [..........."...[]
    598D06C30 3EB10502 00C0F5CA 0031BDA1 00010205  [...>......1.....]
    598D06C40 02B22C6A 038A6D69 00000001 00000000  [j,..im..........]
    598D06C50 4D554407 30373230 35BB0206 001100AE  [.DUM0270...5....]
    598D06C60 0001040A 000D000E 038A6D69 56B25735  [........im..5W.V]
    598D06C70 729C0003 E19C0001 000C0006 000D0006  [...r............]
    598D06C80 02BB0502 00C0F5CD 0023BDA1 000A0002  [..........#.....]
    598D06C90 00C00013 000000D0 00030201 56B25736  [............6W.V]
    598D06CA0 03890001 00000000 00000000 002E0105  [................]
    598D06CB0 FFFF0003 00C0F5CD 56B25736 3EB10003  [........6W.V...>]
    598D06CC0 FFFF0024 0014000C 000C0018 00120014  [$...............]
    598D06CD0 09CC0058 E75B0022 0009000F 00085BF1  [X...".[......[..]
    598D06CE0 0024BDA1 000DE19D 000DE19D 0000000C  [..$.............]
    598D06CF0 00000000 2309160A 5BF10000 3EB10502  [.......#...[...>]
    598D06D00 00C0F5CD 0020BDA1 00010205 02B22C72  [...... .....r,..]
    598D06D10 03900974 00000019 00000000 3030300A  [t............000]
    598D06D20 33303030 06323132 AE35BB02 0B441100  [0003212...5...D.]
    598D06D30 0001040A 000D000E 03900974 56B25736  [........t...6W.V]
    598D06D40 729C0003 E19D0011 000C0006 000D0006  [...r............]
    598D06D50 02BB0502 00C0F5CD 0024BDA1 00EA0002  [..........$.....]
    598D06D60 00270016 000001FC 00032C01 56B25736  [..'......,..6W.V]
    598D06D70 00000001 00000000 30393007 002E0105  [.........090....]
    598D06D80 FFFF0003 00C0F5CD 56B25736 00000003  [........6W.V....]
    598D06D90 FFFF0025 00140052 000C0018 00070035  [%...R.......5...]
    598D06DA0 0003000A 00070003 0001001D 00030001  [................]
    598D06DB0 00010001 00010001 00010001 00010001  [................]
    598D06DC0 00010001 00010001 00010001 00010001  [................]
    598D06DD0 00010001 00000001 00010001 00010001  [................]
    598D06DE0 00010001 00000014 09720174 00000022  [........t.r."...]
    598D06DF0 0009000F 00085BF1 0025BDA1 000DE19A  [.....[....%.....]
    Block dump of the current block being validating:
    Flag: 0x1 Format: 0x22 Block: 0x00003282 Seq: 0x000114c8 Beg: 0x0 Cks:0x312a
    Dump of memory from 0x0000000598DDFE00 to 0x0000000598DE0000
    598DDFE00 00002201 00003282 000114C8 312A8000  [."...2........*1]
    598DDFE10 50424703 31303607 34353335 69745319  [.GBP.6015354.Sti]
    598DDFE20 6E696C72 72502067 6375646F 4C207374  [rling Products L]
    598DDFE30 4E206474 C3025650 0380013D 0457454E  [td NPV..=...NEW.]
    598DDFE40 4E1E09C2 1E09C204 10C2024E 1E09C204  [...N....N.......]
    598DDFE50 09C2044E C2024E1E 03C30510 021B0929  [N....N......)...]
    598DDFE60 C3053DC3 0F192602 2602C305 C3050F19  [.=...&.....&....]
    598DDFE70 0C1A6203 5102C105 C2041F4E 044E1E09  [.b.....QN.....N.]
    598DDFE80 4E1E09C2 0410C202 4E1E09C2 1E09C204  [...N.......N....]
    598DDFE90 10C2024E 2903C305 78071B09 011D0B71  [N......)...xq...]
    598DDFEA0 BF020101 1FBF0215 4E018001 53014E01  [...........N.N.S]
    598DDFEB0 0723002C 0B0C7178 0A3C3C18 30303030  [,.#.xq...<<.0000]
    598DDFEC0 33373030 4D033337 47034255 36075042  [007373.MUB.GBP.6]
    598DDFED0 38333936 4E113331 2065776B 74616C50  [693813.Nkwe Plat]
    598DDFEE0 6D756E69 56504E20 0B0AC303 4E038001  [inum NPV.......N]
    598DDFEF0 C2045745 0459512E 59512EC2 5253C203  [EW...QY...QY..SR]
    598DDFF00 512EC204 2EC20459 C2035951 C3055253  [...QY...QY..SR..]
    598DDFF10 1B092903 0B0AC303 3C04C305 C3053239  [.).........<92..]
    598DDFF20 32393C04 4F08C305 C105114F 1F4E5102  [.<92...OO....QN.]
    598DDFF30 512EC204 2EC20459 C2035951 C2045253  [...QY...QY..SR..]
    598DDFF40 0459512E 59512EC2 5253C203 2903C305  [.QY...QY..SR...)]
    598DDFF50 78071B09 01190A71 C0030101 C0034709  [...xq........G..]
    598DDFF60 8001330A 4E014E01 002C5301 71780723  [.3...N.N.S,.#.xq]
    598DDFF70 3C180B0C 30300A3C 30303030 33373337  [...<<.0000007373]
    598DDFF80 42554D03 50424703 31304207 344C5131  [.MUB.GBP.B011QL4]
    598DDFF90 6F725020 63657073 614A2074 206E6170  [ Prospect Japan ]
    598DDFFA0 646E7546 64724F20 44535520 30302E30  [Fund Ord USD0.00]
    598DDFFB0 04C30331 03800133 0557454E 5B1603C3  [1...3...NEW....[]
    598DDFFC0 03C30521 04215B16 1F4004C3 1603C305  [!....[!...@.....]
    598DDFFD0 C305215B 215B1603 4004C304 03C3051F  [[!....[!...@....]
    598DDFFE0 031B0929 043304C3 4D245AC2 245AC204  [).....3..Z$M..Z$]
    598DDFFF0 02C3054D 040A1A18 494002C1 1603C305  [M.........@I....]
    *** 2013-12-12 03:05:07.984
    ** LOGMINER WARNING - Invalidated 6 LCRs **
    Complete dump of first invalid START LCR follows:
    ++  LCR Dump Begin: 0x0000000532C004E0 - CANNOT_SUPPORT
         op: 255, Original op: 3, baseobjn: 0, objn: 233316, objv: 1
         DF: 0x00000002, DF2: 0x00000000, MF: 0x00000000, MF2: 0x00000000
         PF: 0x40000001, PF2: 0x00002000
         MergeFlag: 0x00, FilterFlag: 0x00
         Id: 0, iotPrimaryKeyCount: 3, numChgRec: 4
         NumCrSpilled: 0
         RedoThread#: 1, rba: 0x0114c8.0001c6ce.00d4
         scn: 0x0003.56b593be, xid: 0x0008.00c.00100d85, pxid: 0x0008.00c.00100d85
         ncol: 0newcount: 0, oldcount: 0
         LUBA: 0x3.c109c0.c.15.38f64
    Thanks
    Dee

    Hi Dee,
    Thank you for your question.
    I am trying to involve someone more familiar with this topic for a further look at this issue. Sometime delay might be expected from the job transferring. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Mike Yin
    TechNet Community Support

  • Error while installing, Oracle 12C

    Hi Guys,
    I am getting error while installing the oracle 12c software
    It gives an error like:
    [INS-30131] Initial setup required for the execution of installer validations failed.
    Please help
    Thanks
    Krishna

         Hi Srini,
    I am working on Windows 7 Professional.
    Location: C:\Users\Lenovo\.oracle\logs
    File name: oraInstall2013-12-10_11-56-33PM
    Zero byte file.
    I am the administrator of this PC, so I guess permissions are correct.
    Still I am getting a "permission denied" kind of error.
    Cause - Failed to access the temporary location.
    Action - Ensure that the current user has required permissions to access the temporary location.
    Additional Information: 
    - Framework setup check failed on all the nodes
    - Cause: Cause Of Problem Not Available
    - Action: User Action Not Available
    Summary of the failed nodes
    lenovo-pc
    - Version of exectask could not be retrieved from node "lenovo-pc"
    - Cause: Cause Of Problem Not Available
    - Action: User Action Not Available
    Please help
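    One thing worth trying for the temporary-location failure is pointing TEMP/TMP at a short path the installer can definitely write to and granting access explicitly. A sketch from an elevated command prompt (the C:\temp path and the Lenovo account name are examples inferred from the log path above):
    C:\> mkdir C:\temp
    C:\> icacls C:\temp /grant Lenovo:(OI)(CI)F
    C:\> set TEMP=C:\temp
    C:\> set TMP=C:\temp
    C:\> setup.exe
    Launch the installer from the same command prompt so it inherits the changed TEMP/TMP variables, and check that antivirus software is not blocking exectask.exe in that location.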

  • Invalid Volume Free Block Count Error In Disk Utility

    My Ti-Book has been acting very strange lately so I decided to run the Verify Disk Option in the Disk Utility program. When I ran it, I got a series of messages in red that were:
    Volume Bit Map Needs Minor Repair
    Invalid volume free block count
    Error: The Underlying Task reported failure on exit
    When the verify disk function completed, I was asked to enter my administrator password. When I entered it and clicked on Okay, the Disk Utility program froze up and I had to restart.
    What do the two error messages mean? How do I repair my disk?

    Hi, WTM. The Verify Disk routine in Disk Utility is almost never worth running. It does the same error-detection tasks as the Repair Disk routine, but then it doesn't repair any errors that it finds.
    Start up from your Tiger installer DVD, open Disk Utility, select your hard drive, and run the Repair Disk routine. If you get the same "task reported failure on exit" message, you'll need a stronger directory-repair utility like DiskWarrior, or you'll need to erase your hard drive completely and reinstall everything on it.
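    If you don't have the installer DVD handy, the command-line equivalent for that era of Mac OS X is to boot into single-user mode (hold Command-S at startup) and run the file system check there (a sketch):
    /sbin/fsck -fy
    Repeat the command until it reports that the volume appears to be OK; if it keeps failing, that is the same situation where DiskWarrior or an erase-and-reinstall is needed.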

  • The error: "Oracle Error 904 / Invalid Column name"

    When I try to take an Oracle backup with the exp command, I get the following error:
    The error: "Oracle Error 904 / Invalid Column name"
    Any solution or suggestion?

    Asif, to add another question to the list of things to check. Is the version of the exp utility you are using to perform the export operation the same version as the database? If not, that would explain the error.
    If you are trying to export using a newer version of the export utility that generally will not work. You should switch to using the db version.
    If you are exporting using an older version because the target is an older version, then usually you need to run the catexp script (catexp<n>, where <n> is the version number) for the target version, which Oracle provides in the $ORACLE_HOME/rdbms/admin directory.
    HTH -- Mark D Powell --
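    A quick way to compare the two versions before re-running the export (a sketch):
    $ exp help=y | head -3
    $ sqlplus -s "/ as sysdba" <<EOF
    select banner from v\$version where rownum = 1;
    EOF
    The banner printed by exp shows the client version; if it is newer than the database banner, switch to the exp binary from the database's own ORACLE_HOME (or a matching client), as suggested above.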

  • Database upgrade to 10.2.0.4 error - oracle RAC status showing Invalid

    Hi all,
    I am getting this error during a database upgrade to 10.2.0.4 - "oracle real application clusters status invalid."
    I am getting the above error after running catupgrd
    and checking the status with SELECT COMP_NAME, VERSION, STATUS FROM SYS.DBA_REGISTRY;
    This is not a RAC database and this is the only component which shows INVALID.
    Is it normal? Please advise on the action/steps to take to rectify this ASAP as it is urgent... Thanks & Regards, Deepak Gupta

    Probably the database was 9i and it was upgraded to 10g?
    It can be a normal situation.
    Please check Metalink note 312071.1.
    If the database is not RAC, then simply ignore this status, as the workaround is quite difficult and definitely not worth going for.

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (stored as XMLType with secure file storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory then performance can fall sharply).
    But after migrating to Oracle 12c, I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size) and by applying the technique from the white paper, I can pin the DR$ I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD') the size becomes much bigger and I can't pin the index in memory. Not sure if it is a bug or not.
    What I found as a work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a secure file. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it will be loaded in the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the S table) and an IOT on top of it. This is not documented in the white paper (the white paper was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT also, but putting it in the keep cache is not as important as the token_info column of the DR$ I table. A final note: using SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when doing the optimize:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
     XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a secure file and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to the following:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the lob so that it will be loaded in the keep buffer pool, use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff varchar2(100);
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from  DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
    end;
    /
    Rgds, Pierre

    I have been working a lot on that issue recently, so I can give some more info.
    First, I totally agree with you, I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for them.
    What kind of performance do you have with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. With MongoDB they explicitly say that the index must be in memory. With Elasticsearch, they use JVMs that are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is continuously done.
    I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that in 12.1.0.2 (which was released last week) there is finally a "killer" feature, the in-memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    And for that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is to pin the index in memory, because the trick from R. Ford was not working anymore.
    What worked:
    First set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard block size of 16K.
    Then comes the tricky part: the pre-loading of the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from the file into the PGA!!! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which, sorry for the missing credit).
    I ended up doing the following; event 10949 avoids the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is very much exactly described in Metalink note 1645634.1, but for the case of a non-partitioned index. The work-around given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a work-around, but I did not find it on Metalink.
    Other points of attention with the text index creation (stuff that surprised me at first!):
    - if you use the dbms_pclxutil package, then the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - this, in combination with the fact that if you are on a RAC you won't see any activity on the box, can be very frightening: this is because Oracle can choose to start the workers on the other node.
    I understand much better now how the text indexing works; I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre
