Problem: oad.log file empty.

I have installed iAS 9i (1.0.2.2) on an AIX cluster (4 cards). The installation went successfully and the Discoverer processes started, but I can't log into Discoverer Viewer. The message is:
"Unable to bind oad"
So I followed all the advice published on Metalink:
1. Nslookup -> OK.
2. Check pref.txt for MachineIP and applypreferences.sh -> OK.
3. Add -a -OAipAddr -a 147.181.239.43 to registersession.sh, registerlocator.sh, registerpreferences.sh -> OK.
4. Add -OAipAddr 147.181.239.43 to startlocator.sh -> OK.
5. Add -OAipAddr 147.181.239.43 to startoad.sh or -host 147.181.239.43 to oadutil -> Failed.
The oad starts, but it seems to be in a different subnet (the locator cannot contact it, and neither can registersession). I can't verify that, because oad.log is empty (I have added the -v option, of course).
I have tried restarting all the services, deleting the logs, and so on, several times.
What is the problem? Thanks in advance for any help.
regards
Krzysztof

Could you please repost this question over on the Flash Professional forums? This forum is primarily for end users; the Pro forums will get you in touch with a wider developer audience.
Thanks,
Chris

Similar Messages

  • Teradata fast load log file empty

    Hi all,
    After updating ODI 11g, the Teradata FastLoad script is not running. The error says to see the log file, but the log is empty.
    Any solution?
    Naseer

    Any solution, please?

  • Another Install Problem (With Log Files)

    Hey there.
    I've read many of the install problem threads and have tried numerous things to get this working, but to no avail. This is getting VERY frustrating... :-E
    The machine is a Dell Latitude with 1 GB of memory, running XP Pro SP2.
    My login ID is dgault.
    I've set my temp directories (TEMP and TMP) both to point to c:\temp.
    Here are my log files:
    ====================================
    XE.bat.log -- START
    ====================================
    Instance created.
    ====================================
    XE.bat.log -- END
    ====================================
    ====================================
    CloneRmanRestore.log -- START
    ====================================
    SQL> startup nomount pfile="C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\init.ora";
    ORA-24324: service handle not initialized
    ORA-24323: value not allowed
    ORA-28547: connection to server failed, probable Oracle Net admin error
    SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\rmanRestoreDatafiles.sql;
    SQL> set echo off;
    ERROR:
    ORA-03114: not connected to ORACLE
    ERROR:
    ORA-03114: not connected to ORACLE
    ERROR:
    ORA-03114: not connected to ORACLE
    ERROR:
    ORA-28547: connection to server failed, probable Oracle Net admin error
    SQL> spool C:\oraclexe\app\oracle\product\10.2.0\server\config\log\cloneDBCreation.log
    ====================================
    CloneRmanRestore.log -- END
    ====================================
    ====================================
    CloneDBCreation.log -- START
    ====================================
    SQL> Create controlfile reuse set database "XE"
    2 MAXINSTANCES 8
    3 MAXLOGHISTORY 1
    4 MAXLOGFILES 16
    5 MAXLOGMEMBERS 3
    6 MAXDATAFILES 100
    7 Datafile
    8 'C:\oraclexe\oradata\XE\system.dbf',
    9 'C:\oraclexe\oradata\XE\undo.dbf',
    10 'C:\oraclexe\oradata\XE\sysaux.dbf',
    11 'C:\oraclexe\oradata\XE\users.dbf'
    12 LOGFILE GROUP 1 ('C:\oraclexe\oradata\XE\log1.dbf') SIZE 51200K,
    13 GROUP 2 ('C:\oraclexe\oradata\XE\log2.dbf') SIZE 51200K,
    14 GROUP 3 ('C:\oraclexe\oradata\XE\log3.dbf') SIZE 51200K RESETLOGS;
    SP2-0640: Not connected
    SQL> exec dbms_backup_restore.zerodbid(0);
    SP2-0640: Not connected
    SP2-0641: "EXECUTE" requires connection to server
    SQL> shutdown immediate;
    ORA-24324: service handle not initialized
    ORA-24323: value not allowed
    ORA-28547: connection to server failed, probable Oracle Net admin error
    SQL> startup nomount pfile="C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\initXETemp.ora";
    ORA-24324: service handle not initialized
    ORA-01041: internal error. hostdef extension doesn't exist
    SQL> Create controlfile reuse set database "XE"
    2 MAXINSTANCES 8
    3 MAXLOGHISTORY 1
    4 MAXLOGFILES 16
    5 MAXLOGMEMBERS 3
    6 MAXDATAFILES 100
    7 Datafile
    8 'C:\oraclexe\oradata\XE\system.dbf',
    9 'C:\oraclexe\oradata\XE\undo.dbf',
    10 'C:\oraclexe\oradata\XE\sysaux.dbf',
    11 'C:\oraclexe\oradata\XE\users.dbf'
    12 LOGFILE GROUP 1 ('C:\oraclexe\oradata\XE\log1.dbf') SIZE 51200K,
    13 GROUP 2 ('C:\oraclexe\oradata\XE\log2.dbf') SIZE 51200K,
    14 GROUP 3 ('C:\oraclexe\oradata\XE\log3.dbf') SIZE 51200K RESETLOGS;
    SP2-0640: Not connected
    SQL> alter system enable restricted session;
    SP2-0640: Not connected
    SQL> alter database "XE" open resetlogs;
    SP2-0640: Not connected
    SQL> alter database rename global_name to "XE";
    SP2-0640: Not connected
    SQL> ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\oraclexe\oradata\XE\temp.dbf' SIZE 20480K REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED;
    SP2-0640: Not connected
    SQL> select tablespace_name from dba_tablespaces where tablespace_name='USERS';
    SP2-0640: Not connected
    SQL> select sid, program, serial#, username from v$session;
    SP2-0640: Not connected
    SQL> alter user sys identified by "&&sysPassword";
    SP2-0640: Not connected
    SQL> alter user system identified by "&&systemPassword";
    SP2-0640: Not connected
    SQL> alter system disable restricted session;
    SP2-0640: Not connected
    SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\postScripts.sql
    SQL> connect "SYS"/"&&sysPassword" as SYSDBA
    ERROR:
    ORA-28547: connection to server failed, probable Oracle Net admin error
    SQL> set echo on
    SQL> spool C:\oraclexe\app\oracle\product\10.2.0\server\config\log\postScripts.log
    ====================================
    CloneDBCreation.log -- END
    ====================================
    ====================================
    postScripts.log -- START
    ====================================
    SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\rdbms\admin\dbmssml.sql;
    SP2-0310: unable to open file "C:\oraclexe\app\oracle\product\10.2.0\server\rdbms\admin\dbmssml.sql"
    SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\rdbms\admin\dbmsclr.plb;
    SQL> DROP PUBLIC DATABASE LINK DBMS_CLRDBLINK;
    SP2-0640: Not connected
    SQL> CREATE PUBLIC DATABASE LINK DBMS_CLRDBLINK USING 'ORACLR_CONNECTION_DATA';
    SP2-0640: Not connected
    SQL> CREATE OR REPLACE LIBRARY ORACLECLR_LIB wrapped
    2 a000000
    3 1
    4 abcd
    5 abcd
    6 abcd
    7 abcd
    8 abcd
    9 abcd
    10 abcd
    11 abcd
    12 abcd
    13 abcd
    14 abcd
    15 abcd
    16 abcd
    17 abcd
    18 abcd
    19 16
    20 51 8d
    21 LSqVp2u3D6yxyD42bHCkpHL03/8wg04I9Z7AdBjDpSjA9TNSMjO9GP4I9Qm4dCtp6jfnlRLO
    22 EXUFAGLlV0fbBvBjoirfWNdXU3VV0WYkgIWTZhXOjnGHQ2VzowkkIKuoKmprxsHwQ=
    23
    24 /
    SP2-0640: Not connected
    SQL> DROP TYPE DBMS_CLRParamTable;
    SP2-0640: Not connected
    SQL> DROP TYPE DBMS_CLRType;
    SP2-0640: Not connected
    SQL> CREATE OR REPLACE TYPE DBMS_CLRType wrapped
    2 a000000
    3 1
    4 abcd
    5 abcd
    6 abcd
    7 abcd
    8 abcd
    9 abcd
    10 abcd
    11 abcd
    12 abcd
    13 abcd
    14 abcd
    15 abcd
    16 abcd
    17 abcd
    18 abcd
    19 d
    20 4be 207
    21 3WAupYEFJyUtDT58GzFPeWkUS6wwgwKJr0hqynRAv7leuFljpGFIgxvNNkagWXCAOYNjnLy1
    22 ulbIGu/7Jr4I+E4ghHw/fZT2AjJ43oXGRL90ldDxQSra1CPcaBsAtcpUa02tik8fNqx/KMgr
    23 633l8+Va2DhCmvZXp9G7vbOPt7Pl3MM9zMw2e9Y0okY53GpiRO894C9geS1Y7KzzE/IgLaEu
    24 32gKwpBN6M0RCm7BYQ+ovzICzvY5VVyfs/mJp4oYS98qQpcbag5dyZAf0OP/aKDRu8nMxkFb
    25 i/etbPODbix+jSyOsHVw8+Q+m5vbJnoYgrAEVyEgB3LQctJbF95qK2fWuM+PzvFnTTxAGGzD
    26 bbFaBpyXAP09LtZsxHxeICUOFvBRezKHmWrTb5DRlika6Lg9ukf6Rh9Vb+74Kw+dCaqdPNsm
    27 BbgD7N+pj3pEKfdUH3CrGeQtEflPW7LZ5wEdk1k/oTs5nee7t70+LOfUmSdFznr3wK/OVfn4
    28 KShfwfMR
    29
    30 /
    SP2-0640: Not connected
    SQL> CREATE OR REPLACE TYPE BODY DBMS_CLRType wrapped
    2 a000000
    3 1
    4 abcd
    5 abcd
    6 abcd
    7 abcd
    8 abcd
    9 abcd
    10 abcd
    11 abcd
    12 abcd
    13 abcd
    14 abcd
    15 abcd
    16 abcd
    17 abcd
    18 abcd
    19 e
    20 41f 191
    21 WGxKHaEucYlWwCTtmi+GiJKjekYwgwK3ctxqfHQHrbs+zza9qFIBBo/k3vRdV42GdJcBu7Vv
    22 ITu0l2CDDI1d+D9K6+h7yxxZwO9Xtk4x8RFMvTqmcLYXjeAqvfUCO0DbqqDG+0SG03B8N8zU
    23 x3CB7ZzBJqbdVlPKP72aumnr8weouKrQT4tmBg3nhDN3+4ve7JkpJVEIEI+T5dJDg3IF2nEb
    24 xv03mcyUhyCvDbOazgEBY+LaQTQ99WwuW3WZw4y5xOakbH7mnBiomlFxUQglR1Hft6tRchhS
    25 tJTSEuprYV4kbm7IcRmC1LRlilvfcjDmMRWJUyC8NDvKu45v5GiDxx268uhVJTkhTBGaNgPz
    26 idKIcZk/6eV4Myw05MkyijGkKIEIpR3Fl0SO
    27
    28 /
    SP2-0640: Not connected
    SQL> CREATE OR REPLACE TYPE DBMS_CLRParamTable AS TABLE OF DBMS_CLRType;
    2 /
    SP2-0640: Not connected
    SQL> CREATE OR REPLACE PACKAGE DBMS_CLR wrapped
    2 a000000
    3 1
    4 abcd
    5 abcd
    6 abcd
    7 abcd
    8 abcd
    9 abcd
    10 abcd
    11 abcd
    12 abcd
    13 abcd
    14 abcd
    15 abcd
    16 abcd
    17 abcd
    18 abcd
    19 9
    20 3162 65e
    21 igQsRO8he8CDCdDl4nWpC6D62Xcwgz0T2UgFey9AmP9euDHhTNtIIypFDhpSVolmshjyUX7k
    22 SDMhxRY91oYjSjLiIwWaV61R3iM8yqEjBdxa/QqeVR3pZs7ue/BsPqTYpXW8XRTJmbmDO5
    23 y6g6sM26+9djcF+m6Fqq8mC6NyZn6S5/u5YqlKUW6Z0/jFVzc+7lxa51jAi2w83JxUetuepc
    24 Egxc0uEGzxAtwztimeUcybwG552DvNxfbRYPmlZcF9ms5bun8tEOU37kSxAxwg78sGNmXyJg
    25 Jp+fefVhVk3C9oZaBEqX7v/i8BgyRDcEjUz9lIky1qFGl+LwK6UjnlZNwvaMFeGiVd1F/AUF
    26 mHTk3md05YqDaT+DTqV8W1zC30fR3VfRvaLGYXiY3Q7FSir0QtQzyR8EXCMAYA3EXEaUFpex
    27 HwxcYAocVlx+EIrX0XzluGgiDXiY3Q4l/lmPizTlkrkJ9LGUPSicGFqTaYHrCe0hotIXVND2
    28 F7HUVK9cmOiDrcMQA+iDHp686BzH3ZSlKjFqVM6JTMPDsiJPMkNbw/6M6OgXOuH2yHO9AMlb
    29 OziQdfrmRltzw9EUNffiMMtRhoLdqYs1e2XMMqCVgGctzFg7P2tU+kbANpabiyUIvhhaAu7a
    30 xyvmPVJnmysL4u823iZM2GqZiZCpKW3Qv4NbJpkxn9LDl13NZ651CmCRtTHYpzbEOxcukq0t
    31 lwO08hc0bwA3SconEG/mRIBo82vHgSlwIZu7C4AMzIIYYHFCc85MYN2EANfivUZrD486W1F/
    32 gR3t490htjoHcFdVf1DiPqkXdtb79WooM4LoLHkw8U+qpiF2NYvSl6lJgb7BVdDiI3dux9EI
    33 z61yE26Ss4Fd8U7cZM56fUJJ7aWLcdeAiNbVenhTe3KFBHHuOq+tP/9upKGieCQXcjKNfxCw
    34 +1WK69iQf7XbU9OsMBAoNQ7Bo27SJLPVjEvTtkKuNfMrly1CbKAe9AzUNy5bE5S593CX54xc
    35 Vw68Qij7gam+GE04w25o+7JJ3oiAgi8jYYbYD2zZxIWMz4MmrVq3eE390NbSHyo7jwHegxKK
    36 f3h+yaUTftrGMN6jT2lokTEy1KiyE7MSEwHBtNF5y79IE8xyVuVpIMIMc0DE/TJ0uJ7SOfLE
    37 6SqgfhRxYRnsuAM1/GFNB7fwRPx19omV1+MCt2mBmwWKreim3q4NJgWKrexOr0FoZGET9buf
    38 RaRVyXcxl/K3Xu/C19hkaqBibbH9eQf9JAWUOtDPAvh/ThmIIy15+VGDFNmummh9SXftWiSE
    39 D0vX9JgmaYFFgfMECrWS664SELEFQKBDY2tyhUXo5a0E6EMyi2X4B+aqeJszH5WuDGcKF+d/
    40 7NklyocS0C9rvMWyDj1qV73XI6vfmBdSFS55SOx3O5uzoKk4Vw3sFlLVkwyA3w2fuV/6PcOI
    41 mayz9ZGxGT3tryZDopGviZT6Zd+BJdzRDexA9vz6kHEnKqSxtLQws8Nbtzm7e+9X7kd2yDnN
    42 zdju2xPRoVlXR/M41DFx8QRY5B1OfryhhCITa25oua0+Yrt8bQJCmke63jDNWP+92nHIEU+e
    43 eWu1mrm9oOz5JJXuag+ENbhu
    44
    45 /
    SP2-0640: Not connected
    SQL> show errors
    SP2-0640: Not connected
    SP2-0641: "SHOW ERRORS" requires connection to server
    SQL> CREATE OR REPLACE PACKAGE BODY DBMS_CLR wrapped
    2 a000000
    3 1
    4 abcd
    5 abcd
    6 abcd
    7 abcd
    8 abcd
    9 abcd
    10 abcd
    11 abcd
    12 abcd
    13 abcd
    14 abcd
    15 abcd
    16 abcd
    17 abcd
    18 abcd
    19 b
    20 933d 1c32
    21 LjzBBzQRtLt3jlDfh/c2/PSd1T8wg1VMr0iGl8DXM4HqbvrJkWfzixk0XWxmoBbxAb73ueCM
    22 RRbLF4Q2NZ+TRL3Ilc/PFpNhoqGGvhwPEl1/yYy50S2Sbuvp5ZgYt02SeKOCl+i5zJx/KFxp
    23 aZ/LBLWh73oUCRg8SdRqDz1a39OEKQKgLDQEZJMtce5ef+zwT5ZUAAEz+DyK3yH1r6W9A6po
    24 7D0uukDHeE98+B48WYNUwiLGik+f6u8SGxS1NCqCLEJ2L+t3M70DnS5Hitkt7rbJtWV/mbaY
    25 SUf5MnL9HkDmkEmHIjgzBbALmCL5OJiaYZ89pClOS+R5SYmyKWzrsIqf8r3w2E9C7RImcZ/S
    26 PpiQK13CjK4xzdtRdwDHc+QzxAc6TEsQl0hJnMUhQ4JSOrEScdGrIg3/vyM+IHMCRPgaVdyW
    27 QwNz5BCwH3l7DyS7I9rtz0o42vmIMPki/JV51sHtvfA3KX/YHCrw73K6F3iVIvxALReJLslq
    28 D2EfaNl9/jEPJM3UfluFv4B9udP9PIr9vlcV2XlOnFshHFvkM/i7mPMqWyxzU8ItLAPNQXOf
    29 A4H5hrHQlWGBGTicoCZTSI2zFvC3BnJxDdSCxCqMbq2nax8YekAYxpnXgFXwEMHX983iJnIF
    30 Ts5j/DsoNO5LzewGJJpMeW6xn6Ne2e99xjPoDZmlcmt+O5e/QFVwJD6lwfP9a0v4ds8mjJb+
    31 TsGz4AS6uQe5G5v+16q1EoEPtde1/k+1CJ21Tk+qqpq2WjzNMzO6zSKfGblhBsIIE7+ymAqb
    32 MI16BXhySREcqDBfg70JTltZSlJ1cGVlgN8YkeGv19z6B50dxsR+PZCbg8GzKuIseoOH4GHG
    33 7m409J5hUCL1Vd3BVQAUxMTEvJs0EDBpnYiE2+zEFupuYf95bFiJPPfLee+BGcmafJCGLD/4
    34 0tCd8E4WgA8BmMWC0GgEn+5JSeJhv+LJ/IM73/OOFbgktiRFUFUIKzGQXww4iT+5ToDIdyhu
    35 KNqYlEroIub+fYYzRYZ4hc58Kl8oKCFo380RfgvrSpFsTzq665o/s1fOvdttC8nl2uL5zX+j
    36 185OV4CGkhWj+1w8JQJcoLCMpHOhJrOzIxHTmh6G0MhSs7gMlSS167uqAIsVmgaznSgKW6rc
    37 cL7OeQVtIMwIxBIw6OtBZtN10ktKYbeY/o9XopbUaXifH/4P3w0WGyUsHblz0zGydaQrKbm5
    38 uPuL+L7kLd3CHT9fH2jwpJiWzwQyJDsIOVO/EGURdMaGsPq0MYyuTsYzlfeGgDuMxcSGZmNd
    39 Ae2Z5FdOIy3wMgkfsM0Dhn/EhwNVilWtwCOZ/I7E4CNytJpiHP2fSz/VyH740Zp4YQCaUzJ+
    40 mLzH/rRqJPREB7oGsJCfsFkiwbz5TZIkBNqwCMC/KbYppPMw5P3NIUaGXUrk1sTQ7uT5UsAK
    41 V9C/11OnxpR4TLP4lBLyOrTPBfINmWUokO9/KHkkofP+XnoQR5jAkHqojfq7m09jiZAHEpGA
    42 ePrJmr0Whow8Un6YMdwLLGA/WTKAFYNg/oLuzTOo4vIj2tCHXjDvPmQEUdzfnxlkm5+2Qvcz
    43 G4NFjoG5vwPi8hD+0e2x+IYpM4/4XJpzWYcUnSZF0Sm7P7rSe9K/u9kymbsmSQO3pIv+CjT4
    44 WDRaQl5MTAkZQXceyBnWs5iUmjE8Tvhcmj/FlvGa9FPRYLwK0w40KEQi83M/qESXT6g1Oh2r
    45 NBxzeWIZCtI/lDHtVMCaskLqjrsZA49dnL31ltDAmrJSaz5kFNvwQTQFL3itnqrGqEhuxtnm
    46 aPdu0QdTCrMTNBev+mRRV0ItXV0S7AVDxHH6bxk8jf7lvrd6a/4KvlWihxq+9BrRJ7knFXE6
    47 SoxxOm02vptjf+Lk3OMF6K+HB2hhQTQFA73CD4aR4G7G2f9sSl31oUgFRzweyAU8t/7FxN75
    48 TviNBZ8clvEFLW68bHhjuRiOeCNOQVx4+vKqmhX9sJvgzaTeHvHknzr8sai8n8HZEo0ZoQa1
    49 +JQZSGaW8VWiXpyiFygqhLGoNIC/GQozijQGHnP8u1JlliWPWNtBd2sQvt9suZ4hYSwIY/M/
    50 /hV64rLkRBreD/l2Uhz1/hp6ao38giE9YUoGnMzezpWRq/lWkECwAiMWi+3LWCLO1uwjVAMN
    51 9l1VIpOHxY0/sYiB+DEaHxs8T1q5PjgzCJdGMYIpK1gt939KvMc4HLEGnao4Mwiu84s1wJxG
    52 vpb+vMtcuZBCZGV61ZCqnatorkPp4Xr3PKHege67z9V9o5+omgg5XZbCOs4l8MYp6Ib3dzyG
    53 gkO8Gkhf980Qc825jJzsJIZCjfeaVg8/FodBp9EsJo+4+qSHPaB1cxowCKVibcY8kFidAB63
    54 30Z58Dqw788cxVnmtKsAibcse8sPUhZ4aEp7RApXNZtNWsHG3XriYSNiVnL2URnL/6GU6xyz
    55 XlDcNQB3VXME6ICBt2REKZPwhgWoI3GNU1vSNkteetD8QkG9fVKhPPY1Qod4gZ9U3MWQM3BB
    56 UTIYi4tNV49YuEgb6RxkRH2LNNOGzS9VWfJJM8hBNZ/oUB+pxSDW5eTDVENm4ptMcKqOdztV
    57 HgY6Tkt6xgjaBuQ4AbwiGJu2bEI10JrzhoTsg8eVznXgzifgeqE2z4R/HAn+HNtXNSlxXyTn
    58 UTQiGJtOcInHdkPeyiihRXIQhXpdVJ1vyBdYUCBbXVK5mxyFthr/qeQ1Nadk4sabsPotel4L
    59 OhoELILFT/TuqP0zPT/aQV3YvO6WxoSnKWq71L3ysAQi6L0itmqEGMH2ODDs4zqfBBxj3Ll/
    60 blH1vWoH8LAsNwhSBaUqa3oxjxK6ISgFwICp3MraldLIR4FZotC3CIeZcgOJvsSETlf6edBD
    61 vcOwWoMUYilYEYMhaooNpg0MQnAgW+WQkUjNN+2paHivVUlW5Hw0nCXoh6TN3jyFrt34f9eG
    62 jggLV530Qs5eZ511mdL8UAdPShDOG87uPtKuJcpB9HNevdFkMBbAqLJDLnJg6PTvB+/xghSd
    63 AjP5frWAs7zIDQiDEa7H1RkczcZ+47ag0Pd66fjOjvhYaa4J84eZBZm9HSBbitLjqtD8iOCV
    64 ldaSzV6X0ADKnZIDCK4S0SISGyIQHEE0zPjueoGpaEi0rcD+ZOsZ8E3tmwD7+Qa1HUsy7xmd
    65 65LTHSTh+DEMYa7cGrA/19BMMGc6MMCIJbTLLn4PG6plCvOS6O0HQ93d6fGn+LX1W5z/2CxD
    66 wlv5dWHWX0qHuxDlO/j5Zx8Ziu2qZP6zBTBJ2ByQKT8TtPg16tQeOinOKswSRh79S9oQwX1G
    67 j2qITsQ6VfC+ZSNy3Pxk9FUdTSBnuV0y1LZI0Eo0lsgSmhIBoEEXsnG2ZICpvPst6/4N3HVV
    68 dqQvDw6fTs4sYXGUvhNOjDP24P4Ed3gOv4IQ/UP1Qz8HcL4JQEOXPqd4i1RBZjo+rMQQ6tTN
    69 Kk6Sp24/ErMivuBkyMy+/GOS6B7SBW3S7qn+JWak+OJ590Fu8A89ZhCpm2JvKbMKA9xvKbZG
    70 l5RlxbFZJjRssJsuCSgmVpw/20jaZF93A1kO9maBqYv9yHtCJgaJd0lvJ0IQHqA0BgGjvO7F
    71 Yp0NWizrz9Glvs2YYXNqt3QmCoMAz6mbYjPLKDqjXiIsXkrRpb7NmGUirgMN4vRygBaaqXKG
    72 sbmQCDq4FU6y8mt31+6mFAlFq6MyI+anWj48h75lqrJHxTL0iWan1RQJGP2eYh/LcCYIsLcK
    73 d2wJGALHoRMYHiuIWM3IAirHptM+lbICp+4s8SWLuKbTPpWD1eqL/TcfiYda+K9tCOwyuaZU
    74 T1cJ8oc8pawlmd4kMH+HAxndF1vnv1xpHraM0Qsc5Q48SdFx+vaWyy+55Q48SdTARO7LMohO
    75 aUQNIghZE0jsladaPjyHvmWahXY8SUqJ8ZyBLu6mqm2i8lKEawHOdN50JUCm8av0ieDNjdVO
    76 8I9qni729IlmikqV+6m46kQNIghm9wmJ3zZW6s7DV6YvueUOPLCGenhW6loajniOo9F23Qlq
    77 mm/LWhqO8Jfdmfl+VjLuqaLys010egwajvCX3W1nMUhLcTVTCqyjT1O4ViplE9+QZLY+lRW+
    78 Gi6VMgpfz+zh38em0z6VsgKn7kSL7xeMdYu4ptM+lYPV6ov9Nx+Jh1r4r20I7DK5plRPVwny
    79 hzylrCWZ3iQwf4cDGd0XW+e/XGketozRCxzlDjxJ0XH69pbLL7nlDjxJ1MBE7ssyiE5pRA0i
    80 CEfPXX4h100MGo72ZHFYIJHLHraA8vrZkui4qZ3pZmNTa1Blz0AhOd6EpqHwovItM5eNOmCl
    81 OALoDaLy85/GBqZnUwqso08w91o+8v3hp9XPxXam579caR62lkeA/2B/bAkYVOLIWj5IFqBx
    82 V6ljQJ1idshaPquNNx/kMVT8Ffv0iVHMiDCnWj6rjTcfN8JVcL6suOpEDSJS7lQnkd9O9wPM
    83 WfSJ4O5xJegGRVxgyYIsrf2VB0cDWB3IWj5IFqB0lnwsrf2V65HtrXapovItM5eNOmCPRGMf
    84 JT+l4DTJkNqmPElKifGcKjQO0uaDKIdfsdh4pTH7FUyUd5LbTdl1B8VNgfSZ6puDbR1GmMVN
    85 gfQtcTVThEzUesM1O656giyt/ZUh13wMGo72ZHFYIJHLHraA8vrZkui4qZ3pZmNTa1Blz0Ah
    86 Od6EpqHwovItM5eNOmClOALoDaLy85/GBqZnUwqso08w91o+8v3hp9XPxXam579caR62lkeA
    87 /2B/MZyi8vPZlNM+lbICp5lbS3elxAId0z7pjGWlrCT9DExXyvcJIcUdctM+6YxlpWZpYf1W
    88 6i+55Q48SdF2DPBHySLA3xj6vBQJGALHoRMYHiuIWNfPXX6OXkuW9IkGR/taGo54jqNAH3XQ
    89 bvDg5sHNy2smFt3Pkc/eRsG4ptM+lYPV6ov9Nx+Jh1r4r20I7DK5plRPVwnyhzylrCWZ3iQw
    90 f4cDGd0XW+e/XGketozRCxzlDjxJ0XH69pbLL7nlDjxJ1MBE7ssyiE5pRA0iCEfPXX6OXkuW
    91 hGZjERAGrdM+lYPV6ov9WGqS2/cJm8K8PnaPIPdaPkgWoHRPqPA1gy9nCe/3CdjtwAG8889U
    92 a95i9wlGVS1Xpi+55Q48SYcUCRglQKlDnk+eW6YVTJR3ktuZTvcDzFmvvBQJGFTiyFo+SBag
    93 cVepY0CdYnbIWj6rjTcf5DFU/BX79IlRzIgwp1o+q403HzfCVXC+rLjqRA0iUu5UJ5HfTvcD
    94 zFmvvBQJGALHoRMYHiuIWNdtpLBD13apovJShGsBMDcfC1Vp3RT0UstbyvcJ2O3AAbzzMDUp
    95 AqAsNARkk8HLqYH072NdAbKWFu3IM9Yfhioess+fqbjqRA0if/xli2ueJxAGFY00Mwam7uKe
    96 JxDgVuovuWmcfyhcaR62cjQEHkDmkAei8rOS28cUCRgN6uJHeGUHgfTvY10BEF5X3Zpm9wlo
    97 aRGIIoSr1qCoqRpgZmNUEkOXz+/OS+bro2Zj55gLUqbKaFw1O640WhqOY7L72ZQilMumRpjF
    98 TYEpVpbT2ZTTPpWyAqeZW0t3pcQCHdM+6Yxlpawk/QxMV8r3CSHFHXLTPumMZaVmaWH9Vuov
    99 ueUOPEnRdgzwR4uVVHiO/G5aPm//bHnF4roBkd+W1EVrUMai8lKLphQJGP2eYmqZNAQeQNzT
    100 xcG4ptM+lYPV6ov9Nx+Jh1r4r20I7DK5plRPVwnyhzylrCWZ3iQwf4cDGd0XW+e/XGketozR
    101 CxzlDjxJ0XH69pbLL7nlDjxJ1MBE7ssyiE5pRA0iCEdtLUgWWYJ8tj6Vg9Xqi/1YapLb9wmb
    102 wrw+do8g91o+SBagdE+o8DWDL2cJ7/cJ2O3AAbzzz1Rr3mL3CUZVLVemL7nlDjxJhxQJGCVA
    103 qUOeT55bphVMlHeS25mW1EVrUMai8vPZlNM+lbICp5lbS3elxAId0z7pjGWlrCT9DExXyvcJ
    104 IcUdctM+6YxlpWZpYf1W6i+55Q48SdF2DPBHi5VUeI6SdVo+b/9secXiugFd3quGR/taGo54
    105 jqNAqlm5bmr7WhqO9mRxWCA2h2kjqM1bltT2PL4dcBAGWj5Ck2qZ8pYac4+VMNcjHU+kymhc
    106 NTuuX0VfQk2so09TuGxk7VdnUwqso08E2ZRG2ncBR4SUd5LbA3J/VE8wWhqOO656gGZjVBJD
    107 l8/vx1RPVwnyhzx4Ah3kYtM+lbICp+6CGn9a/eFaJxQJGA3q4kd4rCjVITtsCRhL2m9bphVM
    108 lHeS222i8syDL0tx7l7upmdTCqyjT+g6UVTiyFo+SBagcVepY0CdYnbIWj6rjTcf5DFU/BX7
    109 9IlRzIgwp1o+q403HzfCVXC+rLjqRA0iUu5UJ5Fhcn9UT1f3CVZux/+twW7H0d5eUlXF7wM+
    110 MvSJBkf7WhqOeI6jQB8WMsdXpvSJ4M2N1U7w3Znylhpzj5Uw13u/ptiudj5slvYwNSkCoCw0
    111 BGSTII8DHUaYxU2B9MycfyhcaR62H+JdDR9b579caR62C76suAdFX0JNrKNPMOh3svSJUYH0
    112 wVoajvZkcVggkcsetoDy+tmS6LipnelmY1NrUGXPQCE53oSmofCi8i0zl406YKU4AugNovLz
    113 n8YGpmdTCqyjTzD3Wj7y/eGn1c/Fdqbnv1xpHraWiUWi8vPZlNM+lbICp5lbS3elxAId0z7p
    114 jGWlrCT9DExXyvcJIcUdctM+6YxlpWZpYf1W6i+55Q48SdF2DPBH6HdsCRgCx/++8bGHdsit
    115 uAG0aRGIh9g3RnrDvxpYsqamaCe29/g0yMgH7F/yl3oUCRh+2EL4KxYBFAkYftgK0NObaVX3
    116 CQHYWM0L7qlHN9VqQObZpeCyFF015dolRzmm3Tvf875ymb1CTfpNGkeEZmOF1OyzVS1l0z7p
    117 3zw9gL1CyPxK9U25WiZlPEczLbhni92NOIRrATAtOdObDjKITks417zzaCe29/g0yMgH7F/y
    118 l3oUCRh+2IhVpNXv9wmRC0wQichbO9l1B7L2M/DPVm4dFF04+IfRVurIyFs72XUHsvYz8I8e
    119 6nd2z9LUGGQPqKgoPaTeyR28P+nXr4Ag2M6SlNObyj2k3snVbgsmbZ34qj5s6s0=
    120
    121 /
    SP2-0640: Not connected
    SQL> show errors
    SP2-0640: Not connected
    SP2-0641: "SHOW ERRORS" requires connection to server
    SQL> CREATE OR REPLACE PUBLIC SYNONYM DBMS_CLR FOR DBMS_CLR;
    SP2-0640: Not connected
    SQL> DECLARE
    2 ORCL_HOME_DIR VARCHAR2(1024);
    3 BEGIN
    4 DBMS_SYSTEM.GET_ENV('ORACLE_HOME', ORCL_HOME_DIR);
    5 EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY ORACLECLRDIR AS ''' || ORCL_HOME_DIR || '\bin\clr''';
    6 END;
    7 /
    SP2-0640: Not connected
    SQL> show errors
    SP2-0640: Not connected
    SP2-0641: "SHOW ERRORS" requires connection to server
    SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\rdbms\admin\patch\patch_4659228.sql;
    SQL> set echo off
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    ...wwv_flow_help
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0641: "SHOW ERRORS" requires connection to server
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0640: Not connected
    timing for: Load Start
    Elapsed: 00:00:00.00
    SP2-0640: Not connected
    SP2-0640: Not connected
    SP2-0641: "EXECUTE" requires connection to server
    SP2-0640: Not connected
    ERROR:
    ORA-28547: connection to server failed, probable Oracle Net admin error
    SP2-0640: Not connected
    SP2-0641: "EXECUTE" requires connection to server
    SP2-0640: Not connected
    ====================================
    postScripts.log -- END
    ====================================
    ====================================
    PostDBCreation.log -- START
    ====================================
    SQL> connect "SYS"/"&&sysPassword" as SYSDBA
    ERROR:
    ORA-28547: connection to server failed, probable Oracle Net admin error
    SQL> set echo on
    SQL> //create or replace directory DB_BACKUPS as 'C:\oraclexe\app\oracle\flash_recovery_area';
    SP2-0640: Not connected
    SQL> begin
    2      dbms_xdb.sethttpport('8080');
    3      dbms_xdb.setftpport('0');
    4 end;
    5 /
    SP2-0640: Not connected
    SQL> create spfile='C:\oraclexe\app\oracle\product\10.2.0\server\dbs/spfileXE.ora' FROM pfile='C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\init.ora';
    SP2-0640: Not connected
    SQL> shutdown immediate;
    ORA-24324: service handle not initialized
    ORA-24323: value not allowed
    ORA-28547: connection to server failed, probable Oracle Net admin error
    SQL> connect "SYS"/"&&sysPassword" as SYSDBA
    ERROR:
    ORA-28547: connection to server failed, probable Oracle Net admin error
    SQL> startup ;
    ORA-24324: service handle not initialized
    ORA-01041: internal error. hostdef extension doesn't exist
    SQL> select 'utl_recomp_begin: ' || to_char(sysdate, 'HH:MI:SS') from dual;
    SP2-0640: Not connected
    SQL> execute utl_recomp.recomp_serial();
    SP2-0640: Not connected
    SP2-0641: "EXECUTE" requires connection to server
    SQL> select 'utl_recomp_end: ' || to_char(sysdate, 'HH:MI:SS') from dual;
    SP2-0640: Not connected
    SQL> alter user hr password expire account lock;
    SP2-0640: Not connected
    SQL> alter user ctxsys password expire account lock;
    SP2-0640: Not connected
    SQL> alter user outln password expire account lock;
    SP2-0640: Not connected
    SQL> spool off;
    ====================================
    PostDBCreation.log -- END
    ====================================

    There were no CORE*.LOG files, so here are the other two.
    ============================
    alert_xe.log START
    ============================
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
    Wed Nov 09 10:52:59 2005
    ORACLE V10.2.0.1.0 - Beta vsnsta=1
    vsnsql=14 vsnxtr=3
    Windows XP Version V5.1 Service Pack 2
    CPU : 1 - type 586
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:596M/1023M, Ph+PgF:2167M/2459M, VA:1936M/2047M
    Wed Nov 09 10:52:59 2005
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Shared memory segment for instance monitoring created
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_1 parameter default value as C:\oraclexe\app\oracle\product\10.2.0\server\RDBMS
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =10
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    sessions = 49
    sga_target = 285212672
    control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
    compatible = 10.2.0.1.0
    undo_management = AUTO
    undo_tablespace = UNDO
    remote_login_passwordfile= EXCLUSIVE
    dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
    shared_servers = 4
    local_listener = (ADDRESS=(PROTOCOL=TCP)(HOST=DGAULT.hotsos.com)(PORT=1521))
    job_queue_processes = 4
    audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
    background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
    user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
    core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
    db_name = XE
    open_cursors = 300
    pga_aggregate_target = 94371840
    PMON started with pid=2, OS id=2948
    PSP0 started with pid=3, OS id=3468
    MMAN started with pid=4, OS id=3600
    DBW0 started with pid=5, OS id=3148
    LGWR started with pid=6, OS id=4028
    CKPT started with pid=7, OS id=2588
    SMON started with pid=8, OS id=3868
    RECO started with pid=9, OS id=124
    CJQ0 started with pid=10, OS id=1892
    MMON started with pid=11, OS id=1732
    Wed Nov 09 10:53:08 2005
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=12, OS id=2344
    Wed Nov 09 10:53:08 2005
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    ============================
    alert_xe.log END
    ============================
    ============================
    xe_ora_3500.trc START (11:04 AM)
    ============================
    Dump file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_3500.trc
    Wed Nov 09 11:04:54 2005
    ORACLE V10.2.0.1.0 - Beta vsnsta=1
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Beta
    Windows XP Version V5.1 Service Pack 2
    CPU : 1 - type 586
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:598M/1023M, Ph+PgF:1863M/2459M, VA:1617M/2047M
    Instance name: xe
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Windows thread id: 3500, image: ORACLE.EXE (SHAD)
    opiino: Attach failed! error=-1 ifvp=0000000
    ============================
    xe_ora_3500.trc END
    ============================

  • Calendar not syncronized - log file empty

    I'm using DM 4.5 and I'm trying to synchronize the calendar of an 8310 (firmware 4.5) with Outlook 2003, but DM doesn't do it.
    The preferences are set up and I mapped the folders, but when I launch the sync task, DM synchronizes my contacts correctly yet fails to synchronize the calendar; looking at the log file, I don't find any trace of the activity.
    Can someone help me?
    Thanks,
    Enrico

    Any solution, please?

  • Problems with Log files

    Hello everybody,
    I am using the java.util.logging package to create log files.
    I have to generate log files and append to a given file based on a condition.
    When I write a message to a single log file, it gets written to all the log files.
    Code
    ==========================
    //To create error_0.log file
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    public class NetLogger {
        private String pattern = "./log/error_%g.log";
        private int limit = 1000000; // 1 MB
        private int numLogFiles = 300;
        private FileHandler fh = null;
        private Logger logger = null;

        public NetLogger() {
            try {
                fh = new FileHandler(pattern, limit, numLogFiles);
                fh.setFormatter(new SimpleFormatter());
                logger = Logger.getLogger("com.netenforcers");
                logger.setUseParentHandlers(false);
                logger.addHandler(fh);
                //logger.setLevel(Level.ALL);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

        public Logger getLogger() {
            return this.logger;
        }
    }
    =======
    // To create whoIs_0.log
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    public class WhoIsLogger {
        private String pattern = "./log/whoIs_%g.log";
        private int limit = 1000000; // 1 MB
        private int numLogFiles = 300;
        private FileHandler fh = null;
        private Logger logger = null;

        public WhoIsLogger() {
            try {
                fh = new FileHandler(pattern, limit, numLogFiles);
                fh.setFormatter(new SimpleFormatter());
                logger = Logger.getLogger("com.netenforcers");
                logger.setUseParentHandlers(false);
                logger.addHandler(fh);
                //logger.setLevel(Level.ALL);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

        public Logger getLogger() {
            return this.logger;
        }
    }
    ========
    I am calling these two loggers using:
    if (true) {
        NetLogger logger = new NetLogger();
        logger.getLogger().info("Hi to be written to error log");
    } else {
        WhoIsLogger whoislogger = new WhoIsLogger();
        whoislogger.getLogger().info("Hi to be written to whois log");
    }
    =========
    But both log files are updated with both messages.
    Please help me.
    Thanks,
    Raj
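    For what it's worth, a likely cause: both classes call Logger.getLogger("com.netenforcers"), so they share one Logger instance, and each constructor adds another FileHandler to it; every message then goes to every attached handler. Below is a minimal sketch of the usual fix, giving each wrapper its own logger name (the class and logger names here are illustrative, not from the original post):

    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    public class ScopedLogger {
        private final Logger logger;

        // Each instance asks for a distinct logger name, so its FileHandler
        // receives only the messages sent to that logger.
        public ScopedLogger(String loggerName, String filePattern) throws Exception {
            FileHandler fh = new FileHandler(filePattern, 1000000, 300);
            fh.setFormatter(new SimpleFormatter());
            logger = Logger.getLogger(loggerName);
            logger.setUseParentHandlers(false);
            logger.addHandler(fh);
        }

        public Logger getLogger() {
            return this.logger;
        }
    }

    Usage would then be new ScopedLogger("com.netenforcers.error", "./log/error_%g.log") for the error log and new ScopedLogger("com.netenforcers.whois", "./log/whoIs_%g.log") for the whois log.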


  • Var/log/mail.log file empty

    My /var/log/mail.log isn't logging anything; the file seems to be empty since 4th March 2010, 3:15 AM.
    I have tried the troubleshooting steps below, but no luck:
    1. Stopped and restarted mail service
    2. Repaired disk permissions through disk utility application
    3. Repaired permissions through terminal diskutil
    4. Restarted daemons as suggested in this forum http://discussions.info.apple.com/thread.jspa?threadID=2088823&tstart=60
    5. Changed permissions as suggested on this forum http://forums.macosxhints.com/archive/index.php/t-13985.html
    Any help, please!

    Change the archive log to 3 days. Make sure all three log levels are set to Information. Restart mail and see if anything appears in the logs.
    Also, how are you viewing the logs - using SA or Console?
    Thanks,
    Henry

  • Log file utilization problem

    I've encountered a problem with log file utilization during a somewhat long transaction, during which some data is inserted into a StoredMap.
    I've set the minUtilization property to 75%. During insertion, things seem to go smoothly, but at one point log files are created WAY more rapidly than the amount of data would call for. The test involves inserting 750K entries for a total of 9 MB, yet the total size of the log files is 359 MB. Using DbSpace shows that the first few log files use approx. 65% of their total space, but most use only 2%.
    I understand that during a transaction the Cleaner may not clean the log files involved. What I don't understand is why most of the log files are only 2% used:
    File Size (KB) % Used
    00000000 9763 56
    00000001 9764 68
    00000002 9765 68
    00000003 9765 69
    00000004 9765 69
    00000005 9765 69
    00000006 9765 68
    00000007 9765 70
    00000008 9764 68
    00000009 9765 61
    0000000a 9763 61
    0000000b 9764 25
    0000000c 9763 2
    0000000d 9763 1
    0000000e 9763 2
    0000000f 9763 1
    00000010 9764 2
    00000011 9764 1
    00000012 9764 2
    00000013 9764 1
    00000014 9764 2
    00000015 9763 1
    00000016 9763 2
    00000017 9763 1
    00000018 9763 2
    00000019 9763 1
    0000001a 9765 2
    0000001b 9765 1
    0000001c 9765 2
    0000001d 9763 1
    0000001e 9765 2
    0000001f 9765 1
    00000020 9764 2
    00000021 9765 1
    00000022 9765 2
    00000023 9765 1
    00000024 9763 2
    00000025 7028 2
    TOTALS 368319 21
    I've created a test class that reproduces the problem. It might be possible to simplify it further, but I haven't had time to work on it too much.
    Executing this test with 500K values does not reproduce the problem. Can someone please help me shed some light on this issue?
    I'm using 3.2.13 and the following properties file:
    je.env.isTransactional=true
    je.env.isLocking=true
    je.env.isReadOnly=false
    je.env.recovery=true
    je.log.fileMax=10000000
    je.cleaner.minUtilization=75
    je.cleaner.lookAheadCacheSize=262144
    je.cleaner.readSize=1048576
    je.maxMemory=104857600
    Test Class
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.util.Properties;
    import com.sleepycat.bind.EntityBinding;
    import com.sleepycat.bind.EntryBinding;
    import com.sleepycat.bind.tuple.StringBinding;
    import com.sleepycat.bind.tuple.TupleBinding;
    import com.sleepycat.collections.CurrentTransaction;
    import com.sleepycat.collections.StoredMap;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    public class LogFileTest3 {

        private long totalSize = 0;
        private Environment env;
        private Database myDb;
        private StoredMap storedMap_ = null;

        public LogFileTest3() throws DatabaseException, FileNotFoundException, IOException {
            Properties props = new Properties();
            props.load(new FileInputStream("test3.properties"));
            EnvironmentConfig envConfig = new EnvironmentConfig(props);
            envConfig.setAllowCreate(true);
            File envDir = new File("test3");
            if (envDir.exists() == false) {
                envDir.mkdir();
            }
            env = new Environment(envDir, envConfig);
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            dbConfig.setSortedDuplicates(false);
            myDb = env.openDatabase(null, "testing", dbConfig);
            EntryBinding keyBinding = TupleBinding.getPrimitiveBinding(String.class);
            EntityBinding valueBinding = new TestValueBinding();
            storedMap_ = new StoredMap(myDb, keyBinding, valueBinding, true);
        }

        public void cleanup() throws Exception {
            myDb.close();
            env.close();
        }

        private void insertValues(int count) throws DatabaseException {
            CurrentTransaction ct = CurrentTransaction.getInstance(this.env);
            try {
                ct.beginTransaction(null);
                int i = 0;
                while (i < count) {
                    TestValue tv = createTestValue(i++);
                    storedMap_.put(tv.key, tv);
                }
                System.out.println("Written " + i + " values for a total of " + totalSize + " bytes");
                ct.commitTransaction();
            } catch (Throwable t) {
                System.out.println("Exception " + t);
                t.printStackTrace();
                ct.abortTransaction();
            }
        }

        private TestValue createTestValue(int i) {
            TestValue t = new TestValue();
            t.key = "key_" + i;
            t.value = "value_" + i;
            return t;
        }

        public static void main(String[] args) throws Exception {
            LogFileTest3 test = new LogFileTest3();
            if (args[0].equalsIgnoreCase("clean")) {
                while (test.env.cleanLog() != 0);
            } else {
                test.insertValues(Integer.parseInt(args[0]));
            }
            test.cleanup();
        }

        static private class TestValue {
            String key = null;
            String value = null;
        }

        private class TestValueBinding implements EntityBinding {
            public Object entryToObject(DatabaseEntry key, DatabaseEntry entry) {
                TestValue t = new TestValue();
                t.key = StringBinding.entryToString(key);
                t.value = StringBinding.entryToString(entry); // read the value from the data entry
                return t;
            }

            public void objectToData(Object o, DatabaseEntry entry) {
                TestValue t = (TestValue) o;
                StringBinding.stringToEntry(t.value, entry);
                totalSize += entry.getSize();
            }

            public void objectToKey(Object o, DatabaseEntry entry) {
                TestValue t = (TestValue) o;
                StringBinding.stringToEntry(t.key, entry);
            }
        }
    }

    "Yup, that solves the issue. By doubling the je.maxMemory property, I've made the problem disappear."
    Good!
    "How large is the lock on a 64-bit architecture?"
    Here's the complete picture for read and write locks. Read locks are taken on get() calls without LockMode.RMW; write locks are taken on get() calls with RMW and on all put() and delete() calls.
    Arch  Read Lock  Write Lock
    32b    96B       128B
    64b   176B       216B
    "I'm setting the je.maxMemory property because I'm dealing with many small JE environments in a single VM. I don't want each opened environment to use 90% of the JVM RAM..."
    OK, I understand.
    "I've noticed that the je.maxMemory property is mutable at runtime. Would setting a large value before long transactions (and resetting it after) be a feasible solution to my problem? Do you see any potential issue with doing this?"
    We made the cache size mutable for just this sort of use case, so this is probably worth trying. Of course, to avoid OutOfMemoryError you'll have to reduce the cache size of other environments if you don't have enough unused space in the heap.
    "Is there a way for me to have JE lock multiple records at the same time? I mean, have it create a lock for an insert batch instead of for every item in the batch..."
    Not currently. But speaking of possible future changes, there are two things that may be of interest to you:
    1) For large transaction support we have discussed the idea of providing a new API that locks an entire Database. While a Database is locked by a single transaction, no individual record locks would be needed. However, all other transactions would be blocked from using the Database. More specifically, a Database read lock would block other transactions from writing, and a Database write lock would block all access by other transactions. This is the equivalent of "table locking" in relational DBs. This is not currently high on our priority list, but we are gathering input on this issue. We are interested in whether or not a whole-Database lock would work for you -- would it?
    2) We see more and more users like yourself who open multiple environments in a single JVM. Although the cache size is mutable, this puts the burden of efficient memory management onto the application. To solve this problem, we intend to add the option of a shared JE cache for all environments in a JVM process. The entire cache would be managed by an LRU algorithm, so if one environment needs more memory than another, the cache dynamically adjusts. This is high on our priority list, although per Oracle policy I can't say anything about when it will be available.
    "Besides increasing je.maxMemory, do you see any other solution to my problem?"
    Use smaller transactions. ;-) Seriously, if you have not already ruled this out, you may want to consider whether you really need an atomic transaction. We also support non-transactional access and even a non-locking mode for off-line bulk loads.
    "Thanks a bunch for your help!"
    You're welcome!
    Mark
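    As an aside, here is a minimal sketch of the run-time cache adjustment discussed above, via the JE mutable environment config (the programmatic counterpart of je.maxMemory); the helper and sizes are illustrative, not part of the original thread:

    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentMutableConfig;

    public class CacheResize {
        // Temporarily enlarge the JE cache around a long transaction,
        // then restore the previous size in a finally block.
        static void withLargeCache(Environment env, long bytes, Runnable work)
                throws DatabaseException {
            EnvironmentMutableConfig cfg = env.getMutableConfig();
            long original = cfg.getCacheSize();
            cfg.setCacheSize(bytes);
            env.setMutableConfig(cfg);
            try {
                work.run();
            } finally {
                cfg.setCacheSize(original);
                env.setMutableConfig(cfg);
            }
        }
    }

    As Mark notes, lowering other environments' caches first may be necessary to avoid OutOfMemoryError when several environments share one heap.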

  • Help! SQL server database log file increasing enormously

    I have 5 SSIS jobs running in the SQL Server Agent, and some of them pull transactional data into our database at a 4-hour interval. The problem is that the log file of our database is growing rapidly; it eats up 160 GB of disk space in a day. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE, yet even so the log data consumes more than 160 GB in a day. Because the disk fills up, the scheduled jobs often fail. Temporarily, I am using the DETACH approach to clean up the log.
    FYI: all the SSIS packages in the jobs use transactions on some tasks, e.g. a Sequence Container.
    I want a permanent solution that keeps the log file within a particular size limit, and as I said earlier, I don't want my log data for future point-in-time recovery, so there is no need to take log backups at all.
    And one more problem: in our database, the transactional table has 10 million records and some master tables have over 1000 records, but our MDF file size is now about 50 GB. I don't believe 10 million records should amount to 50 GB. What's the problem here?
    Help me with these issues. Thanks in advance.

    "And one more problem: in our database, the transactional table has 10 million records and some master tables have over 1000 records, but our MDF file size is now about 50 GB. I don't believe 10 million records should amount to 50 GB. What's the problem here? Help me with these issues."
    For the SSIS part of the question it would be better if you ask in the SSIS forum, although nothing is going to change about the logging behavior. You can add some space to the log file, and you should also batch your transactions, as already suggested.
    Regarding the memory question about SQL Server: once it utilizes memory, it is not going to release it unless the Windows OS faces memory pressure and SQLOS asks SQL Server to trim down its memory consumption. So if you have set max server memory to somewhere near 50 GB, SQL Server will eventually utilize that much memory. What you are seeing is totally normal. Remember, it is a costly task for SQL Server to release and reacquire memory, so it avoids that by caching as much as possible; it also caches more to avoid physical reads, which are costly.
    When the log file is getting full, what does the query below return?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    Can you manually introduce a CHECKPOINT in the ETL query? Try this; it might help you.
    Please mark this reply as answer if it solved your issue, or vote as helpful if it helped, so that other forum members can benefit from it.
    My Technet Wiki Article
    MVP

  • Exchange 2010 SP3, RU5 - Massive Transaction Log File Generation

    Hey All,
    I am trying to figure out why one of our databases is generating 30K log files a day! The other one is generating 20K log files a day. The database does not grow in size as the log files are generated; the problem is the log file generation itself.
    I've tried running through some of the various solutions out there and reviewed message tracking logs, RPC Client Access logs, and IIS logs - all of which show important info, but none of which actually provide the answers.
    I stopped the following services to see if that would affect the log file generation in any way, and it has not!
    MS Exchange Transport
    Mail Submission
    IIS (Site Stopped in IIS)
    Mailbox Assistants
    Content Indexing Service
    With the above services stopped, I still see dozens (or more) of log files generated in under 10 minutes. I also checked the mailbox size reports (top 10) and found increases in several users' mailboxes: an item count increase of about 300 for one user, and a size increase of about 150 MB for another (over the whole day).
    I am not sure what else to check here. Any ideas?
    Thanks,
    Robert

    Hmm - this sounds like a device is chewing up the logs.
    If you use Log Parser Studio, are there any standout devices in terms of the number of hits?
    And for ExMon, was that logged over a period of time? The default 60-second window normally misses a lot of stuff. Just curious!
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
    Blog:
    http://blogs.technet.com/rmilne 
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
    Rhoderick,
    Thanks for the response. When checking the logs, the highest number of hits were from the (source) load balancers, port 25 VIP. The problems I was experiencing were the following:
    1) I kept expecting the log file generation to drop to an acceptable rate of 10~20 MB per minute (max). We have a large environment and use the Exchange servers as the mail relays for the hated Nagios monitoring environment.
    2) We didn't have our enterprise monitoring system watching SMTP traffic; this is being resolved.
    3) I needed to look closer at the SMTP transport database counters, logs, and log files, and focus less on the database log generation; I did do some of that, but not enough.
    4) My troubleshooting kept getting thrown off because the monitoring notifications seemed to be sent out in batches (or something similar); stopping the transport service for 10~15 minutes several times seemed to finally stop the transaction logs from growing at a psychotic rate.
    5) I am re-running my data captures now that I have told the "Nagios Team" to quit killing the Exchange servers with their notifications (sometimes as many as 100+ of the same notifications for the same servers and issues). So far, at a quick glance, the log file generation seems to have dropped by about 30%.
    Question: What would be the best counters to review in order to "put it all together"? Also note: our server roles are split, MBX and CAS/HT.
    Robert

  • Apex/OHS issue:log files in opmn consume disk space

    Hi,
    I have installed OHS from the Companion CD (10g Release 1) and the new Apex 3.0.
    There seems to be a problem because the log files in opmn/logs are eating up my disk space. Apparently something is missing or not configured yet.
    The exact error message is:
    07/04/12 10:34:13 [4] Falta el factor de formato de la conexión local 0,127.0.0.1,6100
    (roughly: "the format factor for the local connection 0,127.0.0.1,6100 is missing")
    The contents of /etc/hosts:
    [oracle@caman bin]$ cat /etc/hosts
    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1 localhost.localdomain localhost
    172.17.1.8 caman.rioturbio.com.ve caman
    172.17.1.1 tepuy.rioturbio.com.ve tepuy
    OS info:
    [oracle@caman bin]$ cat /etc/redhat-release
    Red Hat Enterprise Linux AS release 4 (Nahant Update 2)
    Please help ...! Thanks in advance ....!

    Hi,
    I believe this is due to a conflict with something else using that port number (6100). One solution would be to edit the opmn config file and change the port from 6100 to something else (such as 6101).

  • Log file is not created using LOG4J

    Hi all,
    I want to use Log4j to log details about my portal application, and I would prefer Log4j over SAP Logging. I created the properties file in my portal application, which extends AbstractPortalComponent. This is my log4j.properties file, which I created under the dist\PORTAL-INF\classes folder. I can read the properties file; there is no problem with that, and I am able to reference the APIs available in Log4j. I am using Log4j-1[1].2.14.jar.
    log4j.rootLogger = INFO, R1
    log4j.appender.R1 = org.apache.log4j.RollingFileAppender
    log4j.appender.R1.File = LoggerForMyApplicationC.log
    log4j.appender.R1.MaxFileSize=100KB
    log4j.appender.R1.MaxBackupIndex=1
    log4j.appender.R1.layout=org.apache.log4j.PatternLayout
    log4j.appender.R1.layout.ConversionPattern=[%-5p] [%d] [%c] - [%m]%n
    But the problem is that the log file (LoggerForMyApplicationC.log) is not created under the \logs folder on the SAP J2EE server.
    Am I missing some other configuration? Please help me find the solution.
    Thanks,
    Malar

    Malar,
    Check for the file in the server folder of your server installation directory,
    i.e. <Drive where server installed>:\usr\sap\<name>\<instance>\j2ee\<server node>
    Regards
    Abu Bakar
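    One common culprit in this situation, for what it's worth, is that the relative file name in the properties (LoggerForMyApplicationC.log) is resolved against the server process's working directory rather than the \logs folder. A minimal sketch of the same rolling appender configured programmatically with an absolute path, using the Log4j 1.2 API (the path and logger name below are hypothetical):

    import java.io.IOException;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;
    import org.apache.log4j.RollingFileAppender;

    public class PortalLogSetup {
        // Attach a rolling file appender at an absolute path, so the log
        // location does not depend on the server's working directory.
        public static Logger createLogger() throws IOException {
            PatternLayout layout = new PatternLayout("[%-5p] [%d] [%c] - [%m]%n");
            RollingFileAppender appender = new RollingFileAppender(
                    layout, "/usr/sap/<SID>/logs/LoggerForMyApplicationC.log"); // hypothetical path
            appender.setMaxFileSize("100KB");
            appender.setMaxBackupIndex(1);
            Logger logger = Logger.getLogger("com.example.portal"); // hypothetical name
            logger.addAppender(appender);
            return logger;
        }
    }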

  • Mpd issue not able to open log file

    I've been trying to set up mpd on my netbook Arch install and have got the config and everything set up, but I keep getting this error:
    log: problem opening log file "/var/log/mpd/mpd.log" (config line 37) for writing
    My config line 37 is:
    log_file "/var/log/mpd/mpd.log"
    The mpd user owns /var/log/mpd, so I don't know what the issue is.
    Let me know if you need anything else from my system.

    Okay, I'm at a loss here; for whatever reason the music does not show up. I tried starting mpd as root and I get this output:
    [maxmarze@titan ~]$ sudo mpd
    listen: bind to '0.0.0.0:6600' failed: Address already in use (continuing anyway, because binding to '[::]:6600' succeeded)
    music directory is not a directory: "/home/maxmarze/music"
    output: No "audio_output" defined in config file
    output: Attempt to detect audio output device
    output: Attempting to detect a alsa audio device
    output: Successfully detected a alsa audio device
    My mpd config file is as such
    music_directory "/home/maxmarze/music/" # Your music dir.
    playlist_directory "/var/lib/mpd/playlists"
    db_file "/var/lib/mpd/mpd.db"
    log_file "/var/log/mpd/mpd.log"
    pid_file "/var/run/mpd/mpd.pid"
    state_file "/var/lib/mpd/mpdstate"
    user "mpd"
    # Binding to address and port causing problems in mpd-0.14.2 best to leave
    # commented.
    # bind_to_address "127.0.0.1"
    # port "6600"
    and my ncmpcpp config is
    mpd_host = "127.0.0.1"
    mpd_port = "6600"
    mpd_music_dir = "/home/maxmarze/music/"
    mpd_connection_timeout = "5"
    mpd_crossfade_time = "5"
    The music directory is owned by me and mpd, as I assumed it would be.

  • Steps to empty SAPDB (MaxDB) log file

    Hello All,
    I am on Red Hat Unix with NW 7.1 CE and SAPDB (MaxDB) as the back end. I am trying to log in, but my log file is full. I want to empty the log file, but I haven't done any data backup yet. Can anybody guide me on how to proceed with this problem?
    I have some idea of what to do, like the steps below:
    1. Take a data backup (but I want to skip this step if possible, since this is a QA system and we are not a production company).
    2. Take a log backup using the same method as the data backup but with the Log type (am I right, or is there something else?).
    3. It will automatically overwrite the log after log backups.
    Or should I use this alternative, which I found in Note 869267 - FAQ: SAP MaxDB LOG area:
    Can the log area be overwritten cyclically without having to make a log backup?
    Yes, the log area can be automatically overwritten without log backups. Use the DBM command
    util_execute SET LOG AUTO OVERWRITE ON
    to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
    Log backups are not possible after switching on automatic overwrite. Backup history is broken down and flagged by the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
    util_execute SET LOG AUTO OVERWRITE OFF
    and by creating a complete data backup in the ADMIN or ONLINE status.
    Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
    Any reply will be highly appreciated.
    Thanks
    Mani

    Hello Mani,
    1. Please review the document “Using SAP MaxDB X Server Behind a Firewall” in the MaxDB library:
    http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
    “To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database.”
    Is the database server behind a firewall? If yes, then the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
    Is "nq2host" the name of the database server? Can you ping the server "nq2host" from your machine?
    2. If the database server and your PC are on the local area network, you could start the x_server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
    See the document “Network Communication” at
    http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
    Thank you and best regards, Natalia Khlopina

  • Empty Log File - log settings will not save

    Description of Problem or Question:
    Cannot get logging to work in folder D:\Program Files\Business Objects\Dashboard and Analytics 12.0\server\log
    (empty log file is created)
    Product\Version\Service Pack\Fixpack (if applicable):
    BO Enterprise 12.0
    Relevant Environment Information (OS & version, java or .net & version, DB & version):
    Server: Windows Server 2003 Enterprise SP2.
    Database Oracle 10g
    Client : Vista
    Sporadic or Consistent (if applicable):
    Consistent
    What has already been tried (where have you searched for a solution to your question/problem):
    Searched forum, SMP
    Steps to Reproduce (if applicable):
    From InfoViewApp, logged in as Admin:
    Open -> Dashboard and Analytics Setup -> Parameters -> Trace.
    Check "Log to folder" and "SQL Queries", click Apply.
    Now navigate away and return to this page - "Log to folder" is unchecked. An empty log file is created.

    Send Apple feedback. They won't answer, but at least will know there is a problem. If enough people send feedback, it may get the problem solved sooner.
    Feedback
    Or you can use your Apple ID to register with this site and go the Apple BugReporter. Supposedly you will get an answer if you submit feedback.
    Feedback via Apple Developer
    Do a backup.
    Quit the application.
    Go to Finder and select your user/home folder. With that Finder window as the front window, either select Finder/View/Show View Options or press Command-J. When the View Options open, check 'Show Library Folder'. That should make your user library folder visible in your user/home folder. Select Library, then go to Preferences/com.apple.systempreferences.plist and move the .plist to your desktop.
    Restart, open the application and test. If it works okay, delete the plist from the desktop.
    If the application is the same, return the .plist to where you got it from, overwriting the newer one.
    Thanks to leonie for some information contained in this.

  • Log file cleaning problem

    Hi,
    I'm evaluating Berkeley DB Java Edition for my application, and I have the following code:
    import java.io.File;
    import java.nio.ByteBuffer;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.OperationStatus;

    public class JETest {
        private Environment env;
        private Database myDb;

        public JETest() throws DatabaseException {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(false);
            envConfig.setCachePercent(1);
            env = new Environment(new File("/tmp/test2"), envConfig);
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setDeferredWrite(true);
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(false);
            myDb = env.openDatabase(null, "testing", dbConfig);
        }

        public void cleanup() throws Exception {
            myDb.close();
            env.close();
        }

        private void insertDelete() throws DatabaseException {
            int keyGen = Integer.MIN_VALUE;
            byte[] key = new byte[4];
            byte[] data = new byte[1024];
            ByteBuffer buff = ByteBuffer.wrap(key);
            for (int i = 0; i < 20000; i++) {
                buff.rewind();
                buff.putInt(keyGen++);
                myDb.put(null, new DatabaseEntry(key), new DatabaseEntry(data));
            }
            int count = 0;
            System.out.println("done inserting");
            keyGen = Integer.MIN_VALUE;
            OperationStatus status;
            for (int i = 0; i < 20000; i++) {
                buff.rewind();
                buff.putInt(keyGen++);
                count++;
                status = myDb.delete(null, new DatabaseEntry(key));
                if (status != OperationStatus.SUCCESS) {
                    System.out.println("Delete failed.");
                }
            }
            System.out.println("called delete " + count + " times");
            env.sync();
        }

        public static void main(String[] args) throws Exception {
            JETest test = new JETest();
            test.insertDelete();
            test.cleanup();
        }
    }
    After running the above, I expected the log file utilization to be 0%, because I delete each and every record in the database. The status returned by the delete() method was OperationStatus.SUCCESS for all invocations.
    I ran the DbSpace utility, and this is what I found:
    $ java -cp je-3.2.13/lib/je-3.2.13.jar com.sleepycat.je.util.DbSpace -h /tmp/test2 -d
    File Size (KB) % Used
    00000000 9765 99
    00000001 3236 99
    TOTALS 13001 99
    Obviously, the cleaner thread won't clean log files that are 99% used.
    I did expect the logs to be completely empty, though. What is going on here?
    Thanks,
    Lior

    Lior,
    With the default heap size (64m) I was able to reproduce the problem you're seeing. I think I understand what's happening.
    First see this note in the javadoc for setDeferredWrite:
    http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/je/DatabaseConfig.html#setDeferredWrite(boolean)
    Note that although deferred write databases can page to disk if the cache is not large enough to hold the databases, they are much more efficient if the database remains in memory. See the JE FAQ on the Oracle Technology Network site for information on how to estimate the cache size needed by a given database. In the current implementation, a deferred write database which is synced or pages to the disk may also add additional log cleaner overhead to the environment. See the je.deferredWrite.temp property in <JEHOME>/example.properties for tuning information on how to ameliorate this cost.
    The statement above about "additional log cleaner overhead" is not quite accurate. Specifically, we do not currently keep track of obsolete information about DW (Deferred Write) databases. We are considering improvements in this area.
    Your test has brought out this issue to an extreme degree because of the very small cache size, and the fact that you delete all records. Most usage of DW that we see involves a bulk data load (no deletes), or deletes performed in memory so the record never goes to disk at all. With the tiny cache in your test, the records do go to disk before they are deleted.
    You can avoid this situation in a couple of different ways:
    1) Use a larger cache. If you insert and delete before calling sync, the inserted records will never be written to disk.
    2) If this is a temporary database (no durability is required), then you can set the je.deferredWrite.temp configuration parameter mentioned above. Setting this to true enables accurate utilization tracking for a DW database, for the case where durability is not required.
    # If true, assume that deferred write database will never be
    # used after an environment is closed. This permits a more efficient
    # form of logging of deferred write objects that overflow to disk
    # through cache eviction or Database.sync() and reduces log cleaner
    # overhead.
    # je.deferredWrite.temp=false
    # (mutable at run time: false)
    Will either of these options work for your application?
    Mark
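    To make option 2 concrete, here is a minimal sketch of enabling the je.deferredWrite.temp parameter when the environment is opened, under the assumption that the deferred-write database is disposable (no durability required); the class and directory are illustrative, not from the original thread:

    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class TempDwEnv {
        // Open an environment with je.deferredWrite.temp=true so obsolete
        // data in deferred-write databases is tracked accurately and the
        // cleaner can reclaim the log files. Only valid if the DW database
        // never needs to be used after the environment is closed.
        public static Environment open(File dir) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(false);
            envConfig.setConfigParam("je.deferredWrite.temp", "true");
            return new Environment(dir, envConfig);
        }
    }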
