Standby Database fails to read dictionary from redo log

Hi,
I am attempting to create a logical standby database on the same machine as the primary database. I have executed the steps outlined in the Oracle documentation several times, but I end up with the same error. Details of the setup and the error are provided below. Please help. Thanks.
==========
OS: Red Hat 8 (2.4.18-14)
RDBMS: Oracle EE Server 9.2.0.3.0
primary db init details:
*.log_archive_dest_1='LOCATION=/usr3/oracle/admin/lbsp/archive/ MANDATORY'
*.log_archive_dest_2='SERVICE=STDBY'
standby db init details:
log_archive_dest_1='LOCATION=/usr3/oracle/admin/stdby/archive/'
standby_archive_dest='/usr3/oracle/admin/lbsp/archive_pdb/'
Standby alert log file (tail)
LOGSTDBY event: ORA-01332: internal Logminer Dictionary error
Sun Jul 13 11:37:20 2003
Errors in file /usr3/oracle/admin/stdby/bdump/stdby_lsp0_13691.trc:
ORA-01332: internal Logminer Dictionary error
LSP process trace file:
Instance name: stdby
Redo thread mounted by this instance: 1
Oracle process number: 18
Unix process pid: 13691, image: oracle@prabhu (LSP0)
*** 2003-07-13 11:37:19.972
*** SESSION ID:(12.165) 2003-07-13 11:37:19.970
<krvrd.c:krvrdfdm>: DDL or Dict mine error exit. 384
<krvrd.c:krvrdids>: Failed to mine dictionary. flgs 180
knahcapplymain: encountered error=1332
*** 2003-07-13 11:37:20.217
ksedmp: internal or fatal error
. (memory dump)
KNACDMP: Unassigned txns = { }
KNACDMP: *******************************************************
error 1332 detected in background process
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01332: internal Logminer Dictionary error
Another trace file created by the error is: stdby_p001_13695.trc
Instance name: stdby
Redo thread mounted by this instance: 1
Oracle process number: 20
Unix process pid: 13695, image: oracle@prabhu (P001)
*** 2003-07-13 11:37:19.961
*** SESSION ID:(22.8) 2003-07-13 11:37:19.908
krvxmrs: Leaving by exception: 604
ORA-00604: error occurred at recursive SQL level 1
ORA-01031: insufficient privileges
ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 68
ORA-06512: at line 1
There are no errors anywhere during the creation, mounting, or opening of the standby database. After the initial log registration, any log switch on the primary is communicated to the standby and visible in DBA_LOGSTDBY_LOG. Also, archived logs from the primary are successfully copied by the system to the directory pointed to by the standby db's standby_archive_dest parameter.
I noticed that every time I issue the "ALTER DATABASE START LOGICAL STANDBY APPLY" command, the procedures and packages related to LogMiner become invalid. I recompile them, and after the next "APPLY" they become invalid again.
Invalid object list:
OBJECT_TYPE OBJECT_NAME
VIEW DBA_LOGSTDBY_PROGRESS
PACKAGE BODY DBMS_INTERNAL_LOGSTDBY
PACKAGE BODY DBMS_STREAMS_ADM_UTL
VIEW LOGMNR_DICT
PACKAGE BODY LOGMNR_DICT_CACHE
PROCEDURE LOGMNR_GTLO3
PROCEDURE LOGMNR_KRVRDA_TEST_APPLY
Can anybody point out what I am doing wrong? Thanks for the help.
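For anyone comparing notes, here is a minimal sketch of the dictionary-related steps worth re-verifying (run as SYSDBA; utlrp.sql is Oracle's standard recompile script; this is a checklist, not a guaranteed fix for ORA-01332):
On the primary, rebuild the LogMiner dictionary in the redo stream:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
On the standby, stop apply, recompile, and re-check for invalid objects:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> @?/rdbms/admin/utlrp.sql
SQL> SELECT object_type, object_name FROM dba_objects WHERE status = 'INVALID';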

ORA-15001: diskgroup "ORAREDO3" does not exist or is not mounted
Have you mentioned the parameter LOG_FILE_NAME_CONVERT in the standby when the online redo log locations are different?
Post from the standby:
SQL> select name, state From v$asm_diskgroup;
FAL[client, MRP0]: Error 1031 connecting to MKS01P_PRD for fetching gap sequence
ORA-01031: insufficient privileges
Post from the primary & standby:
SQL> select * from v$pwfile_users;
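For reference, a minimal sketch of the two checks suggested above (the convert paths are placeholders; adjust them to your layout):
-- on the standby, when online redo log locations differ from the primary:
SQL> ALTER SYSTEM SET log_file_name_convert='/u01/oraredo/','/u02/oraredo/' SCOPE=SPFILE;
-- on both sides, the redo-shipping user must appear with SYSDBA = TRUE:
SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users;
If SYSDBA is missing on either side, recreate the password file with orapwd and use the same password on the primary and the standby.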
If OTN failed 100% to help you, then why did you post another question?
First close all your old answered threads, and then continue your updates in your existing thread.

Similar Messages

  • Physical standby database fail-over

    Hi,
    I am working on Oracle 10.2.0.3 on Solaris SPARC 64-bit.
I have a Data Guard configuration with a single physical standby database that uses real-time apply. We had a major application upgrade yesterday, and before the start of the upgrade we cancelled media recovery and deferred log_archive_dest_n so that it wouldn't ship archive logs to the standby site. We left the Data Guard configuration in this mode in case of a rollback.
    Primary:
    alter system set log_archive_dest_state_2='DEFER';
    alter system switch logfile;
    Standby:
alter database recover managed standby database cancel;
Due to problems induced by the application upgrade, we had to fail over to the physical standby, which had not been in sync with the primary since yesterday. I used the following method to fail over, since I did not want to apply any redo from yesterday.
    Standby:
    alter database activate physical standby database;
    alter database open;
    shutdown immediate;
startup
So, after this step, the database was a standalone database with no standby databases yet (it still has the log_archive_config and log_archive_dest_n parameters set, but I have DEFERred the log_archive_dest_n pointing to the old primary). I have even changed the archive log deletion policy to NONE:
RMAN> configure archivelog deletion policy to none;
After the fail-over was completed, the log sequence started from sequence 1. We cleared the FRA to make space for the new archive logs and started a FULL database backup (backup incremental level 0 database plus archivelog delete input). The backup succeeded, but we got alerts in the backup log that RMAN could not delete the archive logs.
RMAN-08137: WARNING: archive log not deleted as it is still needed
My questions here are:
1) Even though I have deferred the log_archive_dest_n parameters, why is RMAN not able to delete the archive logs after the backup when there is no standby database for this failed-over database?
2) Are all the old backups marked unusable after a fail-over is performed?
FYI... flashback database was not used in this case as it did not serve our purpose.
    Any information or documentation links would be greatly appreciated.
    Thanks,
    Harris.

    Thanks for the reply.
The FINISH FORCE works in some cases, but if there is an archive gap (though it didn't report one in our case) it might not work sometimes (DOCID: 846087.1). So we followed the switch-over & fail-over best practices, which mention this "ACTIVATE PHYSICAL STANDBY" approach for a fail-over when you intend not to apply any archive logs. The process we followed is the right one.
    Anyhow, we got the issue resolved. Below is the resolution path.
1) Even though you DEFER the LOG_ARCHIVE_DEST_STATE_N parameters on the primary, there are situations in a Data Guard configuration where the primary database will not delete the archive logs due to SCN issues. This may or may not arise in every fail-over scenario. If it does, then do the following checks.
Follow DOCID: 803635.1, which describes a PL/SQL procedure to check for problematic SCNs in a Data Guard configuration even when the physical standby databases are no longer available (i.e., the Data Guard parameters are still set: log_archive_config and log_archive_dest_n='SERVICE=...', even though the corresponding LOG_ARCHIVE_DEST_STATE_N parameters are DEFERRED).
If this procedure returns any rows, then the primary database cannot delete the archive logs because it still thinks there is a standby database and keeps the archive logs because of the SCN conflict.
So the best thing to do is to remove the DG-related parameters from the spfile (log_archive_config and the log_archive_dest_n parameters), as sketched below.
After I made these changes, I ran a test backup using "backup archivelog all delete input", and the archive logs were deleted after the backup without any issues.
    Thanks,
    Harris.
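For anyone hitting the same thing, removing the DG-related parameters looks roughly like this (a sketch; dest_2 is just the example destination number, and the resets take effect at the next restart):
SQL> ALTER SYSTEM RESET log_archive_config SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM RESET log_archive_dest_2 SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM RESET log_archive_dest_state_2 SCOPE=SPFILE SID='*';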

  • GoldenGate Extract Process will not read from redo log with manual help

    Here is my issue.
I have GoldenGate replication successfully set up one-way from one source to many targets. There is one source extract on the DB and many pumps that push the trail file data to the targets. Replication does work, but only after manually helping the source extract process start up.
    If I execute the command:
    GGSCI> alter extract <source extract name> begin now
    GGSCI> view report <source extract name>
The extract starts and reads the source trail file but will not process data; I continually see in the ggserr.log file "OGG-01515: Positioning to begin time MMM DD, YYYY, HH:MM:SS". The date and time are irrelevant for this problem.
When I see this, I SQL*Plus into the database and look in v$log for the current log and sequence #.
    I return to GGSCI and issue the following command:
    GGSCI> alter extract <source extract name> thread 1 extseqno <sequence # from v$log query>
    GGSCI> start <source extract name>
It then works as expected. Why is this so? I thought the alter extract <source extract name> begin now would produce the same result.
    We do use ASM but like I said when I issue the:
    GGSCI> alter extract <source extract name> thread 1 extseqno <sequence # from v$log query>
    It works like it should.
    Very weird.
    - Jason
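For reference, the v$log lookup described above is simply (a minimal sketch):
SQL> SELECT thread#, sequence#, status FROM v$log WHERE status = 'CURRENT';
The SEQUENCE# it returns is what gets passed to alter extract <source extract name> thread 1 extseqno <sequence#>.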

Yes, supplemental logging is enabled on both the source and the targets, but why would supplemental logging on the targets have any effect on why the source extract can't read from the source redo log?
    This is not a RAC database, rather single-instance with one thread. Also, we are using DBLOGREADER functionality as it is an 11.2.0.3 database.
    My issue is simply, when I start the source extract from being down, meaning it isn't running, I issue this command:
    alter <source extract> begin now
    start <source extract>
    view report <source extract>
OGG-01515 Positioning to begin time <today's date and time>, i.e. Mar 4, 2013, 3:26:39 PM. (This is repeated over and over.)
    If I perform a
    info <source extract> detail---> I see the following:
Log Read Checkpoint Oracle Redo Logs 2013-03-04 15:26:39 Thread 1, Seqno 0, RBA 0 (why is it showing 0? Because it can't read the redo. WHY NOT?)
    Extract Source BEGIN END
    Not Available <today's date> <today's date> (repeat....)
However, if I retrieve the redo log sequence number and I issue:
    alter spe thread 1 extseqno (redo log sequence #)
    start spe.
    Then it works okay. I have to manually tell it what redo log to begin reading from. Why?
    - Jason

  • Failed to read data from Lock tables

    Hi,
When I try to update data using T-code J1I8, I get the message "Failed to read data from Lock tables", but my database tables are not locked. Can anyone please help me out?
    Regards,
    Anaveer

Hi
Probably some other transaction is modifying the database table you are using, and it may have locked the table using a FOR UPDATE statement.
You can remove the locks using the DEQUEUE_ALL function module.
Thanks
Sandeep

  • TT16060: Failed to read data from the network. select() timed out

Hi!
I am working on an active standby pair. I created:
    [activedsn]
    Driver=/d01/oracle/tt70/TimesTen/tt70/lib/libtten.so
    DataStore=/d01/oracle/tt70/TimesTen/tt70/info/activedsn
    DatabaseCharacterSet=WE8MSWIN1252
    PermSize=10
    [standbydsn]
    Driver=/d01/oracle/tt70/TimesTen/tt70/lib/libtten.so
    DataStore=/d01/oracle/tt70/TimesTen/tt70/info/standbydsn
    DatabaseCharacterSet=WE8MSWIN1252
    PermSize=10
    [sub3]
    Driver=/d01/oracle/tt70/TimesTen/tt70/lib/libtten.so
    DataStore=/d01/oracle/tt70/TimesTen/tt70/info/sub3
    DatabaseCharacterSet=WE8MSWIN1252
    PermSize=10
Replication schemes (all are on the same host):
On the activedsn data store:
    Command> create table readtab(a number not null primary key,b varchar2(31));
    Command> insert into readtab values(101,'aaaa');
    1 row inserted.
    Command> commit;
    command>create active standby pair activedsn on "tap2.test3.com",standbydsn on "tap2.test3.com" return receipt subscriber sub3 on "tap2.test3.com";
On the standbydsn data store:
    command>ttrepadmin -duplicate -from activedsn -host "tap2.test3.com" -uid adm -pwd adm "dsn=standbydsn";
On the sub3 data store:
    command>ttrepadmin -duplicate -from standbydsn -host "tap2.test3.com" -uid adm -pwd adm "dsn=sub3";
Now they are working fine: when I insert something on activedsn, it is replicated to standbydsn, and from standbydsn to sub3.
Problem:
When I test "Recovering from a failure of the standby master data store",
I ran ttDestroy on standbydsn, and on activedsn I executed:
command>call ttrepstatesave('failed','standbydsn','tap2.test3.com');
After that, all updates from activedsn were replicated to sub3. In the meanwhile, I again duplicated standbydsn from activedsn:
command>ttrepadmin -duplicate -from activedsn -host "tap2.test3.com" -uid adm -pwd adm "dsn=standbydsn";
Now what happens is that updates from activedsn are replicated to standbydsn, but no updates are replicated to sub3. It is giving this error:
    15:24:29.29 Warn: REP: 7077: SUB3:receiver.c(1931): TT16060: Failed to read data from the network. select() timed out
    15:29:36.09 Warn: REP: 7008: STANDBYDSN:receiver.c(1931): TT16060: Failed to read data from the network. TimesTen replication agent is stopping
Replication agents for all of them are running.
Please help.
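As a hedged aside, the replication state of each store can be checked from ttIsql before restarting the agents (assuming the ttRepStateGet built-in procedure available in this release):
Command> call ttRepStateGet;
If the re-duplicated standbydsn does not report the expected state, the pair's roles may need to be re-established before the subscriber receives updates again.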

    [timesten@tap2 bin]$ ttrepadmin -showconfig activedsn
    Self host "TAP2.TEST3.COM", port auto, name "ACTIVEDSN", LSN 0/916688, timeout 120, threshold 0
    List of subscribers
    Peer name Host name Port State Proto
    SUB3 TAP2.TEST3.COM Auto Start 24
    Last Msg Sent Last Msg Recv Latency TPS RecordsPS
    00:00:03 - -1.00 -1 -1
    Peer name Host name Port State Proto
    STANDBYDSN TAP2.TEST3.COM Auto Start 24
    Last Msg Sent Last Msg Recv Latency TPS RecordsPS
    00:00:03 00:00:04 -1.00 -1 -1
    List of objects and subscriptions
    Table details
    Table : ADM.READTAB Timestamp updates : -
    Master Name Subscriber name
    STANDBYDSN SUB3
    STANDBYDSN ACTIVEDSN
    Table details
    Table : ADM.READTAB Timestamp updates : -
    Master Name Subscriber name
    ACTIVEDSN SUB3
    ACTIVEDSN STANDBYDSN
    Datastore details
    Master Name Subscriber name
    STANDBYDSN SUB3
    STANDBYDSN ACTIVEDSN
    Datastore details
    Master Name Subscriber name
    ACTIVEDSN SUB3
    ACTIVEDSN STANDBYDSN
    [timesten@tap2 bin]$ ttrepadmin -showstatus activedsn
    Replication Agent Status as of: 2009-10-13 20:42:02
    DSN : activedsn
    Process ID : 19000 (Started)
    Replication Agent Policy : manual
    Host : TAP2.TEST3.COM
    RepListener Port : 58698 (AUTO)
    Last write LSN : 0.973840
    Last LSN forced to disk : 0.973840
    Replication hold LSN : 0.968456
    Replication Peers:
    Name : SUB3
    Host : TAP2.TEST3.COM
    Port : 58371 (AUTO) (Connected)
    Replication State : STARTED
    Communication Protocol : 24
    Name : STANDBYDSN
    Host : TAP2.TEST3.COM
    Port : 59000 (AUTO) (Connected)
    Replication State : STARTED
    Communication Protocol : 24
    TRANSMITTER thread(s):
    For : SUB3
    Start/Restart count : 6
    Send LSN : 0.971432
    Transactions sent : 2
    Total packets sent : 158
    Tick packets sent : 112
    MIN sent packet size : 48
    MAX sent packet size : 568
    AVG sent packet size : 59
    Last packet sent at : 20:42:00
    Total Packets received: 158
    MIN rcvd packet size : 48
    MAX rcvd packet size : 96
    AVG rcvd packet size : 64
    Last packet rcvd'd at : 20:42:00
    TRANSMITTER thread(s):
    For : STANDBYDSN
    Start/Restart count : 4
    Send LSN : 0.971432
    Transactions sent : 2
    Total packets sent : 106
    Tick packets sent : 84
    MIN sent packet size : 48
    MAX sent packet size : 560
    AVG sent packet size : 63
    Last packet sent at : 20:42:00
    Total Packets received: 104
    MIN rcvd packet size : 48
    MAX rcvd packet size : 96
    AVG rcvd packet size : 66
    Last packet rcvd'd at : 20:42:00
    Most recent errors (max 5):
    TT16122 in transmitter.c (line 3313) at 20:28:28 on 10-13-2009
    TT16121 in transmitter.c (line 3048) at 20:28:28 on 10-13-2009
    TT16060 in transmitter.c (line 5028) at 20:33:59 on 10-13-2009
    TT16122 in transmitter.c (line 3313) at 20:33:59 on 10-13-2009
    TT16121 in transmitter.c (line 3048) at 20:33:59 on 10-13-2009
    RECEIVER thread(s):
    For : STANDBYDSN
    Start/Restart count : 1
    Transactions received : 0
    Total packets sent : 33
    Tick packets sent : 0
    MIN sent packet size : 48
    MAX sent packet size : 68
    AVG sent packet size : 67
    Last packet sent at : 20:42:00
    Total Packets received: 33
    MIN rcvd packet size : 48
    MAX rcvd packet size : 135
    AVG rcvd packet size : 51
    Last packet rcvd'd at : 20:42:00
    [timesten@tap2 bin]$
    [timesten@tap2 bin]$ ttrepadmin -showstatus standbydsn
    Replication Agent Status as of: 2009-10-13 20:42:35
    DSN : standbydsn
    Process ID : 19102 (Started)
    Replication Agent Policy : manual
    Host : TAP2.TEST3.COM
    RepListener Port : 59000 (AUTO)
    Last write LSN : 0.1007904
    Last LSN forced to disk : 0.1007904
    Replication hold LSN : 0.1002472
    Replication Peers:
    Name : SUB3
    Host : TAP2.TEST3.COM
    Port : 58371 (AUTO) (Connected)
    Replication State : STARTED
    Communication Protocol : 24
    Name : ACTIVEDSN
    Host : TAP2.TEST3.COM
    Port : 58698 (AUTO) (Connected)
    Replication State : STARTED
    Communication Protocol : 24
    TRANSMITTER thread(s):
    For : SUB3
    Start/Restart count : 2
    Send LSN : 0.1005496
    Transactions sent : 1
    Total packets sent : 48
    Tick packets sent : 33
    MIN sent packet size : 48
    MAX sent packet size : 568
    AVG sent packet size : 65
    Last packet sent at : 20:42:30
    Total Packets received: 48
    MIN rcvd packet size : 48
    MAX rcvd packet size : 96
    AVG rcvd packet size : 64
    Last packet rcvd'd at : 20:42:30
    Most recent errors (max 5):
    TT16229 in transmitter.c (line 6244) at 20:38:01 on 10-13-2009
    TRANSMITTER thread(s):
    For : ACTIVEDSN
    Start/Restart count : 1
    Send LSN : 0.1005496
    Transactions sent : 0
    Total packets sent : 36
    Tick packets sent : 34
    MIN sent packet size : 48
    MAX sent packet size : 135
    AVG sent packet size : 50
    Last packet sent at : 20:42:30
    Total Packets received: 36
    MIN rcvd packet size : 48
    MAX rcvd packet size : 68
    AVG rcvd packet size : 67
    Last packet rcvd'd at : 20:42:30
    RECEIVER thread(s):
    For : ACTIVEDSN
    Start/Restart count : 1
    Transactions received : 1
    Total packets sent : 42
    Tick packets sent : 0
    MIN sent packet size : 48
    MAX sent packet size : 96
    AVG sent packet size : 66
    Last packet sent at : 20:42:30
    Total Packets received: 47
    MIN rcvd packet size : 48
    MAX rcvd packet size : 190
    AVG rcvd packet size : 58
    Last packet rcvd'd at : 20:42:30
    [timesten@tap2 bin]$
    [timesten@tap2 bin]$ ttrepadmin -showstatus sub3
    Replication Agent Status as of: 2009-10-13 20:43:05
    DSN : sub3
    Process ID : 18898 (Started)
    Replication Agent Policy : manual
    Host : TAP2.TEST3.COM
    RepListener Port : 58371 (AUTO)
    Last write LSN : 0.707088
    Last LSN forced to disk : 0.707088
    Replication hold LSN : -1.-1
    Replication Peers:
    Name : ACTIVEDSN
    Host : TAP2.TEST3.COM
    Port : 0 (AUTO)
    Replication State : STARTED
    Communication Protocol : 24
    Name : STANDBYDSN
    Host : TAP2.TEST3.COM
    Port : 0 (AUTO)
    Replication State : STARTED
    Communication Protocol : 24
    RECEIVER thread(s):
    For : ACTIVEDSN
    Start/Restart count : 1
    Transactions received : 0
    Total packets sent : 46
    Tick packets sent : 0
    MIN sent packet size : 48
    MAX sent packet size : 96
    AVG sent packet size : 65
    Last packet sent at : 20:43:00
    Total Packets received: 46
    MIN rcvd packet size : 48
    MAX rcvd packet size : 134
    AVG rcvd packet size : 51
    Last packet rcvd'd at : 20:43:00
    RECEIVER thread(s):
    For : STANDBYDSN
    Start/Restart count : 1
    Transactions received : 1
    Total packets sent : 45
    Tick packets sent : 0
    MIN sent packet size : 48
    MAX sent packet size : 96
    AVG sent packet size : 66
    Last packet sent at : 20:43:00
    Total Packets received: 50
    MIN rcvd packet size : 48
    MAX rcvd packet size : 190
    AVG rcvd packet size : 56
    Last packet rcvd'd at : 20:43:00
    [timesten@tap2 bin]$
    [timesten@tap2 bin]$

  • Failed to read data from report file Reason: The table could not be found.

BO Enterprise XI R2: cannot publish Crystal Reports using the Publishing Wizard.
    Failed to read data from report file Reason: The table could not be found.
    Any ideas to get around this would really help out.
    Regards

The connection uses Views, and the ODBC System DSN is set up properly.
I tried importing from Business View Manager and with the Import Wizard; both methods failed to import the Business View and the underlying reports.
I figure I may have imported the Business View wrong. From Business View Manager I exported from my dev server, then imported to the prod server.
Apparently, exporting my Business View also includes the Data Connections that the Business Views depend upon.
Whichever folder you specify, it copies them there. Originally, all the Data Connections resided in the root folder. To return them to the original location, I deleted what I had exported, exported this time to the root folder, then deleted only the Business Views, foundation, and elements. Then I exported again to the folder I intended and deleted only the Data Connection.
Does that make any sense? So I then had to re-point the Business Views and all the dependent objects to the Data Connection that resides in the root folder.
I tested the connection, and it works fine. I properly updated my Crystal Reports to the Business View in production. I did a sample extract and it works as expected.
However, when I try to publish, either from Crystal or the Publish Wizard, I get the same error.
As a workaround I am thinking: after updating the Business View in the Crystal Reports, shall I re-map the fields, or re-export the Business Views again?
    Any help will be surely appreciated.

  • Transportable Tablespace Importing - Failed to read stderr from process

    Hello,
    I'm using the normal EM console packaged with Oracle (10gR2) and am trying to import a tablespace on RedHat rel4.
I follow the Transport Tablespaces link on the Maintenance tab, which offers a choice of export or import and a place to enter host credentials. I select import, enter the credentials, click continue, and am given this error: Failed to read stderr from process.
    I've tried logging into EM as SYSTEM, SYS, and a DBA account I use and have used both root and the oracle account with no luck.
    Searching hasn't gotten me any useful results here or in metalink so I'm stuck. What am I missing?
    Thanks

    Thank you thank you!
    The short version is I traced it back to an error with /bin/nmo which traced back to a failure to run root.sh when upgrading to 10.2.0.3.
    Thanks again!

  • Failed to read PID from file /run/nginx.pid: Invalid argument

    Hi,
    tried to get an nginx server running to set up an owncloud environment.
    When starting the nginx server
      $  systemctl start nginx.service
    I get the message: "Failed to read PID from file /run/nginx.pid: Invalid argument"
    [root@klaus /etc/nginx]# systemctl status nginx
    nginx.service - A high performance web server and a reverse proxy server
    Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled)
    Active: active (running) since Sat 2013-10-12 17:50:46 CEST; 8min ago
    Process: 1823 ExecStart=/usr/bin/nginx -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=0/SUCCESS)
    Process: 1821 ExecStartPre=/usr/bin/nginx -t -q -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=0/SUCCESS)
    Main PID: 1825 (nginx)
    CGroup: /system.slice/nginx.service
    ├─1825 nginx: master process /usr/bin/nginx -g pid /run/nginx.pid; daemon on; master_process on;
    └─1826 nginx: worker process
    Oct 12 17:50:46 klaus systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
    Oct 12 17:50:46 klaus systemd[1]: Started A high performance web server and a reverse proxy server.
    but /run/nginx.pid is readable:
    # cat /run/nginx.pid
    2058
    # ll /run/nginx.pid
    -rw-r--r-- 1 root root 4 Oct 12 17:39 /run/nginx.pid
It seems nginx is running though, but in my browser I only get a blank page.
    Any help appreciated.

Actually, it's the first time I have tried to set one up.
I basically followed this guide [1], adapting it to Arch.
That's where I got my nginx.conf from: taking the standard nginx.conf that comes with the install and replacing the server part.
After some searching I found the ownCloud manual [2] with an example nginx.conf. Comparing that with mine, I found some differences. Applying them worked out fine.
I now get the ownCloud page.
The systemd error still occurs, but I think it does not matter for running the server.
    Thanks for your help.
    [1] https://docs.google.com/file/d/0B0ZsTQd … ring&pli=1
    [2] http://doc.owncloud.org/server/5.0/admi … figuration
    EDIT:
    Just in case someone stumbles over this and tries to use the config:
    This one [3] really works
    [3] http://doc.owncloud.org/server/5.0/admi … thers.html

  • Failed to read data from report file, failed to read parameter object

Error message when doing a "Schedule List of Values":
    failed to read data from report file,
    c:\docume~1\user........
    failed to read parameter object
Earlier in the day it worked just fine; nothing had been changed.
I have admin rights on the Windows server and in the Central Management Console.
    Can someone please help with this issue.

    Hi Glenn
    Please let us know the following:
1. What is the exact version of Crystal Reports Designer?
2. What is the version of BusinessObjects Enterprise installed on the machine?
3. Are the reports based on Business Views, Deski, or Webi?
4. Were the reports migrated from an older version of Crystal Reports to the new version?
    Thanks
    Shraddha

  • DCA-40002: Failed to read WSDL from xsd

    Hi,
I am a newbie to JDeveloper; we are using Oracle JDeveloper 11g to consume a web service from PeopleSoft 9.0.
    We have a WSDL based on a PeopleSoft Component Interface. We can view the wsdl document through the PeopleSoftServiceListeningConnector URL and it appears ok.
    However, when we create a new Web Service Data Control in JDeveloper, it presents us with the following error:
    DCA-40002: The WSDL document is invalid due to the following reason: java.io.IOException: Failed to read WSDL from
    http://machine:port/PSIGW/PeopleSoftServiceListeningConnector/M999999.V1.xsd:
    HTTP connection error code is 500.
    Any advice would be appreciated.
    Thanks in advance.
    Regards,
    Dan

    Hi, after 1 1/2 years you can assume the thread to be closed. So please open your own thread and make sure the WSDL file is accessible from a browser URL.
    Frank

  • Failed to read data from report file : Reason: Crystal Reports: Print Engin

    Hi,
When we try to migrate the Crystal Reports from BO R2 to BO R3, some of the reports are failing with the error:
Failed to create a new Report. Reason: Failed to read data from report file C:\DOCUME~1\xxxxx.rpt. Reason: Crystal Reports: Print Engine Error
Would someone please help me fix the issue?
    Thanks and Regards,

A few quick checks to identify the cause:
1. Are you able to run the report in your R2 system?
2. In XI 3.1, check which account the SIA is running under.
3. Check that the account has sufficient rights on the file system and registry.
4. What's your web server?
Give proper rights and import again. Hopefully that will resolve it.

  • Failed to read data from report file

    Hello ,
When I am trying to publish a Crystal Report to BO Enterprise using the Publish Wizard, I get the error below:
"Failed to read data from report file C:\Users\prakash\Desktop\new_obj.rpt. Reason: File I/O error. File ~cidbc64f360135e0.rpt."
Can you please help me resolve it?
    Regards,
    KumarTP

Please post this to the BusinessObjects Enterprise Administration (BI Platform) forum.
    - Ludek

  • Archive from redo log

1. How can I read the DMLs in the redo log?
I need the statements from the redo log, like we get through LogMiner in the SQL_REDO column.
2. How do I create the archive log file from the redo logs with a command?
That is, I want to flush the redo log and create an archive log, so that I can use those archive logs in LogMiner.
    Thanks,
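For reference, a minimal LogMiner sketch covering both questions (the archived log path is a placeholder, and DICT_FROM_ONLINE_CATALOG requires the source database to be open):
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;                     -- forces a log switch and archives the current redo log
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/arch/1_123.arc');   -- hypothetical archived log name
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT sql_redo FROM v$logmnr_contents;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;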

Hemant K Chitale wrote:
You can find out which row it is and what the contents of that row currently are in your database by querying.
Once you get the values, you'd have to work backwards (or forwards) to figure out what values you need to set in the other database.
Actually, I also have other tables, so how can I pick out a particular column name and search for that? I could have other statements, such as:
update "TEST2" set "LEADCOUNTSTDFAILED" = '234' where "LEADCOUNTSTDFAILED" = '233' and ROWID = 'AAAQDRAAKAAAUSdAAs';
Then how can I get it?
Also, that view consists of thousands of DMLs.
    Thanks,

  • Read data from Archive logs

Does anyone have any recommendations on how to read data from archive logs? When I use LogMiner, I am getting only bind variables for DML operations, but I need the actual data from the archive logs.
    Any thoughts
    Thanks
    -Prasad

LogMiner output is as close to the command issued as possible. Depending on the Oracle version, you will be able to see DML, or DML and DDL. From 9i onward, Oracle is able to translate the DML against the data dictionary into the actual DDL command; in its first 8i release, only DML was visible.
~ Madrid
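One hedged addition: if LogMiner shows placeholders instead of resolved names and values, the usual causes are a missing dictionary specification (so object names and values do not resolve) and supplemental logging not having been enabled before the redo was generated. A minimal sketch:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;   -- affects only redo generated from this point on
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);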

  • Read data from a log file to flex textarea while deploying

    Hi,
I am new to Flex. I am able to read data from a log file placed on the C drive and write that data into a Flex TextArea, but I am unable to read the same file when it is deployed to the WebLogic server.
    Could anyone please tell me how to solve this problem.
    Thanks,
    Sri.

    Do you have it trying to read that same file on your C drive or is the file actually deployed somewhere on the server?
