Building a hidden database level tracing log

Hi,
I am currently using Oracle 8. Does anyone know how to create an automatic log file that keeps track of who executed which SQL query, and at what date and time?
I need this because I would like some record that tells me what each user did to the database, and which terminal and time they used.
Thanks
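
Oracle's standard AUDIT facility records exactly this kind of information; a minimal sketch, assuming AUDIT_TRAIL = DB is set in init.ora and the instance has been restarted (the statement options and view are standard, but verify against your 8.x docs):
-- audit the common DML statements for all users, one record per statement:
AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE, DELETE TABLE BY ACCESS;
-- who ran what, from which terminal, and when:
SELECT username, terminal, timestamp, action_name FROM dba_audit_trail;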

Oops, looks like the formatting did not show up correctly ... It should look like this:
Aérien // The base level
          Planeur
          Parachute
          Hélico
          Fusée
          ULM
          avion // Another level
                    militaire
                    tourisme
                    civil
And:
Aérien // The base level
          Planeur
          Parachute
          Hélico
          Fusée
          ULM
          NEWOBJ // This is where the new object would have to be inserted
          avion // Another level
                    militaire
                    tourisme
                    civil

Similar Messages

  • Database Level Tracing or Instance Level Tracing

    Hello,
    How do I know whether database-level tracing or instance-level tracing is enabled? This is on 10g R2.
    Thanks,
    R

    I am not sure that I have heard about instance-level tracing, but normally tracing is enabled either through the SQL_TRACE parameter set in the parameter file or through some trace event. So you need to check your parameter file for any such setting.
    HTH
    Aman....
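    For example, a minimal sketch of checking and enabling SQL trace (standard parameter and trace event; treat the exact event level as an assumption to verify for your version):
    SHOW PARAMETER sql_trace
    -- basic SQL trace for the current session:
    ALTER SESSION SET sql_trace = TRUE;
    -- or the richer 10046 event (level 12 = binds + waits):
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
    -- trace files are written to the directory named by user_dump_dest
    SHOW PARAMETER user_dump_dest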

  • What level of supplemental logging is required to set up Streams at schema level

    Hi,
    Working on setting up Streams from a 10g to an 11g db at schema level. The session is hanging on the statement "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" while running the following command, generated using DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS.
    BEGIN
      dbms_streams_adm.add_schema_rules(
        schema_name        => '"DPX1"',
        streams_type       => 'CAPTURE',
        streams_name       => '"CAPTURE_DPX1"',
        queue_name         => '"STRMADMIN"."CAPTURE_QUEUE"',
        include_dml        => TRUE,
        include_ddl        => TRUE,
        include_tagged_lcr => TRUE,
        source_database    => 'DPX1DB',
        inclusion_rule     => TRUE,
        and_condition      => get_compatible);
    END;
    The generated script is also setting up each table with table-level logging: 'ALTER TABLE "DPX1"."DEPT" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, FOREIGN KEY, UNIQUE INDEX) COLUMNS'.
    So my question is: is database-level supplemental logging required to set up schema-level replication? If the answer is no, then why is the generated script invoking the "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" command?
    Thanks in advance.
    Regards,
    Sridhar

    Hi Sridhar,
    From what I found, "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" is required for the first capture you create in a database. Once it has been run, you'll see V$DATABASE with the column SUPPLEMENTAL_LOG_DATA_MIN set to YES. It requires a strong level of locking - for example, you cannot run this ALTER DATABASE while an index rebuild is running (maybe a rebuild online?).
    I know it is called implicitly by DBMS_STREAMS_ADM.add_table_rules for the first rule created.
    So, you can just run the statement once in a maintenance window and you'll be all set.
    Minimal Supplemental Logging - http://www.oracle.com/pls/db102/to_URL?remark=ranked&urlname=http:%2F%2Fdownload.oracle.com%2Fdocs%2Fcd%2FB19306_01%2Fserver.102%2Fb14215%2Flogminer.htm%23sthref2006
    NOT to be confused with database level supplemental log group.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/mon_rep.htm#BABHHCCC
    Hope this helps,
    Regards,
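    As a sketch, the one-time statement and the check described above (run as SYSDBA during a maintenance window):
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    -- verify:
    SELECT supplemental_log_data_min FROM v$database;  -- YES once it has been run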

  • Table level supplemental logging

    How is table-level supplemental logging different from database-level supplemental logging? Is database-level supplemental logging required for enabling table-level supplemental logging?
    I have done 3 test cases, please suggest!
    Case 1
    Enabled only DB-level supplemental logging (SL)
    observations --->
    DML on all tables can be tracked with LogMiner.
    I find this perfect.
    Case 2
    Enabled only table-level supplemental logging
    Setting ---->
    2 tables --- AAA (with table-level SL) & BBB (without table-level SL)
    Only DDL is recorded with the help of LogMiner & a few of the operations are listed as internal.
    Case 3
    Enabled database-level SL first & then enabled table-level SL only on one table ---> AAA, & no table-level SL on BBB
    observation ---> DDL & DML on all the tables are getting tracked. The point is, if this gets the same result
    as DB-level SL, what is the significance of enabling table-level SL? Or am I missing something?

    I have the same experience: when database-level supplemental logging is enabled, adding supplemental logging at the table level does not affect functionality or performance.  Inserting 1 M rows into a test table takes 25 sec (measured on the target database) with table-level supplemental logging, and 26 sec without it.  My GoldenGate version is 11.2, Oracle database version 11.2.0.3.0.
    If someone can show the benefit of having table-level supplemental logging in addition to database-level logging, I would very much appreciate it.
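    For reference, a sketch of the two statements being compared (schema and table names hypothetical):
    -- database level:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    -- table level, one table only:
    ALTER TABLE scott.aaa ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;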

  • Super 1.5 - source code level tracing for EJB, JSP and others

     

    Would you want to try the new installation for Super 1.6?
    Please visit www.acelet.com
    Thanks.
    "Dominique Jean-Prost" <[email protected]> wrote:
    If only your installation tool was easy to use ...
    dom
    "Wei Jiang" <[email protected]> a écrit dans le message news:
    [email protected]...
    Super supports source code level tracing for Java and JSP!
    Announcement: Super 1.5 - an EJB/J2EE monitoring tool with
    SuperPeekPoke
    SuperLogging
    SuperStress
    SuperEnvironment
    It is free for development.
    You can anonymously download it from:
    http://www.acelet.com.
    Super is a component-based administration tool for EJB/J2EE.
    It provides built-in functionality as well as extensions, such as SuperComponents. Users can install SuperComponents onto it, or uninstall them from it.
    Super has the following functions:
    * A J2EE/EJB monitor.
    * A gateway to EJB servers from different vendors.
    * A framework holding user defined SuperComponents.
    * A PeekPoke tool to read/write attributes from EJBs.
    * A full-featured logging/tracing tool for centralized, chronological logging.
    * A Stress test tool.
    * A global environment tool.
    It is written in pure Java.
    The current version supports:
    * Universal servers.
    * Weblogic 5.1
    * Weblogic 6.0
    What is new:
    Version 1.50 August, 2001
    Enhancement:
    1. Source code level tracing supports EJB, JSP, Java helpers and other
    programs which are written in native languages (as long as you
    write correct log messages in your application).
    2. Redress supports JSP now.
    3. New installation with full help document: hope it will be easier.
    4. Support WebSphere 4.0
    Version 1.40 June, 2001
    Enhancement:
    1. Add SuperEnvironment, which is a Kaleidoscope with TableView, TimeSeriesView
    and PieView for GlobalProperties.
    GlobalProperties is an open source program from Acelet.
    2. SuperPeekPoke adds Kaleidoscope with TableView, TimeSeriesView and PieView.
    Changes:
    1. The structure of the log database changed. You need to delete the old installation and
    install everything new.
    2. The format of the time stamp of SuperLogging changed. It is no longer locale-dependent:
    better for report utilities.
    3. The time stamp of SuperLogging added the machine name: better for clustering environments.
    Bug fix:
    1. Under JDK 1.3, when you close the Trace Panel, the timer may not be stopped
    and the Style Panel may not show up.
    Version 1.30 May, 2001
    Enhancement:
    1. Add ConnectionPlugin support.
    2. Add support for Borland AppServer.
    Version 1.20 April, 2001
    Enhancement:
    1. Redress with option to save a backup file
    2. More data validation on Dump Panel.
    3. Add uninstall for Super itself.
    4. Add Log Database Panel for changing the log database parameters.
    5. Register Class: you can type in name or browse on file system.
    6. New tour with new examples.
    Bug fix:
    1. Redress: save file may fail.
    2. Install Bean: some may fail due to a missing manifest file. Now they are treated
    as foreign beans.
    3. Installation: both installServerSideLibrary and installLogDatabase can work
    on the original file; no need to copy to a temporary directory anymore.
    4. PeekPoke: if there is no stub available, the JNDI list would be empty for
    Weblogic 5-6.
    Now it picks up all available ones and gives warning messages.
    5. Stress: Launch>Save>Cancel generated a null pointer exception.
    Changes:
    1. installLogDatabase has been changed from .zip file to .jar file.
    2. SuperLogging: if the log database is broken, the log methods will not try to
    access the log database. It is consistent with the document now.
    3. SuperLogging will not read system properties now. You can put log database
    parameters in SuperLoggingEJB's deployment descriptor.
    Version 1.10 Feb., 2001
    Enhancement:
    1. Re-written PeekPoke with Save/Restore functions.
    2. New SuperComponent: SuperStress for stress test.
    3. Set a mark at the highlighted line on the Source Code
    Panel (as a workaround for JDK 1.3).
    4. Add support for WebLogic 6.0
    Bug fix:
    1. Uninstall bean does physically delete the jar file now.
    2. WebLogic51 Envoy may not always list all JNDI names. This is fixed.
    Version 1.00 Oct., 2000
    Enhancement:
    1. Support Universal server (virtually all EJB servers).
    2. Add Lost and Found for JNDI names, in case you need it.
    3. The JNDI ComboBox is editable now, so you can PeekPoke JNDI names that are not listed
    (mainly for Envoys which do not support JNDI lists).
    Version 0.90: Sept, 2000
    Enhancement:
    1. PeekPoke supports arbitrary objects (except for Vector, Hashtable
    and the like) as input values.
    2. Reworked help documents.
    Bug fix:
    1. Clicking the Cancel button on the Pace Panel set the pace to 0, causing
    further time-outs.
    2. MDI related bugs under JDK 1.3.
    Version 0.80: Aug, 2000
    Enhancement:
    1. With full-featured SuperLogging.
    Version 0.72: July, 2000
    Bug fix:
    1. Ignore unknown objects, so Weblogic 5.1 can show the JNDI list.
    Version 0.71: July, 2000
    Enhancement:
    1. Re-worked peek algorithm, doing better for concurrent use.
    2. Add cancellable Wait dialog, showing Super is busy.
    3. Add Stop button on Peek Panel.
    4. Add undeploy example button.
    Bug fix:
    1. Deletion on the Peek Panel may cause an error under JDK 1.3. Now it works for
    both 1.2 and 1.3.
    Version 0.70: July, 2000
    Enhancement:
    1. PeekPoke EJBs without programming.
    Bug fix:
    1. Did not show many windows under JDK 1.3. Now it works for both 1.2 and 1.3.
    Changes:
    1. All changes are backward compatible, but you may need to recompile monitor
    windows defined by you.
    Version 0.61: June, 2000
    Bug fix:
    1. The first time, if you choose BUFFER as the logging device, messages will not
    show.
    2. Fixed LoggingPanel related bugs.
    Version 0.60: May, 2000
    Enhancement:
    1. Add DATABASE as a logging device for persistent logging message.
    2. Made alertInterval configurable.
    3. Made pace for tracing configurable.
    Bug fix:
    1. Fixed many bugs.
    Version 0.51, 0.52 and 0.53: April, 2000
    Enhancement:
    1. Add support to Weblogic 5.1 (support for Logging/Tracing and
    user defined GUI window, not support for regular monitoring).
    Bug fix:
    1. Context-sensitive help is available for most windows: press F1.
    2. Fix installation-related problems.
    Version 0.50: April, 2000
    Enhancement:
    1. Use JavaHelp for help system.
    2. Add shutdown functionality for J2EE.
    3. Add support to Weblogic 4.5 (support for Logging/Tracing and
    user defined GUI window, not support for regular monitoring).
    Bug fix:
    1. Better exception handling for null Application.
    Version 0.40: March, 2000
    Enhancement:
    1. New installation program; solves installation-related problems.
    2. Installation deploys AceletSuperApp application.
    3. Add deploy/undeploy facilities.
    4. Add EJB and application lists.
    Change:
    1. SimpleMonitorInterface: now simpler.
    Version 0.30: January, 2000
    Enhancement:
    1. Add realm support to J2EE
    2. Come with installation program: you just install what you want
    the first time you run Super.
    Version 0.20: January, 2000
    Enhancement:
    Add support to J2EE Sun-RI.
    Change:
    1. Replace logging device "file" with "buffer" to be
    compliant with EJB 1.1. Your code does not need to change.
    Version 0.10: December, 1999
    Enhancement:
    1. provide SimpleMonitorInterface, so GUI experience is
    not necessary for developing most monitoring applications.
    2. Sortable table for table based windows by mouse
    click (left or right).
    Version 0.01: November, 1999
    1. Bug fix: An exception thrown when log file is large.
    2. Enhancement: Add tour section in Help information.
    Version 0.00: October, 1999
    Thanks.

  • Database Level Security not working ???

    The 10g (10.1.2.1) documentation states the following:
    Chapter 7 Controlling access to information:
    "Regardless of the access permissions and task privileges that you set in Discoverer Administrator, a Discoverer end user only sees folders if that user has been granted the following database privileges (either directly or through a database role):
    ex: SELECT privilege on all the underlying tables used in the folder "
    So how come a folder (a view in my case, not a table) that cannot be queried directly by a user still shows up as a choice when building a report using Plus? Am I misreading the above? It sounds to me like if the user account does not have the SELECT privilege, then they will not see the folder in Discoverer?
    Has anyone run into the same issue, or have an explanation?
    thanks
    OBX

    I think the user has access to see all the folders in the business area in Discoverer if he has permission to do so. This is Discoverer-level security, to filter out people who should not have access to the business area at all. You'll find that although they can see these Discoverer folders because the permission is set in Discoverer Administrator, the database tables they are based on will not allow the users to see any of the data if they don't have those rights at the database level.
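    In other words, data only comes back once something like the following has been granted at the database level (owner, view and user names hypothetical):
    GRANT SELECT ON eul_owner.sales_view TO disco_user;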

  • Set value of hidden database item

    I have a simple non-tabular form for creating rows in a table. I need to set the value of a hidden database item to the same value as a database item which the user enters.
    I know this can be done by a database trigger and also by a custom update process, but is there another (simpler) way to do this?
    Vincent

    Vincent,
    By "hidden database item", I'm assuming that you mean a column in a table. By "database item which the user enters", I'm assuming that you mean a page item on your form. If that's correct, just edit that item in the builder, making its source type 'Database Column' and its source the column name, and the automatic row fetch and DML processes on the page will take care of the rest. If you mean something else, let us know.
    Scott
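    For completeness, if the trigger route were used instead, a minimal sketch (table and column names hypothetical):
    CREATE OR REPLACE TRIGGER trg_copy_item
      BEFORE INSERT ON my_table
      FOR EACH ROW
    BEGIN
      -- copy the user-entered column into the hidden column
      :NEW.hidden_col := :NEW.entered_col;
    END;
    /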

  • Problem building logical standby database

    Hi all,
    I am trying to build a logical standby database on platform Sun OS 10 / Oracle 10gR2. I am following the Oracle document http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ls.htm#BEIGHEIA
    I have created a physical standby database and am converting it to a logical standby database. I ensured that my physical standby is in sync with the primary.
    The procedure DBMS_LOGSTDBY.BUILD executes successfully on the primary.
    The problem is that the command *'alter database recover to logical standby test;'* DOESN'T END. There is no error in the archive log. I have identified the archived redo log that contains the LogMiner dictionary and the starting SCN, and applied that archive log on the standby. Still the above command doesn't end.
    Any Help is appreciated.

    SQL> alter database recover to logical standby m2test;
    This command doesn't return to the SQL> prompt. The alert log says it is waiting for log sequence 25. The command has been running for more than 5 hours, but still has not completed.
    Alertlog:
    Thu Feb 5 22:14:25 2009
    alter database recover to logical standby m2test
    Thu Feb 5 22:14:25 2009
    Media Recovery Start: Managed Standby Recovery (mtest)
    Thu Feb 5 22:14:25 2009
    Managed Standby Recovery not using Real Time Apply
    parallel recovery started with 2 processes
    Media Recovery Waiting for thread 1 sequence 25
    The document says:
    If a dictionary build is not successfully performed on the primary database, this command will never complete.
    But the dictionary build on primary is successful.
    SQL> execute dbms_logstdby.build;
    PL/SQL procedure successfully completed.
    I used the following queries to find which archive log contains the dictionary build, and made sure that archive log sequence 22 is applied on the standby.
    SQL> SELECT NAME FROM V$ARCHIVED_LOG
         WHERE (SEQUENCE# = (SELECT MAX(SEQUENCE#)
                             FROM V$ARCHIVED_LOG
                             WHERE DICTIONARY_BEGIN = 'YES' AND STANDBY_DEST = 'NO'));
    NAME
    /oradata/mtest/archive/mtest_1_22_677975686.arc
    SQL> SELECT MAX(FIRST_CHANGE#) FROM V$ARCHIVED_LOG
         WHERE DICTIONARY_BEGIN = 'YES';
    MAX(FIRST_CHANGE#)
    177407
    SQL>
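    A sketch of the kind of check that helps here (standard views; the file name for sequence 25 is hypothetical, patterned on the sequence-22 name above):
    -- is everything from the dictionary build onwards registered and applied?
    SELECT sequence#, applied, archived FROM v$archived_log
    WHERE sequence# >= 22 ORDER BY sequence#;
    -- if a needed file was never registered on the standby:
    ALTER DATABASE REGISTER LOGFILE '/oradata/mtest/archive/mtest_1_25_677975686.arc';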

  • Standby database SRL & Online logs

    Hi,
    I have just tried my hand at building a physical standby database in Oracle 10gR2 using RMAN. I will detail the steps that I performed before asking my question.
    I configured every prerequisite and I did not create any SRLs on the primary before building the standby database. I am using LGWR ASYNC for redo transmission. I have configured FAL_CLIENT and FAL_SERVER. The protection mode is MAX PERFORMANCE and it is on Solaris 10 x86_64.
    1. Took an RMAN full backup
    and created a standby control file as
    SQL> alter database create standby controlfile as '/tmp/standby.ctl';
    2. On another server, I copied the pfile, standby controlfile (renamed it) from primary and mounted the database.
    sqlplus / as sysdba
    SQL> startup mount pfile='...';
    rman target /
    RMAN> restore database;
    SQL> alter database recover managed standby database disconnect from session;
    Everything worked and MRP was applying the archived logs as they were received from the primary. But I have seen SRLs created with default names on the primary database & standby database by Oracle, even though I did not explicitly create them. Is this normal behaviour? I saw them using v$standby_log.
    A physical standby database will not use any ONLINE REDO LOGS, and I haven't created any with the procedure I used. I performed a SWITCHOVER, which worked without any problem. My question here is:
    1. How did Oracle open the database when there were no redo logs physically present on the standby site? Is this normal behavior in a standby environment, where Oracle creates ONLINE REDO LOG files for a standby database being transitioned to primary whenever a SWITCHOVER or FAILOVER occurs? If this is the case, is it correct that Oracle will take the LOG SEQUENCE from the last applied ARCHIVED LOG and will start the ONLINE LOG from that sequence?
    Please correct me if I have understood anything wrong here or if I have configured anything wrong. But with the above configuration the standby database worked perfectly well and the switchover was successful too.
    Thanks,
    Harris.

    Correction: I have not created the standby database using RMAN but only performed a FULL backup, which I restored before starting the MRP.
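    For what it's worth, standby redo logs can also be created explicitly; a minimal sketch (the 50M size is an assumption and should match the online redo log size):
    ALTER DATABASE ADD STANDBY LOGFILE SIZE 50M;
    -- check:
    SELECT group#, bytes, status FROM v$standby_log;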

  • Schema level and table level supplemental logging

    Hello,
    I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at database level by running this command:
    SQL>alter database add supplemental log data (primary key) columns;
    Database altered.
    SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
    SUPPLEMENTAL_LOG_DATA_MIN   SUPPLEMENTAL_LOG_DATA_PK   SUPPLEMENTAL_LOG_DATA_UI
    IMPLICIT                    YES                        NO
    My question is: should I enable supplemental logging at table level also (for DML replication only)? Should I run the below command as well?
    GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    Successfully logged into database.
    GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
    What is the difference between schema-level and table-level supplemental logging?

    For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
    1. Primary key
    2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based
    columns, and no nullable columns
    3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based
    columns, but can include nullable columns
    4. If none of the preceding key types exist (even though there might be other types of keys
    defined on the table) Oracle GoldenGate constructs a pseudo key of all columns that
    the database allows to be used in a unique key, excluding virtual columns, UDTs,
    function-based columns, and any columns that are explicitly excluded from the Oracle
    GoldenGate configuration.
    The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that
    is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
    When to use ADD TRANDATA for an Oracle source database
    Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature.
    If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
    ● You can stop DML activity on any and all tables before users or applications perform DDL on them.
    ● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
    ❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP
    statement.
    ❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
    ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
    You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
    Additional requirements when using ADD TRANDATA
    Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and
    chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL
    statement:
    SQL> alter database add supplemental log data;
    To verify that supplemental logging is enabled at the database level, issue the following statement:
    SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
    If you require more details, refer to the Oracle® GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0).
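    As a sketch, the schema-level alternative looks like this in GGSCI (schema name hypothetical):
    GGSCI (db1) 3> ADD SCHEMATRANDATA scott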

  • Row-level security at the Database level

    We need row-level security at the database level, where the user who logs in to Crystal Reports should be able to fetch only those rows from the database that he is entitled to see. For this, the login name of the user is passed to a stored procedure, which sets the context of the DB session and restricts the data retrieved.
    We are not looking for row-level security where the data is first retrieved and then filtered based on the user login name. However, we are definitely looking for a way to set a context for a database session based on the user login name, even before we start fetching data. So effectively, the user who logs in will fetch only those rows which he is supposed to see.
    Issue:
    We face a problem: we are not able to pass a variable to the database stored procedure to set the context ('BOUSER' works for BO, whereas 'CurrentCEUserName' for Crystal Reports doesn't).
    Please let us know if we can use the 'CurrentCEUserName' variable in Crystal in the same way as 'BOUSER' is used in ConnectInit for BO. We would like to know how we can pass any variable in Crystal Reports that holds the user login information to a stored procedure.
    Also, please suggest alternate ways to achieve this security restriction, if any.
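    A minimal sketch of the context approach described above, assuming an Oracle back end (all names hypothetical):
    CREATE CONTEXT report_ctx USING report_sec;
    CREATE OR REPLACE PACKAGE report_sec AS
      PROCEDURE set_user(p_login IN VARCHAR2);
    END;
    /
    CREATE OR REPLACE PACKAGE BODY report_sec AS
      PROCEDURE set_user(p_login IN VARCHAR2) IS
      BEGIN
        -- called from the report connection's init string with the login name
        DBMS_SESSION.SET_CONTEXT('report_ctx', 'login', p_login);
      END;
    END;
    /
    -- views can then filter on SYS_CONTEXT('report_ctx', 'login')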

    Hi
    A previous database had a personnel table with their station name, district and region, and a field holding their logon name.  We also had an activity table with fields referring to the activity, and fields for the station, district and region it occurred in.
    By linking the individual rows in the activity table to the personnel table on the station name field, we then used CurrentCEUserName to filter on the personnel.  This returned only the records in the activity table where the station the activity took place at was the same as the station associated with the logged-on personnel.
    The additional bonus was that if we linked it on district or region we had the same result but at a greater level, i.e. all activity in the logged-on personnel's district or, if linked on region, their region.
    The personnel table was maintained by the system administrators, so maintenance was low.
    I hope this helps.
    Kevin

  • Database level settings

    Please advise database-level settings for all our databases for the following items:
    • Virtual Log Files
    • Database file growth settings
    - And suggest the best practices around these items that we should follow for future new databases. What are the different things to consider?
    - And also, what would we need to do before making these changes on current databases?
    Thanks,

    Can you refer to the below links:
    https://www.simple-talk.com/sql/database-administration/sql-server-database-growth-and-autogrowth-settings/
    http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/
    --Prashanth
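    As a starting point, a minimal T-SQL sketch (database and file names hypothetical; a fixed-size growth increment rather than a percentage is the usual recommendation):
    -- count VLFs in the current database:
    DBCC LOGINFO;
    -- set a fixed autogrowth increment instead of a percentage:
    ALTER DATABASE YourDb
      MODIFY FILE (NAME = YourDb_log, FILEGROWTH = 256MB);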

  • Backup Not Starting for 'Whole database offline + redo log backup' @ DB13

    Hi Experts,
    I am not able to perform 'Whole database offline + redo log backup' by DB13.
    I have recently configured my 'init<SID>.sap' to take 'Whole database online + redo log backup' and it's working perfectly fine.
    I tried taking a test backup for 'Whole database offline + redo log backup', but it didn't even start.
    Thus I created another profile with the name init<SID>back.sap and changed the parameter
    from 'backup_type = online' to 'backup_type = offline', and also tried 'backup_type = offline_force',
    with the rest of the parameters being the same as in the profile init<SID>.sap.
    Kindly suggest, as I need to set the backup strategy as Mon-Fri -> 'Whole database offline + redo log backup' and Sat -> 'Whole database offline + redo log backup'.
    One more query: while taking the redo log backup via DB13, why is it that sometimes it only saves the files and sometimes it
    saves and then deletes the files from the '/oracle/<SID>/oraarch' location? Please throw some light on this matter also.
    Thanks,
    Jitesh

    Hi Mr Bhavik,
    Thanks for your reply. Here are the details you asked for.
    1. My SAP BASIS patch level is 10. (We shall be updating it by the end of this year.)
    2. BR*Tools version is:
    BRTOOLS   7.00 (11)
    kernel release    700
    patch level   11
    3. I don't have any file with the name alert<dbsid>.log (located at /oracle/<SID>/saptrace/background/), but I do have alert_<SID>.log.
    I executed the command more -p G alert_JMD.log
    after my 'Whole database offline + redo log backup' again failed at DB13, but I was not able to see any specific complaints while executing the above action.
    I got the detailed error log in DB13 as:
    Detail log:                    beeneedv.aft
    BR0051I BRBACKUP 7.00 (20)
    BR0055I Start of database backup: beeneedv.aft 2010-11-08 13.16.43
    BR0484I BRBACKUP log file: /oracle/JMD/sapbackup/beeneedv.aft
    BR0280I BRBACKUP time stamp: 2010-11-08 13.16.43
    BR0261E BRBACKUP cancelled by signal 13
    BR0056I End of database backup: beeneedv.aft 2010-11-08 13.16.44
    BR0280I BRBACKUP time stamp: 2010-11-08 13.16.45
    BR0054I BRBACKUP terminated with errors
    4. No, I have not yet tried to 'execute such offline + redo log backups using brback command'; will try and post it definitely.
    5. Query : select grantee, granted_role from dba_role_privs;
    result :
    SQL> select grantee, granted_role from dba_role_privs;
    GRANTEE                        GRANTED_ROLE
    SYS                            SAPDBA
    SYS                            EXP_FULL_DATABASE
    SYS                            CONNECT
    IMP_FULL_DATABASE              SELECT_CATALOG_ROLE
    DBSNMP                         OEM_MONITOR
    SAPSR3                         CONNECT
    OPS$SAPSERVICEJMD              SAPDBA
    SYS                            SELECT_CATALOG_ROLE
    DBA                            DELETE_CATALOG_ROLE
    DBA                            EXECUTE_CATALOG_ROLE
    SYSTEM                         DBA
    OPS$ORAJMD                     SAPDBA
    SAPDBA                         GATHER_SYSTEM_STATISTICS
    SYS                            SCHEDULER_ADMIN
    SYS                            AQ_USER_ROLE
    SYS                            GATHER_SYSTEM_STATISTICS
    SYS                            DELETE_CATALOG_ROLE
    DBA                            GATHER_SYSTEM_STATISTICS
    DBA                            IMP_FULL_DATABASE
    EXECUTE_CATALOG_ROLE           HS_ADMIN_ROLE
    IMP_FULL_DATABASE              EXECUTE_CATALOG_ROLE
    OPS$JMDADM                     CONNECT
    SYS                            LOGSTDBY_ADMINISTRATOR
    SYS                            EXECUTE_CATALOG_ROLE
    SYS                            RESOURCE
    DBA                            SCHEDULER_ADMIN
    DBA                            SELECT_CATALOG_ROLE
    EXP_FULL_DATABASE              EXECUTE_CATALOG_ROLE
    SAPDBA                         SELECT_CATALOG_ROLE
    SYS                            SAPCONN
    SYS                            OEM_ADVISOR
    SYS                            IMP_FULL_DATABASE
    SELECT_CATALOG_ROLE            HS_ADMIN_ROLE
    OUTLN                          RESOURCE
    LOGSTDBY_ADMINISTRATOR         RESOURCE
    SAPSR3                         RESOURCE
    OPS$SAPSERVICEJMD              RESOURCE
    SYS                            RECOVERY_CATALOG_OWNER
    DBA                            EXP_FULL_DATABASE
    EXP_FULL_DATABASE              SELECT_CATALOG_ROLE
    TSMSYS                         RESOURCE
    OPS$ORAJMD                     RESOURCE
    SAPCONN                        SELECT_CATALOG_ROLE
    SYS                            OEM_MONITOR
    SYS                            AQ_ADMINISTRATOR_ROLE
    SYS                            DBA
    SYSTEM                         AQ_ADMINISTRATOR_ROLE
    OPS$ORAJMD                     CONNECT
    OPS$JMDADM                     SAPDBA
    OPS$JMDADM                     RESOURCE
    SAPSR3                         SAPCONN
    SYS                            HS_ADMIN_ROLE
    SYSTEM                         SAPDBA
    OPS$SAPSERVICEJMD              CONNECT

  • Refreshing mview is hanging after a database level gather stats

    Hi guys,
    Can you please help me identify the root cause of this issue?
    The scenario is this:
    1. We have a scheduled Unix job that refreshes an mview every day, from Tuesday to Saturday.
    2. Database maintenance is done during weekends (Sundays), gathering stats at the database level.
    3. The refresh mview Unix job apparently hangs every Tuesday.
    4. Our workaround is to kill the job, request a schema-level gather stats, then re-run the job. And voila, the mview refresh then succeeds.
    5. For the rest of the week through Saturday, the mview refresh has no problems.
    We already identified during testing that the scenario where the mview refresh fails is after we gather stats at the database level;
    after gathering stats at the schema level, the refresh succeeds.
    Can you please help me understand why the mview refresh fails after we gather stats at the database level?
    We are using Oracle 9i.
    The creation of the mview goes something like below:
    create materialized view hanging_mview
    build deferred
    refresh on demand
    query rewrite disabled
    Appreciate all your help.
    Thanks a lot in advance.

    You know Tuesday's MV refresh "hangs".
    You don't know why it does not complete.
    You desire a solution so that it does complete.
    You don't really know what it is doing on Tuesdays, but hope an automagical solution will be offered here.
    The ONLY way I know how to possibly get some clues is SQL_TRACE.
    Only after knowing where time is being spent will you have a chance to take corrective action.
    The ball is in your court.
    Enjoy your mystery!
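    In that spirit, a minimal sketch of tracing the refresh session (the mview name comes from the post above; the event level is an assumption):
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    EXEC DBMS_MVIEW.REFRESH('HANGING_MVIEW');
    -- then run tkprof on the trace file written to user_dump_dest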

  • Error while Assigning database level role (db_datareader) to SQL login (Domain Account)

    Team,
    I got an error while creating a user for a domain account. Below is the screenshot of the error (error 15401).
    The database instance is on SQL 2000 SP3. (I know it is out of support, but the customer is reluctant to upgrade.)
    Searching Google, I found the below article, which best matches this error:
    http://support.microsoft.com/kb/324321
    I have followed each troubleshooting step, but still the issue persists.
    Step 1. The login does not exist == The login does very much exist in the domain, as I am able to add the same domain ID to other database instances.
    Step 2. Duplicate security identifiers == I have used this query to find duplicate SIDs:
    /*  SELECT name FROM syslogins WHERE sid = SUSER_SID ('YourDomain\YourLogin') */
    But there was only one row returned, with a create date of today.
    Step 3. Authentication failure == The domain is available. The user is able to log in on other servers via RDP connection.
    Step 4. Case sensitivity == The database collation is set to case-insensitive (CI).
    The other two, 5. Local Accounts & 6. Name resolution == not applicable to me.
    I tried other ways also.
    A. Creating the login and providing permission in one go == the user account is not created.
    B. Instead of the GUI, using a query to create the login and provide the required permission == same error.
    Has anybody faced such a situation?
    Chetan

    See the below output:
    srvid       NULL
    sid         0x010500000000000515000000A1F66E1BFC1DC75D26E72530A2B80400
    xstatus     14
    xdate1      20:25.9
    xdate2      57:33.4
    name        UKBAA\LHRAPPMuttavarapuS
    password    NULL
    dbid        1
    language    us_english
    isrpcinmap  0
    ishqoutmap  0
    selfoutmap  0
    Chetan
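    For reference, a minimal sketch of route B on SQL Server 2000 (domain, login and database names hypothetical):
    -- create the Windows login at the server level:
    EXEC sp_grantlogin 'YourDomain\YourLogin';
    GO
    USE YourDb
    GO
    -- add the login as a database user, then grant the role:
    EXEC sp_grantdbaccess 'YourDomain\YourLogin';
    EXEC sp_addrolemember 'db_datareader', 'YourDomain\YourLogin';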
