Need info related to log files

Hi All,
If a person takes a backup of table metadata from SQL Developer, is that information stored in the logs?
To be specific, is there any logging mechanism at the Oracle server which records that table metadata has been accessed from a particular machine by a particular user?
If yes, where are such logs located?
Thanks in advance,
---Eden

The archived redo logs record changes to the database for recovery purposes and are not human readable.
You may be able to achieve what you want (whatever it is) using the auditing features, but first you need to get your knowledge of Oracle up to a reasonable level. Read the documentation, starting with the Concepts guide.
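As a hedged sketch of what standard object auditing looks like (the table scott.emp is only an example, and exactly what gets recorded when a tool extracts metadata depends on how that tool reads the dictionary):

ALTER SYSTEM SET audit_trail=DB SCOPE=SPFILE;  -- takes effect after a restart
AUDIT SELECT ON scott.emp BY ACCESS;
-- later: who read the object, and from which machine
SELECT username, userhost, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE obj_name = 'EMP';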

Similar Messages

  • How to suppress OK/INFO - 1200658 in log files

    Hi Guru's,
    I want to suppress the kind of messages shown below from the log files; this kind of info was inflating the log file size, and I would like to keep the log cleaner.
    OK/INFO - 1200658 - The formula for member [SU.XXXX] is Complex. If possible, add a non-empty directive to optimize for sparse data..
    OK/INFO - 1200658 - The formula for member [SU.XYYY] is Complex. If possible, add a non-empty directive to optimize for sparse data..
    OK/INFO - 1200658 - The formula for member [RL.AAAA] is Complex. If possible, add a non-empty directive to optimize for sparse data..
    OK/INFO - 1200658 - The formula for member [SU.BBBB] is Complex. If possible, add a non-empty directive to optimize for sparse data..
    OK/INFO - 1200658 - The formula for member [RL.CCCC] is Complex. If possible, add a non-empty directive to optimize for sparse data..
    OK/INFO - 1200658 - The formula for member [SU.DDDD] is Complex. If possible, add a non-empty directive to optimize for sparse data..
    What settings do I have to change? Any help on this will be appreciated.
    Thanks,
    Sai

    http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_techref/homecfgc_logerr.htm
    Check the AGENTDISPLAYMESSAGELEVEL and AGENTLOGMESSAGELEVEL settings.
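    A hedged sketch of the corresponding essbase.cfg lines, assuming WARNING is an accepted level for both settings (raising the level should drop informational messages; the server has to be restarted to pick the change up):
    AGENTDISPLAYMESSAGELEVEL WARNING
    AGENTLOGMESSAGELEVEL WARNING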

  • Need to copy archive log file "arch1_601.dbf" when restore database?

    Hi all,
    I have the following case:
    1) Full hot backup today (Feb-12-2009) in directory /u03/db/backup
    2) But ongoing archive logs are generated in directory /u02/oracle/uat/uatdb/9.2.0/dbs
    3) I need to restore the database because some data files are missing.
    4) Use today's full backup to restore.
    5) Do I need to copy all archive log files from /u02/oracle/uat/uatdb/9.2.0/dbs to /u03/db/backup, given that "/u03/db/backup/RMAN" is the restore directory?
    FAN

    Here is the backup script:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u02/db/backup/RMAN/%F.bck';
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    run {
      allocate channel ch1 type disk format '/u02/db/backup/RMAN/backup_%d_%t_%s_%p_%U.bck';
      backup incremental level 1 cumulative database plus archivelog delete all input;
      backup current controlfile;
      backup spfile;
      release channel ch1;
    }
    allocate channel for maintenance type disk;
    delete noprompt obsolete;
    delete noprompt archivelog all backed up 2 times to disk;
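    A hedged sketch of the restore side: RMAN reads archived log locations from the control file, so the archived logs do not need to be copied into the backup directory before a restore; if a log really has been moved, it can be re-registered first:
    RMAN> catalog archivelog '/u02/oracle/uat/uatdb/9.2.0/dbs/arch1_601.dbf';
    RMAN> restore database;
    RMAN> recover database;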

  • Need information from the .log file

    Hi,
    This is a sample log file:
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    ODBC Database termination - started.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    ODBC Database termination - completed.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    ODBC Database termination - started.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    ODBC Database termination - completed.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    ODBC Database termination - started.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    ODBC Database termination - completed.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    ODBC Database termination - started.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    ODBC Database termination - completed.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/0/Build-EIS93130B006
    Terminating Analytic Services API.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Informational/0/Build-EIS93130B006
    Terminated Analytic Services API.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Trace/1051037/Build-EIS93130B006
    Terminating Analytic Services API.
    [Tue May 24 10:27:53 2011] /IS//0x0/1306229361/Informational/1051037/Build-EIS93130B006
    Terminated Analytic Services API.
    [Tue May 24 10:27:53 2011] /IS/Coordinator/0/Informational/0/Build-EIS93130B006
    Executed client request 'Logout' in 0 seconds
    [Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
    Coordinator Service is waiting...
    [Tue May 24 10:27:53 2011] /IS/Listener/0/Trace/1051001/Build-EIS93130B006
    Service is waiting for client request...
    [Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
    Service is busy processing 'The service is now available.'.
    [Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
    Coordinator Service is waiting...
    [Tue May 24 10:27:53 2011] /IS/Listener/0/Trace/1051001/Build-EIS93130B006
    Received client request Disconnect
    [Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
    Service is busy processing 'The service is busy.'.
    [Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
    Coordinator Service is waiting...
    [Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
    Service is busy processing 'The service is now available.'.
    [Tue May 24 10:27:53 2011] /IS/Coordinator/0/Trace/0/Build-EIS93130B006
    Coordinator Service is waiting...
    [Tue May 24 10:27:53 2011] /IS/Listener/0/Trace/1051001/Build-EIS93130B006
    Waiting for Essnet client connections.
    What is this number 'EIS93130B006' present in this line, and how do I get the number for a particular metamodel and metaoutline?
    Regards
    Shakti

    Hi Shakti
    Tagging on to what Glenn is saying (FWIW I agree with his theory): what Glenn means is that the number EIS93130B006 translates to
    EIS v9.3.1.3 (the "EIS9313" part), Build 6 (the "B006" part).
    These are debug log statements coming out of EIS; "Trace" is generally the most verbose and chatty level of logging, so everything any developer has ever written to be logged will show up in the log if the log level is at TRACE. You might want to check the logging level of your EIS and raise it to something akin to "Warning" to eliminate these log entries.
    Regards,
    Robb Salzmann

  • Reading info from a .log file

    Hey, I have a bunch of files that I'm trying to read through for specific keywords. The problem is that, for some reason, I can't seem to parse a file that has the .log extension: if I have a file text.log, it won't read through it to find the info. I was wondering if anyone could tell me what I'm doing wrong and how to correct it. Here is the current code I have:
    public class LogChecker {
         /** @param args */
         public static void main(String[] args) {
              File directory = new File("C:\\MyDirectory\\");
              String[] fileNameList = directory.list();
              Scanner fileScanner, lineScanner;
              String currentLine, currentWord;
              try {
                   for (int i = 0; i < fileNameList.length; i++) {
                        File currFile = new File("C:\\MyDIrectory\\" + (String) fileNameList);
                        fileScanner = new Scanner(currFile);
                        while (fileScanner.hasNext()) {
                             System.out.println(currFile.getName());
                             currentLine = fileScanner.nextLine();
                             lineScanner = new Scanner(currentLine);
                             while (lineScanner.hasNext()) {
                                  currentWord = lineScanner.next();
                                  if (currentWord.indexOf("ban") != -1) {
                                       System.out.println("someone got banned " + currFile.getName());
                                  }
                             }
                        }
                   }
              } catch (Exception e) {
                   System.err.println(e.getMessage());
              }
              //System.out.println(fileNameList.length);
         }
    }

    new File("C:\\MyDIrectory\\"+(String)fileNameList);
    That should be giving you compile errors (you can't cast a String[] to a String). Why not just get an array of Files from your directory? A rough sketch of that approach follows below.
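    A minimal sketch, keeping the directory path and the "ban" keyword from the original post (everything else is illustrative):
    import java.io.File;
    import java.util.Scanner;

    public class LogChecker {
        public static void main(String[] args) throws Exception {
            // listFiles() hands back File objects directly, so no string concatenation or casts are needed
            File[] files = new File("C:\\MyDirectory\\").listFiles();
            if (files == null) return; // directory missing or unreadable
            for (File currFile : files) {
                try (Scanner fileScanner = new Scanner(currFile)) {
                    while (fileScanner.hasNextLine()) {
                        if (fileScanner.nextLine().contains("ban")) {
                            System.out.println("someone got banned " + currFile.getName());
                        }
                    }
                }
            }
        }
    }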

  • Need info related to HAL

    Hi all
    I need some small info.
    1) If we have a Planning application and we want to load metadata and data from a flat file into it, we can use HAL, right? How about the other scenario: if we want to load data and metadata from some other Essbase database into a Planning application, can we use the Essbase adapter to load them? Does that make sense, and does it work?
    thanks

    Hi,
    Would it not be easier to build a HAL routine to get the metadata from the same source the other Essbase cube builds its metadata from?
    You can use HAL to extract the data from the Essbase cube and then load it into your Planning Essbase cube; HAL basically creates a report script which extracts the data. Another way would be to use the DATAEXPORT command to export the data from one cube and load it into the other. If you are after optimisation, DATAEXPORT will probably be quicker than HAL and easier to set up; see the sketch after this post.
    Cheers
    John
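    For reference, a minimal DATAEXPORT calc script sketch (the FIX members and the file path are made up for illustration):
    SET DATAEXPORTOPTIONS
    {
    DataExportLevel "LEVEL0";
    };
    FIX ("Actual", "FY09")
    DATAEXPORT "File" "," "/tmp/actuals.txt";
    ENDFIX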

  • Need to shrink huge log file

    Hi,
    I have a database which is published using transactional replication. Replication was broken yesterday due to a restore. In order to try and fix this I issued the "EXEC sp_replrestart" command and left it running; unfortunately it has now filled up the disk the log sits on, creating a 250GB file.
    Getting this error:
    Msg 9002, Level 17, State 6, Procedure sp_replincrementlsn_internal, Line 1
    The transaction log for database 'RKHIS_Live' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
    I really need to free up space on this disk and shrink the log; however, I can't back up the database.
    I've not tried shrinking the files yet as I can't do a full backup.
    Any ideas?
    I don't care about replication at this point and will happily ditch it if it gets me out of this situation.
    Thanks

    I disabled replication at the subscriber and then the publisher, and disabled all the agent jobs.
    Then I shrank the database and files to 1GB. Phew, the database is functioning just fine.
    We need a solution to this, though. The problem is that the published database (managed by a 3rd party) is backed up, worked on, and then restored as part of their software upgrade procedure, so there is bound to be a discrepancy between the transactions in the log.
    Learned a valuable lesson regarding sp_replrestart today though :-(
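    For reference, a hedged sketch of the steps described above (the logical log file name is an assumption; list the real names with sys.database_files first):
    USE RKHIS_Live;
    SELECT name, type_desc FROM sys.database_files;  -- find the real logical log name
    EXEC sp_removedbreplication 'RKHIS_Live';        -- strip the replication metadata
    ALTER DATABASE RKHIS_Live SET RECOVERY SIMPLE;   -- only if point-in-time recovery is expendable
    DBCC SHRINKFILE (RKHIS_Live_log, 1024);          -- target size in MB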

  • Need info related to Infoset

    Hi all,
    I have created an InfoSet on 2 ODSes, and a query based on this InfoSet.
    I have a requirement where I want only data from these ODSes to be offered in the picklist of the selection screen.
    I have changed the property "Query Exec. FilterVal" of the InfoObjects used in the query to "Only values in InfoProvider" at the ODS level,
    but I am still getting values from master data as well.
    Is there any property of the InfoSet which has to be changed?
    Please help.
    Regards,
    Priyanka.

    Hi Priyanka,
    Please check SAP Note 984229.
    It might help you!
    -Pradnya

  • Need info related to creating indexes on ODS.

    Hi All,
    I have transported manually created secondary indexes on my ODS to the quality system.
    Now I have a requirement to optimise query performance.
    In quality, I activated the ODS. There was data in the ODS before the indexes were created.
    But my query is still taking too long to display records.
    My main doubt: since I have transported the manually created indexes,
    when do these indexes start working, at the time of data loading or after the activation of the ODS?
    Please advise on how I can optimise the query performance.

    Hi Priyanka,
    I think your indexes should be working immediately after the transport.
    You can always create indexes even when there is preexisting data. As soon as you save the indexes, they should be created on the database.
    You can check whether your indexes are being used by your query in RSRT: run the query in execute-and-debug mode and display the run schedule in the data manager (check the execution plan).
    You can check whether your indexes have been created on the ODS active table in SE11 (index maintenance) and also in DB02, I believe. A hedged sketch of checking the plan directly on the database follows after this post.
    To optimize your query performance, the filters in the queries should be on the primary or secondary indexes. The execution plan will show whether the indexes are being used and the cost saving from their usage. Try to create indexes only when absolutely essential because, as mentioned in the post below, they affect loading performance, since the indexes have to be maintained for the newly loaded data.
    If you are using an Oracle database, you can consider partitioning your InfoProvider.
    If you are using DB2, you can try multi-dimensional clustering.
    You can choose an appropriate read mode and cache mode for the query.
    You can archive historic data that is no longer reported on, to increase reporting speed.
    You can design the query correctly, with correct placement of filters.
    A point to note when creating secondary indexes: the order and the number of characteristics in the secondary index are also determining factors for whether your query uses the index.
    Hope this helps,
    Best regards,
    Sunmit.
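    For reference, a hedged sketch of checking the plan directly on an Oracle database (the ODS name ZSALES and the filter column are made up; ODS active tables follow the /BIC/A<ODS>00 naming pattern):
    EXPLAIN PLAN FOR
      SELECT * FROM "/BIC/AZSALES00" WHERE "/BIC/ZCUSTNO" = '1000';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);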

  • How to configure weblogic log file?

    Hi
    How do I configure the WebLogic server log file to log WebLogic-related information as well as my application's?
    I need to maintain one log file for both WebLogic and my application.
    Thanks,

    Then the GlassFish instance is either not configured yet or installed somewhere else. Try looking under your user's home directory, like C:\Documents and Settings\<username>\Application Data\glassfish\domains\domain1\logs

  • Storing of log file in A/P Server while running BDC session in SM35

    Hi All,
    I have an issue when running a BDC session in SM35.
    The actual issue is:
    I need to store the log file generated while running a BDC session in SM35 on an application/presentation server path.
    Whenever we run a single session, we need to store the log file for that session on the application/presentation server.
    Does anybody have a solution for this issue?
    Thanks in advance.
    Thanks & Regards,
    Rayeez.

    Hi
    See the standard report RSBDC_ANALYSE; from it you can learn how to find the batch input log.
    You can create a program like that to write the log to a file instead of displaying it; see the sketch after this post.
    Max
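    A hedged sketch of the file-writing part on the application server (the path and lt_log, an internal table holding the collected log lines, are assumptions; for the presentation server, GUI_DOWNLOAD can be used instead):
    DATA: lv_file TYPE string VALUE '/usr/sap/trans/bdc_session.log',
          lv_line TYPE string.
    OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    LOOP AT lt_log INTO lv_line.  " lt_log holds the collected session log lines
      TRANSFER lv_line TO lv_file.
    ENDLOOP.
    CLOSE DATASET lv_file.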

  • Need info on table: LATP_ENQ

    Hi Experts,
    I need info related to the table LATP_ENQ.
    We create sales orders in CRM; through B-Docs the orders are replicated to ECC.
    Entries get created in this table based on the availability of the materials.
    Through a Z transaction code, we delete entries in this table.
    Sometimes blank entries get created in this table; I would like to know in which scenario this table gets updated.
    Regards,
    Swaraj

    Hi Swaraj
    As it is a Z transaction code and you have deleted entries from the table, those blank entries will have to be dealt with again when you upgrade your ECC version.
    Regards
    Srinath

  • Hoping for a quick response: EXP and Archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search; I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup/restore.
    We have 10g R2 running a single instance on a single server. The application vendor has "embedded" Oracle with their application. The vendor's backup is a batch file using EXP, thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
    The database is running in autoarchive mode. The problem is, the archived redo files filled the drive they were being stored on, and it is the drive the database is on. I used OS commands to move 136G of archived redo logs onto other storage media to free the drive.
    My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in autoarchive mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP); is that true? We are OK with losing changes since our last EXP. I have read a lot of stuff about restoring consistent vs. inconsistent, and just need to know: if my disk fails and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the autoarchived redo log files back to July 2009 (136G of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    Amardeep Sidhu
    Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us and they said they tested the dmp file and it was OK.
    Thank you for taking the time to reply.
    Best Regards
    Bruce
    The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product ..... Discussions terminated quickly after he made that statement.
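    For reference, the vendor's command with a consistent snapshot added, as discussed above (CONSISTENT=Y exports everything as of a single SCN, and needs enough undo to cover the whole export window):
    exp system/xpwdxx@db full=y consistent=y direct=y compress=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt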

  • Log files

    Hi all,
    I seem to have a problem in that the inventory is always empty but the
    workstations are importing.
    What log files do I need to look at for errors?
    I've been searching the Novell website for answers and have tried their
    solutions without any success.
    I know I have missed something but I don't know what, so I need to find some
    log file errors to pinpoint the problem.
    Thanks in advance

    When a station is imported and then scanned for inventory, the data is
    exported to an .str file and placed in the scandir directory of the ZEN server.
    That data is then transferred to the dbdir subdirectory so it can update
    the database. For some reason, your .str files aren't being built in the
    scandir directory (go check). Some log files you may want to view are
    located in \\server\sys\ZENworks\Inv\server\WmInv\logs\zenworksinvservice
    John Ferrer
    Network Support
    Medical Faculty Associates

  • How to remove all log files at application end ?

    I need to remove all log files from the database directory.
    Just the data file must be in the database directory after the application ends.
    I've tried:
    1 - set_flags(DB_LOG_AUTOREMOVE, 1);
    2 - txn_checkpoint(0, 0, DB_FORCE);
    But always one log file remains.
    Does anybody know how to remove all log files at application end?
    I really need this. How can I do that in C++?
    Thanks,
    DelNeto

    Here's how I solved it:
    // At the end of the app:
    // Flush the tables.
    pdbParam->sync(0);
    pdbUser->sync(0);
    // Close the tables.
    pdbParam->close(0);
    pdbUser->close(0);
    // Delete the table objects.
    delete pdbParam;
    delete pdbUser;
    // Commit all changes to the database.
    penvDbEnv->txn_checkpoint(0, 0, DB_FORCE);
    penvDbEnv->close(0);
    delete penvDbEnv;
    // Removing all log files comes here: reopen the environment.
    penvDbEnv = new DbEnv(0);
    u_int32_t ui32EnvFlags = DB_CREATE |
        DB_PRIVATE |
        DB_INIT_LOCK |
        DB_INIT_LOG |
        DB_INIT_MPOOL |
        DB_THREAD |
        DB_INIT_TXN;
    // Open the environment with full transactional support.
    int iResult = penvDbEnv->open("..\\database", ui32EnvFlags, 0);
    // Get the list of log files.
    char **pLogFilLis;
    char **pLogFilLisBegin;
    iResult = penvDbEnv->log_archive(&pLogFilLis, DB_ARCH_ABS | DB_ARCH_LOG);
    // This call resets the log sequence numbers inside the database file,
    // so no log file is associated with the database any more.
    iResult = penvDbEnv->lsn_reset("..\\database\\DATABASE.db", 0);
    // Remove the log files.
    if (pLogFilLis != NULL) {
        for (pLogFilLisBegin = pLogFilLis; *pLogFilLis != NULL; ++pLogFilLis) {
            iResult = remove(*pLogFilLis);
        }
        free(pLogFilLisBegin);
    }
    // At this point no more log files exist in the database directory.
    penvDbEnv->close(0);
    delete penvDbEnv;
    // If the environment files also need removing, do this:
    penvDbEnv = new DbEnv(0);
    penvDbEnv->remove("..\\database", 0);
    delete penvDbEnv;
    Thanks to Bogdan Coman for showing me the way.
    DelNeto
