Question about my log entry showing a data volume is excluded.

I am a new user of Sophos for Mac. I have OS X 10.10.3 installed. My log says:
com.sophos.intercheck: Info: Exclusion: /Volumes/Data/ at 12:48 on 13 June 2015
com.sophos.intercheck:
com.sophos.intercheck: Info: Exclusion: /Volumes/Time Machine Backups at 12:48 on 13 June 2015
I understand the Time Machine exclusion, but not /Volumes/Data. I have excluded nothing in preferences; in "On-Access," the excluded items pane is blank. Is the above normal for a log where no exclusions have been made (although "check network volumes" has not been ticked)? Thanks.

Hi again bobalaska,
Is /Volumes/Data located on a network? That would be the obvious reason for that exclusion to exist.
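If you want to double-check, list the mounts and look at the filesystem type reported for /Volumes/Data; network mounts show types like smbfs, afpfs or nfs. A minimal sketch in Python, assuming the standard macOS mount command is on the PATH:
import subprocess

# Print the mount entry for /Volumes/Data; the type in parentheses tells
# you whether it is a network volume (e.g. smbfs, afpfs, nfs).
for entry in subprocess.run(["mount"], capture_output=True, text=True).stdout.splitlines():
    if "/Volumes/Data" in entry:
        print(entry)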

Similar Messages

  • Question about imap log format.

    I have a question about the imap log file.
    Here are two sample imap log lines:
    [26/Dec/2005:13:21:24 +0900] mail imapd[513]: Account Notice: close [203.231.11.113] ywmoon 2005/12/26 13:21:24 0:00:00 1013 2843 2
    [26/Dec/2005:13:21:32 +0900] mail imapd[513]: Account Notice: close [203.231.14.251] hkchoi 2005/12/26 13:20:37 0:00:55 446 1196 0
    In the first line:
    ywmoon : user ID
    2005/12/26 : date
    13:21:24 : login time (hour:minute:second)
    0:00:00 : ?
    1013 : bytes from client to server
    2843 : bytes from server to client
    2 : ?
    In the second line:
    13:20:37 : same as above
    0:00:55 : ?
    446 : same as above
    1196 : same as above
    0 : ?
    I don't know the fields marked "?".
    0:00:00 or 0:00:55 ==> time taken to log in? And is this "hour:minute:second" or "minute:second:millisecond"?
    2 or 0 ==> 1 means INBOX; what do 0 and 2 mean?
    Thank you in advance...

    The 0:00:00 / 0:00:55 field is how long the connection lasted.
    The trailing 2 / 0 field: number of messages? That's a guess; we don't actually document this stuff.
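    For anyone scripting against these logs, here is a minimal parsing sketch in Python; the field meanings follow the reply above, and the trailing count stays a guess since it is undocumented:
    import re

    line = ("[26/Dec/2005:13:21:24 +0900] mail imapd[513]: Account Notice: "
            "close [203.231.11.113] ywmoon 2005/12/26 13:21:24 0:00:00 1013 2843 2")

    # ip, user, date, login time, connection duration, bytes in, bytes out, count
    pattern = re.compile(
        r"close \[(?P<ip>[\d.]+)\] (?P<user>\S+) (?P<date>\S+) (?P<login>\S+) "
        r"(?P<duration>\d+:\d+:\d+) (?P<to_server>\d+) (?P<from_server>\d+) "
        r"(?P<count>\d+)$"
    )
    m = pattern.search(line)
    if m:
        print(m.groupdict())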

  • CBWFQ: Question about the output of "show policy-map interface" command

    Hi everyone,
    I have a question about the output of the "show policy-map interface" command.
    The following is the output of this command; near the bottom it shows
    (total queued/total drops/no-buffer drops) 0/342/0
    If packets were dropped because no buffer was available, "no-buffer drops" would be incremented. But "no-buffer drops" has not been incremented:
    it is 0 (zero), while "total drops" is 342.
    I guess there are factors other than "no-buffer drops" that contribute to "total drops",
    but I cannot find any information about these other factors.
    So I would like to know which other factors are added to "total drops".
    reserch-3725#sh policy-map interface fastethernet0/1
    FastEthernet0/1
    Service-policy output: shaping
    Class-map: kdpc (match-all)
    146956873 packets, 115209221595 bytes
    5 minute offered rate 156000 bps, drop rate 0 bps
    Match: access-group name YOKOHAMA_to_CHINO
    Traffic Shaping
    Target/Average Byte Sustain Excess Interval Increment
    Rate Limit bits/int bits/int (ms) (bytes)
    9360000/9360000 58500 234000 234000 25 29250
    Adapt Queue Packets Bytes Packets Bytes Shaping
    Active Depth Delayed Delayed Active
    - 0 146956724 3539850811 2960247 3851843541 no
    Class-map: class-default (match-any)
    552458414 packets, 249687580329 bytes
    5 minute offered rate 242000 bps, drop rate 0 bps
    Match: any
    Traffic Shaping
    Target/Average Byte Sustain Excess Interval Increment
    Rate Limit bits/int bits/int (ms) (bytes)
    3072000/3072000 19200 76800 76800 25 9600
    Adapt Queue Packets Bytes Packets Bytes Shaping
    Active Depth Delayed Delayed Active
    - 0 552453209 573909865 30358216 2926188156 no
    Service-policy : policy1
    Class-map: dlsw (match-all)
    979578 packets, 264843255 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: access-group name acl-dlsw
    Queueing
    Output Queue: Conversation 137
    Bandwidth 128 (kbps) Max Threshold 64 (packets)
    (pkts matched/bytes matched) 20922/17371500
    (depth/total drops/no-buffer drops) 0/0/0
    Class-map: telnet (match-all)
    29938 packets, 1806058 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: access-group name acl-telnet
    Queueing
    Output Queue: Conversation 138
    Bandwidth 64 (kbps) Max Threshold 64 (packets)
    (pkts matched/bytes matched) 639/38900
    (depth/total drops/no-buffer drops) 0/0/0
    Class-map: class-default (match-any)
    551448911 packets, 249420939729 bytes
    5 minute offered rate 242000 bps, drop rate 0 bps
    Match: any
    Queueing
    Flow Based Fair Queueing
    Maximum Number of Hashed Queues 128
    (total queued/total drops/no-buffer drops) 0/342/0
    Your information would be appreciated.

    Detailed information regarding show policy-map interface:
    http://www.cisco.com/en/US/tech/tk543/tk545/technologies_tech_note09186a008010dd6a.shtml
    http://www.cisco.com/en/US/tech/tk543/tk760/technologies_tech_note09186a0080108e2d.shtml
    http://www.cisco.com/univercd/cc/td/doc/product/software/ios123/123cgcr/qos_r/qos_s2g.htm#wp1146884
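    As a practical aside: "total drops" can only exceed "no-buffer drops" when packets are dropped for other reasons (for example, the per-queue limit being exceeded), so the unexplained share is simply the difference. A small Python sketch that pulls the counters out of the summary line quoted above:
    import re

    line = "(total queued/total drops/no-buffer drops) 0/342/0"
    queued, total, no_buffer = map(
        int, re.search(r"\)\s*(\d+)/(\d+)/(\d+)", line).groups())

    # Drops not explained by buffer exhaustion, e.g. queue-limit (tail) drops.
    print("drops from other causes:", total - no_buffer)   # 342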

Time Capsule by default backs up only data files and excludes program files like Photoshop, Lightroom, and MS Office, etc... right? Or do I have to exclude them manually?

    Time Capsule by default backs up only data files and excludes program files like Photoshop, Lightroom, MS Office, etc., right? Since, if it crashes, I can reinstall, and I don't want to take up disk space for applications that I have CDs to reinstall. Or do I have to manually exclude these?

    Time Machine will back up all files except things like caches, trash, and temp files. You can select which files you would like to exclude by going into the Time Machine preferences: http://support.apple.com/kb/HT1427

A question about restoring from a cold backup (control file backup not clear)

    Hi,
    I had another question about restoring from a cold backup. My database is in noarchivelog mode, and after taking a consistent cold backup, all I need to do is restore the backup, right? The reason I ask is that when I back up my control file to trace, I see statements like this:
    -- Commands to re-create incarnation table
    -- Below log names MUST be changed to existing filenames on
    -- disk. Any one log file from each branch can be used to
    -- re-create incarnation records.
    -- ALTER DATABASE REGISTER LOGFILE '/uo1/app1/arch1_1_647102958.dbf';
    -- Recovery is required if any of the datafiles are restored backups,
    -- or if the last shutdown was not normal or immediate.
    RECOVER DATABASE
    -- Database can now be opened normally.
    ALTER DATABASE OPEN;
    My database is in noarchivelog mode, so I don't know why these statements (registering the logfile) are there in the backup of the control file. When I restore the cold backup of this database, will it still work correctly? (There is no logfile; I have only the CRD files in the cold backup, no archive log files.)
    thanks
    Nirav

    Thanks for your inputs! They are most useful to me.
    Regards
    Nirav

  • Three questions about removing a numbered Mail message file

    ClamXav, the anti-virus program, has found several old messages filed by Mail.app v4.4 that are infected with various exploits. Finder shows that the filenames for these messages are numbered, followed by the email file extension, like this: 359959.emlx
    Question 1: Can I simply delete these files individually via the Finder and do nothing else, calmly confident in the knowledge that Mail.app, robust as Apple engineers can make it, doesn't care a whit whether an individual mail file is there or not?
    Or
    Question 2: Do I have to locate them via Mail's interface and delete them from there because otherwise Mail.app, so fragile, will have a fit, die, explode or wither to a pathetic whimper of its former self because I had the temerity to delete a file without genuflecting and consulting it?
    Or
    Question 3: If I have to locate the files and delete them via Mail, how do I use the file name to point to a specific message in Mail's inscrutable interface?
    Thanks.

    I'd just delete them. However, if you're worried about it, just leave them alone. They can't hurt anything.

  • Question about Payment term and due date

    Hi experts,
         I have encountered a question about payment terms:
    I have the payment term, the baseline date, the document date and the posting date. How can I get the due date? Is there any SAP function module that can calculate the due date?
    Thanks a lot in advance.
    Villy.Lv.

    Hi guys,
       we can use the FM FI_TERMS_OF_PAYMENT_PROPOSE to get the days to the net due date and add them to the baseline date; then we get the due date.
    BR and thanks a lot.
    Villy.Lv.

A question about piecewise insert (OCI): only data in the first piece...

    When I do a piecewise insert operation, only the data in the first piece is inserted into the column. No error occurred; OCI_SUCCESS was returned when the last piece operation completed.
    I am really puzzled now:(.
    Who can get me out of this?
    The data to be inserted are stored in several structs:
    typedef struct test_st{
         char * buffer;
         struct test_st * next;
    } TEST_ST;
    I use malloc(size) to allocate the buffer of each struct, and I use strcpy() to copy some strings to these buffers.
    table mc_test is like this:
    id number;
    message varchar(64);
    The full source code follows:
    #include <stdio.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <oci.h>
    static OCIEnv *p_env;
    static OCIError *p_err;
    static OCISvcCtx *p_svc;
    static OCIStmt *p_sql;
    static OCIDefine *p_dfn    = (OCIDefine *) 0;
    static OCIBind *p_bnd    = (OCIBind *) 0;
    const char * orausername="out_user";
    const char * orapassword="user_out";
    const char * oraserver="bigfish";
    int oraOK=0;
    int rc;
    char errbuf[100];
    int errcode;
    int checkerr(OCIError *errhp, sword status);
    int db_init(void);
    int db_open(void);
    int db_close(void);
    typedef struct test_st{
         char * buffer;
         struct test_st * next;
    } TEST_ST;
    int db_save_to_test(){
         char               sql_str[512];
         ub4                    typep;
         ub4                    piec_status;
         ub1                    in_outp;
         ub4                    rownum;
         ub4                    arr;
         sb2                    indp;
         ub2                    r_code;
     ub4                    t_buff_len;   /* ub4 so &t_buff_len matches OCIStmtSetPieceInfo's alenp */
         int                    total_len=15;
         int                    buffer_pos=0;
         TEST_ST * content, * t;
         content=(TEST_ST *) malloc(sizeof(TEST_ST));
         content->buffer= (char *) malloc(5);
         strcpy(content->buffer,"1234");
         content->next=(TEST_ST *) malloc(sizeof(TEST_ST));
         content->next->buffer= (char *) malloc(5);
         strcpy(content->next->buffer,"5678");
         content->next->next=(TEST_ST *) malloc(sizeof(TEST_ST));
         content->next->next->buffer= (char *) malloc(5);
         strcpy(content->next->next->buffer,"9012");
         content->next->next->next=NULL;
     if(!oraOK){
          return 0;
     }
     printf("-------------------------\n");
         printf("[db]save to mc_test..\n");
         printf("total: %d bytes\n",total_len);
         /* create sql */
         sprintf(sql_str,"insert into mc_test(id,message)values(1,:x)");
         //printf("%s\n",sql_str);
     rc = OCIStmtPrepare(p_sql, p_err, (text *) sql_str,
              (ub4) strlen(sql_str), (ub4) OCI_NTV_SYNTAX, (ub4) OCI_DEFAULT);
         checkerr(p_err,rc);
         rc = OCIBindByPos(p_sql, &p_bnd, p_err, (ub4) 1,
                   (dvoid *) content->buffer, total_len, SQLT_CHR, (dvoid *) 0,
                   (ub2 *) 0, (ub2 *) 0, (ub4) 0, (ub4 *) 0, OCI_DATA_AT_EXEC);
         checkerr(p_err,rc);
         rc = OCIStmtExecute(p_svc, p_sql, p_err, (ub4) 1, (ub4) 0,
              (CONST OCISnapshot *) NULL, (OCISnapshot *) NULL, OCI_DEFAULT);
         checkerr(p_err,rc);
         if(rc == OCI_NEED_DATA){
              printf("[pw] start........\n");
              // insert next pieces
              t=content;
              printf("%d bytes total.\n",total_len);
              while(t!=NULL){
                   if(t==content){
                        piec_status=OCI_FIRST_PIECE;
                        t_buff_len=strlen(t->buffer);
                        buffer_pos=t_buff_len+1;
                        printf("ready for first piece: %d bytes\n",t_buff_len+1);
                        printf("__________________\n%s\n__________________\n",t->buffer);
                   }else if(t->next==NULL){
                        piec_status=OCI_LAST_PIECE;
                        t_buff_len=strlen(t->buffer);
                        buffer_pos+=t_buff_len+1;
                        printf("ready for last piece: %d bytes\n",t_buff_len+1);
                        printf("__________________\n%s\n__________________\n",t->buffer);
                   }else{
                        piec_status=OCI_NEXT_PIECE;
                        t_buff_len=strlen(t->buffer);
                        buffer_pos+=t_buff_len+1;
                        printf("ready for next piece: %d bytes\n",t_buff_len+1);
                        printf("__________________\n%s\n__________________\n",t->buffer);
                   t_buff_len++;
               rc = OCIStmtSetPieceInfo((dvoid *)p_bnd, (ub4)OCI_HTYPE_BIND,
                    p_err, (dvoid *)t->buffer, &t_buff_len, piec_status,
                    (dvoid *) 0, &r_code);
                   checkerr(p_err,rc);
                   rc = OCIStmtExecute(p_svc, p_sql, p_err, (ub4) 1, (ub4) 0,
                        (CONST OCISnapshot *) NULL, (OCISnapshot *) NULL, OCI_DEFAULT);
                   checkerr(p_err,rc);
                   t=t->next;
              if(rc==OCI_SUCCESS){
                   printf("All insert OK\n");
              printf("-------------------------\n");
              return 0;
         }else if(rc==OCI_SUCCESS){
              printf("Simple inserted.\n");
              printf("-------------------------\n");
              return 1;
         }else{
              checkerr(p_err,rc);
              printf("-------------------------\n");
          return 0;
     }
}
int main(){
         db_init();
         db_open();
         db_save_to_test();
     db_close();
     return 0;
}
int db_close(){
         rc = OCILogoff(p_svc, p_err); /* Disconnect */
         rc = OCIHandleFree((dvoid *) p_sql, OCI_HTYPE_STMT); /* Free handles */
         rc = OCIHandleFree((dvoid *) p_svc, OCI_HTYPE_SVCCTX);
         rc = OCIHandleFree((dvoid *) p_err, OCI_HTYPE_ERROR);
         oraOK=0;
     return rc;
}
int db_open(){
         /* Connect to database server */
     rc = OCILogon(p_env, p_err, &p_svc, (text *) orausername, strlen(orausername), (text *) orapassword, strlen(orapassword), (text *) oraserver, strlen(oraserver));
         if (rc != 0) {
         OCIErrorGet((dvoid *)p_err, (ub4) 1, (text *) NULL, &errcode, errbuf, (ub4) sizeof(errbuf), OCI_HTYPE_ERROR);
         printf("Error - %.*s\n", 512, errbuf);
          return(8);
     }
     /* Allocate SQL */
         rc = OCIHandleAlloc( (dvoid *) p_env, (dvoid **) &p_sql,
              OCI_HTYPE_STMT, (size_t) 0, (dvoid **) 0);
         checkerr(p_err,rc);
         oraOK=1;
     return rc;
}
int db_init(){
         rc = OCIInitialize((ub4) OCI_DEFAULT, (dvoid *)0, /* Initialize OCI */
              (dvoid * (*)(dvoid *, size_t)) 0,
              (dvoid * (*)(dvoid *, dvoid *, size_t))0,
              (void (*)(dvoid *, dvoid *)) 0 );
     /* Initialize environment */
         rc = OCIEnvInit( (OCIEnv **) &p_env, OCI_DEFAULT, (size_t) 0, (dvoid **) 0 );
         /* Initialize handles */
         rc = OCIHandleAlloc( (dvoid *) p_env, (dvoid **) &p_err, OCI_HTYPE_ERROR,
              (size_t) 0, (dvoid **) 0);
         rc = OCIHandleAlloc( (dvoid *) p_env, (dvoid **) &p_svc, OCI_HTYPE_SVCCTX,
              (size_t) 0, (dvoid **) 0);
         checkerr(p_err,rc);
     return rc;
}
int checkerr(OCIError *errhp, sword status){
         text errbuf[512];
         sb4 errcode = 0;
         switch(status){
              case     OCI_SUCCESS:
                        return 0; break;
              case     OCI_SUCCESS_WITH_INFO:
                        (void) printf("Error - OCI_SUCCESS_WITH_INFO\n");
                        break;
              case     OCI_NEED_DATA:
                        (void) printf("Error - OCI_NEED_DATA\n");
                        break;
              case     OCI_NO_DATA:
                        (void) printf("Error - OCI_NODATA\n");
                        break;
              case     OCI_ERROR:
                        (void) OCIErrorGet((dvoid *)errhp, (ub4) 1, (text *) NULL, &errcode,
                                       errbuf, (ub4) sizeof(errbuf), OCI_HTYPE_ERROR);
                        (void) printf("Error - %.*s\n", 512, errbuf);
                        break;
              case     OCI_INVALID_HANDLE:
                        (void) printf("Error - OCI_INVALID_HANDLE\n");
                        break;
              case     OCI_STILL_EXECUTING:
                        (void) printf("Error - OCI_STILL_EXECUTE\n");
                        break;
              case     OCI_CONTINUE:
                        (void) printf("Error - OCI_CONTINUE\n");
                        break;
              default:
                        break;
         return 1;
    ref: http://www.oracle.com.cn/onlinedoc/appdev.920/a96584/oci05bnd.htm#427755

    On Windows, the Flash player plugin DLL is under C:\Windows. When everything is working correctly, Firefox finds the Flash player by checking entries under a registry key. I don't know whether this check takes place every time Firefox restarts, or at other intervals.
    Other plugins may install differently, e.g., copying a DLL into a folder under c:\Program Files (x86). It's rare for a plugin to be profile-specific.
    If your plugin list is not updating, the pluginreg.dat file that stores plugin information might be corrupted. This article has a section on how to delete that file so Firefox will regenerate it: [https://support.mozilla.org/en-US/kb/troubleshoot-issues-with-plugins-fix-problems#w_re-initializing-the-plugins-database]. Does that help?

  • Question about httpd logs

    Hi,
    I'm running Message Server 5.2 on Solaris 8. I poked through the documents and I couldn't find much on the httpd log.
    In my httpd logs I have quite a number of entries like:
    [28/Mar/2005:12:03:43 -0800] pobox httpd[12762]: Account Notice: close [10.1.75.247] [unauthenticated] 2005/3/28 12:02:43 0:01:00 593 1459 0
    What I want to know is what the "close [10.1.75.247] [unauthenticated]" part means.
    Thanks

    At the default "loglevel" of "notice", all Messaging processes log the logout function, but not the login function.
    Every few minutes, the server checks itself to see if the process is running, and you will see these log entries. Normal . . .
    These are NOT error messages.
    jay

  • Question about cyrus-sasl2: sasldblistusers2 shows check_db unsuccessful

    I run "port install" to build cyrus-sasl2 without problems.
    saslpasswd2 creates the sasldb2 file, but shows an error message in the auth.log.
    error deleting entry from sasldb: DB_NOTFOUND: No matching key/data pair found
    Running sasldblistusers2 to list the user created with saslpasswd2 still does not work and shows "check_db unsuccessful".
    What causes the error?
    root# port install cyrus-sasl2
    ---> Fetching cyrus-sasl2
    ---> Verifying checksum(s) for cyrus-sasl2
    ---> Extracting cyrus-sasl2
    ---> Configuring cyrus-sasl2
    ---> Building cyrus-sasl2
    ---> Staging cyrus-sasl2 into destroot
    ---> Installing cyrus-sasl2 @2.1.22_0
    ---> Activating cyrus-sasl2 @2.1.22_0
    ---> Cleaning cyrus-sasl2
    root# port installed | grep cyrus-sasl2
    cyrus-sasl2 @2.1.22_0 (active)
    root# saslpasswd2 -c -u localhost _cyrus
    Password:
    Again (for verification):
    root# ls -ltr /opt/local/etc/sasldb2
    -rw-r----- 1 root admin 12288 Mar 24 15:00 /opt/local/etc/sasldb2
    root# file /opt/local/etc/sasldb2
    /opt/local/etc/sasldb2: Berkeley DB (Hash, version 8, little-endian)
    root# sasldblistusers2
    check_db unsuccessful
    saslpasswd2 logs the following. What does "error deleting entry from sasldb: DB_NOTFOUND: No matching key/data pair found" mean?
    Mar 24 15:00:19 mac02 saslpasswd2[42052]: Setpass for SRP successful
    Mar 24 15:00:19: --- last message repeated 2 times ---
    Mar 24 15:00:19 mac02 saslpasswd2[42052]: Setpass for OTP successful
    Mar 24 15:00:19: --- last message repeated 2 times ---
    Mar 24 15:00:19 mac02 saslpasswd2[42052]: error deleting entry from sasldb: DB_NOTFOUND: No matching key/data pair found
    sasldblistusers2 gives:
    Mar 24 15:01:42 mac02 sasldblistusers2[42062]: auxpropfunc error invalid parameter supplied
    Mar 24 15:01:42 mac02 sasldblistusers2[42062]: sasl_pluginload failed on saslauxprop_pluginit for plugin: ldapdb
    Mar 24 15:01:42 mac02 sasldblistusers2[42062]: auxpropfunc error invalid parameter supplied
    Mar 24 15:01:42 mac02 sasldblistusers2[42062]: sasl_pluginload failed on saslauxprop_pluginit for plugin: ldapdb
    Mar 24 15:01:42 mac02 sasldblistusers2[42062]: auxpropfunc error invalid parameter supplied
    Mar 24 15:01:42 mac02 sasldblistusers2[42062]: sasl_pluginload failed on saslauxprop_pluginit for plugin: ldapdb
    The shared libraries that the object uses:
    root# otool -L /opt/local/bin/saslpasswd2
    root# otool -L /opt/local/sbin/saslpasswd2
    /opt/local/sbin/saslpasswd2:
    /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 25.0.2)
    /usr/lib/libpam.1.dylib (compatibility version 1.0.0, current version 1.0.0)
    /opt/local/lib/libsasl2.2.dylib (compatibility version 3.0.0, current version 3.22.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.3)
    /opt/local/lib/db44/libdb-4.4.dylib (compatibility version 0.0.0, current version 0.0.0)
    /usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
    root# otool -L /opt/local/sbin/sasldblistusers2
    /opt/local/sbin/sasldblistusers2:
    /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 25.0.2)
    /usr/lib/libpam.1.dylib (compatibility version 1.0.0, current version 1.0.0)
    /opt/local/lib/libsasl2.2.dylib (compatibility version 3.0.0, current version 3.22.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.3)
    /opt/local/lib/db44/libdb-4.4.dylib (compatibility version 0.0.0, current version 0.0.0)
    /usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)

    Hi Xun,
    As a workaround, you can try the following:
    On the Target system, perform the following steps:
    1. Edit $ORACLE_HOME/appsutil/clone/context/db/CTXORIG.xml on db & $COMMON_TOP/clone/context/apps/CTXORIG.xml for apps
    2. Change the value for s_dbSid,s_contextname,s_dbhost to the correct target system value
    3. Rerun "perl adcfgclone.pl dbTier" & "perl adcfgclone.pl appsTier"

  • Question about frequent log switches

    I support an Oracle 10g database (10.2.0.4), and database activity has increased to the point that, during the heaviest parts of the day, log switches are occurring too frequently (15 - 20 times per hour!). We also utilize Data Guard to replicate the database to our DR site. We currently have 2 log groups with 2 members in each group.
    I understand that I can tackle this issue in two ways: either increase the size of my redo log files (they are currently at 50 MB each), or add additional redo log groups to the database.
    I would like to get some opinions on whether one solution is better than the other, or on the pros and cons of each course of action.
    Thank you in advance for your advice with my question!

    CowTown_dba wrote:
    "Thanks for helping me to understand my true problem. The issue is that because of the frequent log switches, database performance is degrading."
    Maybe that's the cause, maybe it isn't.
    "Users are complaining about slow response. So if I add more groups it will buy the archiver extra time but it will not help with the slow response issue."
    Depends on the root cause of the slow response issue. That has yet to be determined. While it may be true that your car has a short in the electrical system and your car doesn't start, it doesn't necessarily follow that the car doesn't start because of the short in the electrical system.
    I really appreciate everyone contributing to my understanding of the issue, and helping clarify the root problem so that I can fix it the first time around.
    You guys rock!

  • Question about Archive Log Deletion policy

    I have a problem understanding the archive log deletion policy, and I'd like to explain it with the following example.
    SQL> startup
    ORACLE-Instance hochgefahren.
    Total System Global Area 5344731136 bytes
    Fixed Size                  2129240 bytes
    Variable Size            2684355240 bytes
    Database Buffers         2617245696 bytes
    Redo Buffers               41000960 bytes
    Datenbank mounted.
    Datenbank geöffnet.
    SQL> archive log list
    Datenbank-Log-Modus              Archive-Modus
    Automatische Archivierung             Aktiviert
    Archivierungsziel            E:\oracle\thetis_iv\arch
    Älteste Online-Log-Sequenz     17917
    Nächste zu archivierende Log-Sequenz   17919
    Aktuelle Log-Sequenz           17919
    SQL> alter system switch logfile;
    System wurde geändert.I created a brand new archive log.
    SQL> exit
    Verbindung zu Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options beendet
    D:\OracleDB\product\11.1.0\db_1\BIN>dir E:\oracle\thetis_iv\arch
    Datenträger in Laufwerk E: ist Volume
    Volumeseriennummer: 3EBD-77E5
    Verzeichnis von E:\oracle\thetis_iv\arch
    06.04.2011  15:04    <DIR>          .
    06.04.2011  15:04    <DIR>          ..
    06.04.2011  15:04        17.137.152 ARC17919_0721667907.001
                   1 Datei(en),     17.137.152 Bytes
                   2 Verzeichnis(se), 41.073.258.496 Bytes freiand this is the only archive log in the directory. Now I start rman:
    D:\OracleDB\product\11.1.0\db_1\BIN>rman target / catalog rmanrepo@rmanrepo
    Recovery Manager: Release 11.1.0.7.0 - Production on Wed Apr 6 15:05:35 2011
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    connected to target database: ENTWIV (DBID=21045568)
    recovery catalog database password:
    connected to recovery catalog database
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name ENTWIV are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
    CONFIGURE CONTROLFILE AUTOBACKUP OFF;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'E:\oracle\thetis_iv\backup\CF_%F_ENTWIV.ORA';
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
    CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'ENV=(TPDO_OPTFILE=D:\Services\Tivoli\TSM\AgentOBA64\tpdo.opt)';
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BZIP2'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO 'SBT_TAPE';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'D:\ORACLEDB\PRODUCT\11.1.0\DB_1\DATABASE\SNCFENTWIV.ORA'; # default
    The archive log deletion policy says the logfiles have to be backed up two times before they get deleted.
    Now I back up all archive logs that haven't been backed up at least two times.
    RMAN> run { backup archivelog all not backed up 2 times
    2>       format '%d_AR_%Y%M%D_%s_%t'
    3>       tag 'ARCHIVE LOGS'
    4>       DELETE ALL INPUT;
    5>     }
    Starting backup at 06.04.2011 15:08:01
    current log archived
    allocated channel: ORA_SBT_TAPE_1
    channel ORA_SBT_TAPE_1: SID=253 device type=SBT_TAPE
    channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.1.0
    channel ORA_SBT_TAPE_1: starting archived log backup set
    channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=17919 RECID=147 STAMP=747759899
    input archived log thread=1 sequence=17920 RECID=148 STAMP=747760081
    channel ORA_SBT_TAPE_1: starting piece 1 at 06.04.2011 15:08:02
    channel ORA_SBT_TAPE_1: finished piece 1 at 06.04.2011 15:08:09
    piece handle=ENTWIV_AR_20110406_23_747760082 tag=ARCHIVE LOGS comment=API Version 2.0,MMS Version 5.5.1.0
    channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:08
    channel ORA_SBT_TAPE_1: deleting archived log(s)
    archived log file name=E:\ORACLE\THETIS_IV\ARCH\ARC17919_0721667907.001 RECID=147 STAMP=747759899
    archived log file name=E:\ORACLE\THETIS_IV\ARCH\ARC17920_0721667907.001 RECID=148 STAMP=747760081
    Finished backup at 06.04.2011 15:08:10
    RMAN> exit
    Recovery Manager complete.
    D:\OracleDB\product\11.1.0\db_1\BIN> dir E:\oracle\thetis_iv\arch
    Volume in drive E is Volume
    Volume Serial Number is 3EBD-77E5
    Directory of E:\oracle\thetis_iv\arch
    06.04.2011  15:08    <DIR>          .
    06.04.2011  15:08    <DIR>          ..
                   0 File(s)              0 bytes
                   2 Dir(s)  41.090.396.160 bytes free
    rman deleted all archive logs, even though they are on tape only once so far.
    That's not what I expected. Where is my mistake?

    Hi,
    I did new tests; it's very strange.
    The BACKUP ARCHIVELOG command is not obeying the archivelog deletion policy.
    You can open an SR on MOS (to check for bugs).
    I reproduced the same test and the result was the same; it seems that this is a bug.
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;
    RMAN> backup archivelog all not backed up 2 times delete all input;
    Starting backup at 06-APR-11
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=15 RECID=16 STAMP=747753711
    input archived log thread=2 sequence=20 RECID=17 STAMP=747753714
    input archived log thread=1 sequence=16 RECID=19 STAMP=747753729
    input archived log thread=2 sequence=21 RECID=18 STAMP=747753729
    channel ORA_DISK_1: starting piece 1 at 06-APR-11
    channel ORA_DISK_1: finished piece 1 at 06-APR-11
    piece handle=+DATA/orcl/backupset/2011_04_06/annnf0_tag20110406t132210_0.304.747753731 tag=TAG20110406T132210 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=+DATA/orcl/archivelog/2011_04_06/thread_1_seq_15.293.747753711 RECID=16 STAMP=747753711
    archived log file name=+DATA/orcl/archivelog/2011_04_06/thread_2_seq_20.295.747753715 RECID=17 STAMP=747753714
    archived log file name=+DATA/orcl/archivelog/2011_04_06/thread_1_seq_16.294.747753729 RECID=19 STAMP=747753729
    archived log file name=+DATA/orcl/archivelog/2011_04_06/thread_2_seq_21.298.747753729 RECID=18 STAMP=747753729
    Finished backup at 06-APR-11
    RMAN> list archivelog all;
    specification does not match any archived log in the repository
    Oracle Docs say:
    The BACKUP ARCHIVELOG ... DELETE INPUT command deletes archived log files after they are backed up.
    This command eliminates the separate step of manually deleting archived redo logs.
    With DELETE INPUT, RMAN deletes only the specific copy of the archived log chosen for the backup set.
    With DELETE ALL INPUT, RMAN deletes each backed-up archived redo log file from all log archiving destinations.
    As explained in "Configuring an Archived Redo Log Deletion Policy",
    the BACKUP ... DELETE INPUT and DELETE ARCHIVELOG commands obey the archived redo log deletion policy
    for logs in all archiving locations. For example, if you specify that logs should only be deleted when backed
    up at least twice to tape, then BACKUP ... DELETE honors this policy.
    http://download.oracle.com/docs/cd/E11882_01/backup.112/e10642/rcmbckba.htm#BRADV89524
    But in our case it does not honor this policy.
    This should only happen with the FORCE option, but that is not our case.
    Oracle Docs:
    If FORCE is not specified on the deletion commands,
    then these deletion commands obey the archived log deletion policy.
    then the deletion commands ignore the archived log deletion policy.
    http://download.oracle.com/docs/cd/E11882_01/backup.112/e10643/rcmsynta010.htm#RCMRF113
    Alternatively you can do the following:
    Set the commands separately.
    Check this:
    RMAN>  run {
    2> backup archivelog all not backed up 2 times ;
    3> delete archivelog all backed up 2 times to disk;
    4> }
    Starting backup at 06-APR-11
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=2 sequence=22 RECID=21 STAMP=747755128
    input archived log thread=1 sequence=17 RECID=20 STAMP=747755127
    channel ORA_DISK_1: starting piece 1 at 06-APR-11
    channel ORA_DISK_1: finished piece 1 at 06-APR-11
    piece handle=+DATA/orcl/backupset/2011_04_06/annnf0_tag20110406t134528_0.295.747755129 tag=TAG20110406T134528 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 06-APR-11
    released channel: ORA_DISK_1
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=78 instance=orcl1 device type=DISK
    RMAN-08138: WARNING: archived log not deleted - must create more backups
    archived log file name=+DATA/orcl/archivelog/2011_04_06/thread_1_seq_17.298.747755127 thread=1 sequence=17
    RMAN-08138: WARNING: archived log not deleted - must create more backups
    archived log file name=+DATA/orcl/archivelog/2011_04_06/thread_2_seq_22.294.747755129 thread=2 sequence=22
    RMAN>Edited by: Levi Pereira on Apr 6, 2011 1:35 PM

  • [solved]how to extract recent log entries from a file (based on time)?

    I have a daily log file with hundreds of thousands of entries in the following format. 
    field1,field2,field3,field4,field5,field6,field7,field8,field9,20110516192001.100
    field1,field2,field3,field4,field5,field6,field7,field8,field9,20110516192002.200
    field1,field2,field3,field4,field5,field6,field7,field8,field9,20110516192003.300
    field1,field2,field3,field4,field5,field6,field7,field8,field9,20110516192004.400
    field1,field2,field3,field4,field5,field6,field7,field8,field9,20110516192005.500
    It's always in the same format and the 10th field is always the timestamp (YYYYMMDDHHMMSS.MS)
    Since the file rotates daily, the 10th field will always be 20110516xxxxxx.xxx for today and will be 20110517xxxxxx.xxx tomorrow
    What I want to do is only look at entries that have been written in the last 30 minutes.
    At a high level, here's my plan
    1) Get the date/time from 30 minutes ago... write it to a variable
    2) Iterate through the file line by line comparing the 10th field to the variable, if it's larger write the line to a tmp file
    3) Use tmp file for my analysis
    This seems incredibly inefficient to me... What would be a more graceful way to do it? I have the regular Solaris tools at my disposal (plus Python).
    Thanks
    Last edited by oliver (2011-05-17 12:41:43)

    The algorithm you describe really is a viable approach. Since this is a log file, each line should have a time stamp later than all lines that precede it in the file. A more efficient algorithm could do a binary search through the file for the time stamp you are interested in. This would be easy enough to do in C or Python, but your algorithm could be fast enough. If this is the case, you could try the following quick & dirty bash script.
    #!/bin/bash
    # Convert a YYYYMMDDHHMMSS integer into seconds since the epoch.
    seconds() {
        secs=$(($1 % 100))
        mins=$(($1 / 100 % 100))
        hrs=$(($1 / 10000 % 100))
        days=$(($1 / 1000000 % 100))
        month=$(($1 / 100000000 % 100))
        year=$(($1 / 10000000000))
        LC_TIME=C date +%s -d "$(printf "%d-%02d-%02d %02d:%02d:%02d" $year $month $days $hrs $mins $secs)"
    }
    found=0
    now=$(date +%s)
    while read line
    do
        if [ "$found" -eq "0" ]
        then
            ts=${line##*,}          # 10th (last) field: YYYYMMDDHHMMSS.MS
            ts=$(seconds ${ts%.*})  # strip milliseconds, convert to epoch seconds
            diff=$(( ($now - $ts)/60 ))
            [[ $diff -lt "30" ]] && found=1
        fi
        [[ $found -ne 0 ]] && echo "$line"
    done < $1
    It will write (to stdout) all lines following the first line that has been time stamped within the last 30 minutes (ignoring milliseconds). You could redirect the output of this script to a file of your choice for analysis as follows:
    $ ./script logfile > tmp
    Last edited by rockin turtle (2011-05-17 06:58:41)
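    Since you mentioned having Python available, here is an equivalent sketch of the same filter (a minimal version, assuming the layout shown above: comma-separated lines with the 10th and last field being YYYYMMDDHHMMSS.MS):
    #!/usr/bin/env python
    import sys
    from datetime import datetime, timedelta

    cutoff = datetime.now() - timedelta(minutes=30)

    with open(sys.argv[1]) as f:
        for line in f:
            stamp = line.rsplit(",", 1)[1].strip()    # 10th (last) field
            ts = datetime.strptime(stamp.split(".")[0], "%Y%m%d%H%M%S")
            if ts >= cutoff:                          # written in the last 30 minutes
                sys.stdout.write(line)
    Run it the same way: ./recent.py logfile > tmp (the script name is hypothetical).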

  • Question about Creating DataSources for FlatFile Data Acquisition

    Hello eveyone,
    I am trying to create flatfile DataSources. I've been checking out the existing DS in my dev system, and there are some structures here that have data loaded into them, all via their own flatfile DS. I've been looking at the DS and am trying to figure out the logic by which they are defined. I saw that most of the fields are defined as CHAR even if they pertain to fields like quantity, amount and so on... They are also in the internal format. Yet they have been successfully loaded...
    My question is: what are the rules of thumb in defining the DS? How do I go about mapping the fields to objects in the system? Can anyone please give me the step-by-step scenario for how to do this?
    Many thanks,
    Philips

    Hi Philip,
    Please try the following steps; they may be helpful.
    1. Whenever you want to create a flat file structure, keep in mind what fields of data are available in your file. Second, you need to know the InfoObjects of the InfoProvider into which you are uploading.
    2. Simply create your own DS with PC File; alongside it, open up your InfoProvider and keep checking against your flat file for the sequence of the data fields and what they correspond to in the cube/ODS.
    3. Following your flat file sequence, keep identifying the related InfoObjects of your provider, copy them individually, and paste them into your InfoSource one by one in the sequence of your flat file.
    4. No need to think about what type of data it is and how it is stored.
    5. Simply copy all your corresponding InfoObjects (whatever their type), even including some of the reference InfoObjects.
    6. Finally, save and activate your DS creation.
    I hope this helps you create your own flat file DS.
    Need further info? Revert...
    Cheers,
    Thanks=Points.

  • Question about how Mac OS X stores file information for iMovie and iPhoto

    I'm relatively new to Macs (am using a 3yr old iMac and a new MBA running 10.6.4 and iLife 11) and I have this general question.
    I'm using the iMac for a lot of movie projects and photos. I've very quickly filled up my 500GB hard drive and I don't understand why, other than this:
    I import a movie using iMovie and it's stored in an Events folder. I then make a project and it's stored in a Projects folder. I then share it to iTunes and it's stored in the iTunes Music/Movies folder. And I also see the movie in an iMovie Shared folder. In other words, it appears that a 2GB movie (as an example) becomes about 10GB because it's saved in multiple places. Must I keep all of them?!
    With iPhoto, I've imported about 6GB of photos from my Windows PC. No problem. I copied across all the directories and then imported them into iPhoto. I then thought I could delete all the directories, as the iPhoto folder became 24GB in size! Not sure how, but it seems that as well as keeping my original 6GB of photo directories it has also put them into the iPhoto directory and somehow made it a big folder! If I delete the original folders (since they're now in the iPhoto app folder), then I can't actually browse the JPG files, can I, except from within iPhoto?
    I've got about 80GB of music, about 6GB of photos and about 80GB of movies. Somehow it's pretty much filled up the 500GB hard drive (with very few other programs or files beyond the normal iMac stuff it comes with) because of the way the Mac (iTunes, iMovie and iPhoto) seems to have the same thing copied everywhere.
    All input gratefully received, and apologies if I'm being thick.
    Richard.

    Richard
    I'll answer the iPhoto part for you.
    If you use iPhoto in its default setting then the images are copied into the Library on import. Yes you can now delete the directories you have copied over.
    then I can't actually browse the JPGs files can I except from within iPhoto only?
    Yes and no...
    The point of iPhoto is that it's your Photo Manager. It's the "go-to" app when you want to do anything with your Photos. - View them, edit them, email them, print them, use them in other apps, upload them whatever - all of these things are done from iPhoto.
    There are two advantages to working this way.
    1. You get to work with your Photos and don't have to think about files. This means the organisation possibilities are much greater than just files in folders. You can use Keywording, Smart Searching, Albums, Events and a plethora of other bases for categorising and therefore find your pics.
    2. iPhoto is integrated throughout the entire OS. That means your Library is available throughout the OS, without iPhoto being open, by means of Media Browsers. You can search your Library from a Media Browser and so on.
    So, while, yes, the files are not browsable via the Finder, there's no need to. Everything you need to do can be done with iPhoto.
    Want to edit the pics in something other than iPhoto:
    You can set Photoshop (or any image editor) as an external editor in iPhoto. (Preferences -> General -> Edit Photo: Choose from the Drop Down Menu.) This way, when you double click a pic to edit in iPhoto it will open automatically in Photoshop or your Image Editor, and when you save it it's sent back to iPhoto automatically. This is the only way that edits made in another application will be displayed in iPhoto.
    Note that iPhoto sends a copy of the file to Photoshop, so when you save be sure to use the Save command, not Save As... If you use Save As then you're creating a new file and iPhoto has no way of knowing about this new file. iPhoto is preserving your original anyway.
    There are many, many ways to access your files in iPhoto:
    *For Users of 10.5 and later*
    You can use any Open / Attach / Browse dialogue. On the left there's a Media heading; your pics can be accessed there. Command-Click for selecting multiple pics.
    (The screenshot that accompanied this post showed the File -> Open dialogue, not a Finder window.)
    You can access the Library from the New Message Window in Mail.
    *For users of 10.4 and later* ...
    Many internet sites such as Flickr and SmugMug have plug-ins for accessing the iPhoto Library. If the site you want to use doesn't, the following will also work:
    To upload to a site that does not have an iPhoto Export Plug-in, the recommended way is to select the pic in the iPhoto Window, go File -> Export, and export the pic to the desktop, then upload from there. After the upload you can trash the pic on the desktop. It's only a copy, and your original is safe in iPhoto.
    This is also true for emailing with Web-based services. However, if you're using Gmail you can use iPhoto2GMail
    If you use Apple's Mail, Entourage, AOL or Eudora you can email from within iPhoto.
    If you use a Cocoa-based Browser such as Safari, you can drag the pics from the iPhoto Window to the Attach window in the browser.
    *If you want to access the files with iPhoto not running*:
    For users of 10.6 and later:
    You can download a free Services component from MacOSXAutomation which will give you access to the iPhoto Library from your Services Menu. Using the Services Preference Pane you can even create a keyboard shortcut for it.
    For Users of 10.4 and later:
    Create a Media Browser using Automator (takes about 10 seconds), or use the free utility Karelia iMedia Browser.
    Other options include:
    1. *Drag and Drop*: Drag a photo from the iPhoto Window to the desktop, there iPhoto will make a full-sized copy of the pic.
    2. *File -> Export*: Select the files in the iPhoto Window and go File -> Export. The dialogue will give you various options, including altering the format, naming the files and changing the size. Again, producing a copy.
    3. *Show File*: Right- (or Control-) Click on a pic and in the resulting dialogue choose 'Show File'. A Finder window will pop open with the file already selected.
    Regards
    TD
