Essbase 7.1: no line separation in log files

We have recently upgraded to Essbase 7.1. We have the DELIMITEDMSG TRUE entry in our essbase.cfg file. Since the upgrade to 7.1 the log files for each application no longer have a CR/LF at the end of each entry. They are just run together into an extremely long line. However, when we look at the application window on the Essbase server the messages appear on separate lines. Has anyone else run into this problem? Is there a new essbase.cfg setting that we need to use? We take the log file for each application and load it into a logfile cube each month, and this becomes very difficult if each message is not on its own line in the file. Any help would be appreciated.

You could create a MaxL script that replaces the filters; when you call the MaxL script you could pass in a variable such as YR08 and use that variable in the script.
Cheers
John
http://john-goodwin.blogspot.com/

Similar Messages

  • Line separator for text files

    Hello
    I am writing a tab-delimited file which gets its data from a String[][] array. When I try opening the file it's messed up with square blocks in Notepad, but looks fine in WordPad. I do know that Unix has '\n' as the line separator, whereas Windows has '\r\n' and Mac has '\r'.
    The problem is that the file is created on a Unix system, but it is opened on Windows or Mac systems. Does anyone have a suggestion on how I can fix this? I did try using 'line.separator' but that doesn't help, which is obvious since it uses the Unix separator.
    Please let me know if anyone can help
    I am writing to the file using PrintWriter
    thanx

    Isn't there a way just to know what line separator is used in the original file?
    The following will check the line separator of a file:
    final public class TestSeparator {
        public static void main(String[] args) throws Exception {
            java.io.BufferedReader br = null;
            int k, temp = 0;
            try {
                // the file name and encoding are arbitrary
                br = new java.io.BufferedReader(new java.io.InputStreamReader(
                        new java.io.FileInputStream("TestSeparator.java"), "UTF8"));
                while ((k = br.read()) != -1) {
                    // a lone CR (old Mac style) ends a line on its own
                    if (temp == 0xd && k != 0xa) System.out.println();
                    if (k == 0xd) System.out.print("r");   // carriage return
                    if (k == 0xa) {                        // line feed: "rn" means \r\n, "n" alone means \n
                        System.out.print("n");
                        System.out.println();
                    }
                    temp = k;
                }
                System.out.println();
            } finally {
                if (br != null) br.close();
            }
        }
    }
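    If the goal is simply to produce a file that Notepad can read, another option is to write the separator explicitly instead of relying on 'line.separator'. Here is a minimal sketch with PrintWriter (the sample data and output file name are just placeholders), writing CR/LF no matter which platform the code runs on:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class WriteWindowsFile {
        public static void main(String[] args) throws IOException {
            String[][] data = { { "a", "b" }, { "c", "d" } };   // sample rows
            PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("out.txt")));
            try {
                for (String[] row : data) {
                    out.print(String.join("\t", row));   // tab-delimited columns
                    out.print("\r\n");                   // explicit CR/LF, so Notepad shows separate lines
                }
            } finally {
                out.close();
            }
        }
    }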

  • Best way to extract lines from a log file in Linux

    I have a log file which is around 7 GB, and I want to extract the lines within a certain time range, which should total about half of the size, 3.5 GB. The lines start with a timestamp like "2014-03-11 17:35:00". I want to extract the lines between "2014-03-11 17:35:00" and "2014-03-11 18:05:00". What would be the best way to do this? There may be a grep or sed command for it.
    I hope my question is clear.
    Please reply to my query.
    Regards

    How about the following:
    awk '/2014-03-11 17:35:00/, /2014-03-11 18:05:00/' infile > outfile
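    If awk is not available, or you want the same thing inside a Java tool, the range filter can be sketched as a streaming reader so the 7 GB file is never loaded into memory. The start and end strings are the ones from the question; the file names are placeholders:

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class ExtractRange {
        public static void main(String[] args) throws IOException {
            String start = "2014-03-11 17:35:00";
            String end   = "2014-03-11 18:05:00";
            boolean inRange = false;
            try (BufferedReader in = new BufferedReader(new FileReader("infile"));
                 PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("outfile")))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (!inRange && line.startsWith(start)) inRange = true;  // first marker seen
                    if (inRange) out.println(line);
                    if (inRange && line.startsWith(end)) break;              // stop after the end marker
                }
            }
        }
    }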

  • Too many lines in my log file

    Hi everybody,
    I've made a Tomcat project under Eclipse and I'm using log4j.
    I've configured my log4j in the "log4j-config.xml" file.
    I've made one file per level (DEBUG, INFO, WARN and ERROR), but I'm getting too many lines in my "debug.log" file.
    Example:
    17:52:39 - [main] [DEBUG] org.apache.commons.beanutils.BeanUtils : BeanUtils.populate(org.apache.struts.tiles.TilesPlugin@14d556e, {definitions-parser-validate=true, definitions-parser-details=2, definitions-debug=2, moduleAware=true, definitions-config=/WEB-INF/struts/struts-tiles-defs.xml})
    17:52:39 - [main] [DEBUG] org.apache.commons.beanutils.BeanUtils : setProperty(org.apache.struts.tiles.TilesPlugin@14d556e, definitions-parser-validate, true)
    17:52:39 - [main] [DEBUG] org.apache.commons.beanutils.BeanUtils : setProperty(org.apache.struts.tiles.TilesPlugin@14d556e, definitions-parser-details, 2)
    Can someone tell me how to filter my logs?
    Here is a part of my "log4j-config.xml" file:
    <appender name="DEBUG" class="org.apache.log4j.FileAppender">
              <param name="File" value="${log.path}/debug.log" />
    <param name="Threshold" value="DEBUG" />
    <param name="Append" value="false" />
    <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{HH:mm:ss} - [%t] [%-5p] %c : %m%n" />
    </layout>
         </appender>
    Thanks in advance.

    Hi everybody,
    I've made a Tomcat project under Eclipse and I'm using log4j.
    I've configured my log4j in the "log4j-config.xml" file.
    I've made one file per level (DEBUG, INFO, WARN and ERROR), but I'm getting too many lines in my "debug.log" file.

    Then don't use that level. The whole point of the various levels is to be able to trade off level of detail for volume of output. Debug is meant to be very verbose.
    You can set various classes or packages to log at different levels, overriding the default for that logger. So if you don't want debug output for com.acme.whatever, then in log4j.properties or log4j.xml or whatever, you can configure that package and all "subpackages" for info, warn, or error. See log4j's docs for details.
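    For example, in the log4j 1.x XML format already shown above, a per-package override could look like the following sketch (the package name is only a placeholder):

    <!-- keep com.acme.whatever and its subpackages at INFO even though the root level is DEBUG -->
    <logger name="com.acme.whatever">
        <level value="INFO" />
    </logger>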

  • How can I write my Adobe AIR application tracing lines into a log file

    I have a question about log files for an AIR application.
    How can I make my application write all tracing and exceptions into a log file?

    I think if you publish a -debug SWF it will log to flashlog.txt.

  • A real task (to search and display a few lines from a log file)

    have a look at the following link
    http://computing.unn.ac.uk/staff/cgpb2/public_html/log.html
    I need to display to the users only the Hourly Transmission Statistics (Bytes Sent Requests Time) in tabular format from the above-mentioned link.
    This is not easy.
    Can anyone write down the code please?
    Thank you very much.

    What have you done so far?
    What kind of approach are you going to take (aside from asking others to do it all for you)?
    Have you written the algorithm in pseudo-code? If so, can we see it? We might be able to give you a few pointers.

  • How to see SOP lines of my adapter code in my log files

    Hi,
    I am not able to see my SOP lines in the log files when my adapter executes. It's strange but it's true. I changed the log.properties file to DEBUG for adapters.
    Do I have to set any other configuration change?
    Please suggest.
    Thanks,
    Kalpana.

    Our OIM version is 9.1.0.18.
    Also I have one question: if a jar file gets compiled with one (lower) JDK version, and I then make changes to the same Java code and build the same jar file with a higher JDK version, does it take the changes into consideration while executing that jar file?
    The jar file I am testing was built by my colleague with the lower JDK version; I made changes to the same Java code and created the jar file with the higher JDK version.
    When I run the code, my SOP lines, log.debug, log.error and log.info are not getting printed in the log file. I don't know where I am going wrong.
    If anybody has any information, then please share.

  • Getting the Log File Pattern Matched Line Count metric to work?

    Hi
    Has anyone been able to get this to work with more complex Perl expressions?
    Basically I can get simple, single expressions to match.
    E.g. *(does not exist)* will match the text *"does not exist"* anywhere in a file.
    However, if I want to match either does not exist OR file not found, I should be able to do something like
    *(does not exist)|(file not found)* OR *(does not exist|file not found)*, but this just doesn't work.
    I want to be able to do more complex expressions, using *\i* (ignore case), *^* (start of line) and *$* (end of line) expressions too.
    I can test the matching functionality using a simple Perl program, and I know the expression works in Perl.
    Oracle is supposed to be using a Perl pattern match but seems to fail unless it is a single simple expression.
    Has anyone been able to use this functionality at all?
    Many thanks.

    I had a chance to look into the parse-log1.pl script, which is responsible for monitoring the log files and generating the alerts for EMGC. I am just pasting the comments given in this file:
    # This script is used in EMD to parse log files for critical and
    # warning patterns. The script holds the last line number searched
    # for each file in a state file for each time the script is run. The
    # next run of the script starts from the next line. The state file name
    # is read from the environment variable $EM_STATE_FILE, which must
    # be set for the script to run.
    But in my case this is not happening. According to the log files it is storing the last read line of the log file, but it is not using that info on its next run; the file is scanned from the beginning again. This is not the case with emagent.log file monitoring, which is working fine, as expected and as explained in the script file.
    From my observation this is because the script is rotating my log file on each run, and I don't know how to stop it. I just want to scan my log file; I don't want it rotated on each run of the script. Could anyone please help me solve this problem?
    Thanks
    Ashok Chava.
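    Going back to the original pattern question: as a quick sanity check that the alternation itself is well formed (independent of how EM hands it to the Perl script), the same pattern can be exercised with Java's regex engine, which accepts the same basic alternation and (?i) syntax. The sample strings below are made up:

    import java.util.regex.Pattern;

    public class PatternCheck {
        public static void main(String[] args) {
            // case-insensitive alternation, same idea as (does not exist|file not found)
            Pattern p = Pattern.compile("(?i)(does not exist|file not found)");
            String[] samples = {
                "ORA-00942: table or view does not exist",
                "ERROR: File Not Found while reading the datafile",
                "nothing to report"
            };
            for (String s : samples) {
                System.out.println(p.matcher(s).find() + " : " + s);
            }
        }
    }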

  • Need help in understanding the result logged in gateway log file

    I have installed unixODBC.x86_64 (2.2.14) and mysql-connector-odbc-5.3.4-linux-el6-x86-64 on Oracle Linux Server 6.5 64-bit with Oracle 12c and MySQL Community Server 5.6.14. The Oracle database character set is AL32UTF8 and the MySQL database character set is latin1. I have copied the content of a gateway log file to the bottom of this entry and boldfaced the lines that I hope someone can explain to me: what do they mean and what should I do?
    Gateway init file has the following configuration:
    HS_FDS_CONNECT_INFO = myodbc5
    HS_FDS_TRACE_LEVEL = DEBUG
    HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
    HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P15
    set ODBCSYSINI=/tmp/shared
    ODBC.ini located in /tmp/shared has the following configuration:
    [myodbc5]
    Driver          = /tmp/shared/mysql-odbc/lib/libmyodbc5w.so
    DATABASE        = peter
    DESCRIPTION     = MySQL ODBC 5.3 Unicode Driver
    SERVER          = localhost
    UID             = peter
    PASSWORD        = peter
    SOCKET          = /var/lib/mysql/mysql.sock
    Listener.ora has the following configuration:
    LISTENER =
      (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))
    (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY)))
    SID_LIST_LISTENER =
       (SID_LIST =
         (SID_DESC =
           (GLOBAL_DBNAME = PETER)
           (ORACLE_HOME = /opt/oracle/12c)
           (SID_NAME = PETER)
         (SID_DESC =
           (SID_NAME = mysqlodbc)
           (ORACLE_HOME = /opt/oracle/12c)
           (PROGRAM = dg4odbc)
    Tnsnames.ora has the following configuration:
    mysqlodbc =
    (DESCRIPTION=
    (ADDRESS=
    (PROTOCOL=TCP) (HOST=localhost) (PORT=1521)
    (CONNECT_DATA=
    (SID=mysqlodbc))
    (HS=OK)
    Gateway log file has the following content:
    Oracle Corporation --- TUESDAY   FEB 17 2015 14:08:39.306
    Heterogeneous Agent Release
    12.1.0.1.0
    Oracle Corporation --- TUESDAY   FEB 17 2015 14:08:39.306
        Version 12.1.0.1.0
    Entered hgogprd
    HOSGIP for "HS_FDS_TRACE_LEVEL" returned "DEBUG"
    Entered hgosdip
    setting HS_OPEN_CURSORS to default of 50
    setting HS_FDS_RECOVERY_ACCOUNT to default of "RECOVER"
    setting HS_FDS_RECOVERY_PWD to default value
    setting HS_FDS_TRANSACTION_LOG to default of HS_TRANSACTION_LOG
    setting HS_IDLE_TIMEOUT to default of 0
    setting HS_FDS_TRANSACTION_ISOLATION to default of "READ_COMMITTED"
    setting HS_NLS_NCHAR to default of "AL32UTF8"
    setting HS_FDS_TIMESTAMP_MAPPING to default of "DATE"
    setting HS_FDS_DATE_MAPPING to default of "DATE"
    setting HS_RPC_FETCH_REBLOCKING to default of "ON"
    setting HS_FDS_FETCH_ROWS to default of "100"
    setting HS_FDS_RESULTSET_SUPPORT to default of "FALSE"
    setting HS_FDS_RSET_RETURN_ROWCOUNT to default of "FALSE"
    setting HS_FDS_PROC_IS_FUNC to default of "FALSE"
    setting HS_FDS_MAP_NCHAR to default of "TRUE"
    setting HS_NLS_DATE_FORMAT to default of "YYYY-MM-DD HH24:MI:SS"
    setting HS_FDS_REPORT_REAL_AS_DOUBLE to default of "FALSE"
    setting HS_LONG_PIECE_TRANSFER_SIZE to default of "65536"
    setting HS_SQL_HANDLE_STMT_REUSE to default of "FALSE"
    setting HS_FDS_QUERY_DRIVER to default of "TRUE"
    setting HS_FDS_SUPPORT_STATISTICS to default of "FALSE"
    setting HS_FDS_QUOTE_IDENTIFIER to default of "TRUE"
    setting HS_KEEP_REMOTE_COLUMN_SIZE to default of "OFF"
    setting HS_FDS_GRAPHIC_TO_MBCS to default of "FALSE"
    setting HS_FDS_MBCS_TO_GRAPHIC to default of "FALSE"
    Default value of 64 assumed for HS_FDS_SQLLEN_INTERPRETATION
    setting HS_CALL_NAME_ISP to "gtw$:SQLTables;gtw$:SQLColumns;gtw$:SQLPrimaryKeys;gtw$:SQLForeignKeys;gtw$:SQLProcedures;gtw$:SQLStatistics;gtw$:SQLGetInfo"
    setting HS_FDS_DELAYED_OPEN to default of "TRUE"
    setting HS_FDS_WORKAROUNDS to default of "0"
    Exiting hgosdip, rc=0
    ORACLE_SID is "mysqlodbc"
    Product-Info:
      Port Rls/Upd:1/0 PrdStat:0
      Agent:Oracle Database Gateway for ODBC
      Facility:hsa
      Class:ODBC, ClassVsn:12.1.0.1.0_0017, Instance:mysqlodbc
    Exiting hgogprd, rc=0
    Entered hgoinit
    HOCXU_COMP_CSET=1
    HOCXU_DRV_CSET=46
    HOCXU_DRV_NCHAR=873
    HOCXU_DB_CSET=873
    HS_LANGUAGE is AMERICAN_AMERICA.WE8ISO8859P15
    LANG=en_US.UTF-8
    HOCXU_SEM_VER=121000
    HOCXU_VC2_MAX=4000
    HOCXU_RAW_MAX=2000
    Entered hgolofn at 2015/02/17-14:08:39
    HOSGIP for "HS_FDS_SHAREABLE_NAME" returned "/usr/lib64/libodbc.so"
    Entered hgolofns at 2015/02/17-14:08:39
    symbol_peflctx=0xac07fe0
    hoaerr:0
    Exiting hgolofns at 2015/02/17-14:08:39
    [... the same Entered hgolofns / symbol_peflctx / hoaerr:0 / Exiting hgolofns block repeats roughly forty more times, each with a different symbol_peflctx address ...]
    Entered hgolofns at 2015/02/17-14:08:39
    symbol_peflctx=0xac275e0
    hoaerr:0
    Exiting hgolofns at 2015/02/17-14:08:39
    Exiting hgolofn, rc=0 at 2015/02/17-14:08:39
    HOSGIP for "HS_OPEN_CURSORS" returned "50"
    HOSGIP for "HS_FDS_FETCH_ROWS" returned "100"
    HOSGIP for "HS_LONG_PIECE_TRANSFER_SIZE" returned "65536"
    HOSGIP for "HS_NLS_NUMERIC_CHARACTERS" returned ".,"
    HOSGIP for "HS_KEEP_REMOTE_COLUMN_SIZE" returned "OFF"
    HOSGIP for "HS_FDS_DELAYED_OPEN" returned "TRUE"
    HOSGIP for "HS_FDS_WORKAROUNDS" returned "0"
    HOSGIP for "HS_FDS_MBCS_TO_GRAPHIC" returned "FALSE"
    HOSGIP for "HS_FDS_GRAPHIC_TO_MBCS" returned "FALSE"
    Invalid value of 64 given for HS_FDS_SQLLEN_INTERPRETATION
    treat_SQLLEN_as_compiled = 1
    Exiting hgoinit, rc=0 at 2015/02/17-14:08:39
    Entered hgolgon at 2015/02/17-14:08:39
    reco:0, name:peter, tflag:0
    Entered hgosuec at 2015/02/17-14:08:39
    Exiting hgosuec, rc=0 at 2015/02/17-14:08:39
    HOSGIP for "HS_FDS_RECOVERY_ACCOUNT" returned "RECOVER"
    HOSGIP for "HS_FDS_TRANSACTION_LOG" returned "HS_TRANSACTION_LOG"
    HOSGIP for "HS_FDS_TIMESTAMP_MAPPING" returned "DATE"
    HOSGIP for "HS_FDS_DATE_MAPPING" returned "DATE"
    HOSGIP for "HS_FDS_MAP_NCHAR" returned "TRUE"
    HOSGIP for "HS_FDS_RESULTSET_SUPPORT" returned "FALSE"
    HOSGIP for "HS_FDS_RSET_RETURN_ROWCOUNT" returned "FALSE"
    HOSGIP for "HS_FDS_PROC_IS_FUNC" returned "FALSE"
    HOSGIP for "HS_FDS_REPORT_REAL_AS_DOUBLE" returned "FALSE"
    using peter as default schema
    HOSGIP for "HS_SQL_HANDLE_STMT_REUSE" returned "FALSE"
    Entered hgocont at 2015/02/17-14:08:39
    HS_FDS_CONNECT_INFO = "myodbc5"
    RC=-1 from HOSGIP for "HS_FDS_CONNECT_STRING"
    Entered hgogenconstr at 2015/02/17-14:08:39
    dsn:myodbc5, name:peter
    optn:
    Entered hgocip at 2015/02/17-14:08:39
    dsn:myodbc5
    Exiting hgocip, rc=0 at 2015/02/17-14:08:39
    Exiting hgogenconstr, rc=0 at 2015/02/17-14:08:39
    Entered hgolosf at 2015/02/17-14:08:39
    Exiting hgolosf, rc=0 at 2015/02/17-14:08:39
    DriverName:libmyodbc5w.so, DriverVer:05.03.0004
    DBMS Name:MySQL, DBMS Version:5.6.14
    Exiting hgocont, rc=0 at 2015/02/17-14:08:39
    SQLGetInfo returns Y for SQL_CATALOG_NAME
    SQLGetInfo returns 192 for SQL_MAX_CATALOG_NAME_LEN
    Exiting hgolgon, rc=0 at 2015/02/17-14:08:39
    Entered hgoulcp at 2015/02/17-14:08:39
    Entered hgowlst at 2015/02/17-14:08:39
    Exiting hgowlst, rc=0 at 2015/02/17-14:08:39
    SQLGetInfo returns 0x0 for SQL_SCHEMA_USAGE
    TXN Capable:3, Isolation Option:0xf
    SQLGetInfo returns 0 for SQL_MAX_SCHEMA_NAME_LEN
    SQL_SU_DML_STATEMENTS bit is not set. Schemas are not supported by FDS.
    SQLGetInfo returns 192 for SQL_MAX_TABLE_NAME_LEN
    SQLGetInfo returns 192 for SQL_MAX_PROCEDURE_NAME_LEN
    HOSGIP returned value of "TRUE" for HS_FDS_QUOTE_IDENTIFIER
    SQLGetInfo returns ` (0x60) for SQL_IDENTIFIER_QUOTE_CHAR
    Entered hgopoer at 2015/02/17-14:08:39
    Exiting hgopoer, rc=0 at 2015/02/17-14:08:39 with error ptr FILE:hgopoer.c LINE:195 ID:GetDiagRec error
    hgoulcp, line 2138: calling SQLFetch got sqlstate 00000
    3 instance capabilities will be uploaded
      capno:5964, context:0x00000000, add-info:        0
      capno:5989, context:0x00000000, add-info:        0
      capno:5992, context:0x0001ffff, add-info:        1, translation:"`"
    Exiting hgoulcp, rc=0 at 2015/02/17-14:08:39 with error ptr FILE:hgoulcp.c LINE:2140 ID:Translation text for Unicode literal not supported. Leaving HOACOPTTSTN1 capability off.
    Entered hgouldt at 2015/02/17-14:08:39
    NO instance DD translations were uploaded
    Exiting hgouldt, rc=0 at 2015/02/17-14:08:39
    Entered hgobegn at 2015/02/17-14:08:39
    tflag:0 , initial:1
    hoi:0xd991a328, ttid (len 24) is ...
      00: 50455445 522E3864 63666537 31332E31  [PETER.8dcfe713.1]
      10: 2E33312E 32303235                    [.31.2025]
                     tbid (len 21) is ...
      00: 50455445 525B312E 33312E32 3032355D  [PETER[1.31.2025]]
      10: 5B312E34 5D                          [[1.4]]
    Exiting hgobegn, rc=0 at 2015/02/17-14:08:39
    Entered hgodtab at 2015/02/17-14:08:39
    count:1
      table: mysql_table1
    Allocate hoada[0] @ 0x10b6300
    Entered hgopcda at 2015/02/17-14:08:39
    Column:1(dest): dtype:1 (CHAR), prc/scl:5/0, nullbl:1, octet:5, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2015/02/17-14:08:39
    The hoada for table mysql_table1 follows...
    hgodtab, line 1079: Printing hoada @ 0x10b6300
    MAX:1, ACTUAL:1, BRC:1, WHT=6 (TABLE_DESCRIBE)
    hoadaMOD bit-values found (0x200:TREAT_AS_CHAR)
    DTY      NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
      1 CHAR Y          5          5   0/  0    0   0 200 dest
    Exiting hgodtab, rc=0 at 2015/02/17-14:08:39
    Entered hgodafr, cursor id 0 at 2015/02/17-14:08:39
    Free hoada @ 0x10b6300
    Exiting hgodafr, rc=0 at 2015/02/17-14:08:39
    Entered hgopars, cursor id 1 at 2015/02/17-14:08:39
    type:0
    SQL text from hgopars, id=1, len=39 ...
         00: 53454C45 43542041 312E6064 65737460  [SELECT A1.`dest`]
         10: 2046524F 4D20606D 7973716C 5F746162  [ FROM `mysql_tab]
         20: 6C653160 204131                      [le1` A1]
    Exiting hgopars, rc=0 at 2015/02/17-14:08:39
    Entered hgoopen, cursor id 1 at 2015/02/17-14:08:39
    hgoopen, line 87: NO hoada to print
    Deferred open until first fetch.
    Exiting hgoopen, rc=0 at 2015/02/17-14:08:39
    Entered hgodscr, cursor id 1 at 2015/02/17-14:08:39
    Allocate hoada @ 0x10b6300
    Entered hgodscr_process_sellist_description at 2015/02/17-14:08:39
    Entered hgopcda at 2015/02/17-14:08:39
    Column:1(dest): dtype:1 (CHAR), prc/scl:5/0, nullbl:1, octet:5, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2015/02/17-14:08:39
    hgodscr, line 470: Printing hoada @ 0x10b6300
    MAX:1, ACTUAL:1, BRC:100, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x200:TREAT_AS_CHAR)
    DTY      NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
      1 CHAR Y          5          5   0/  0    0   0 200 dest
    Exiting hgodscr, rc=0 at 2015/02/17-14:08:39
    Entered hgoftch, cursor id 1 at 2015/02/17-14:08:39
    hgoftch, line 135: Printing hoada @ 0x10b6300
    MAX:1, ACTUAL:1, BRC:100, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x200:TREAT_AS_CHAR)
    DTY      NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
      1 CHAR Y          5          5   0/  0    0   0 200 dest
    Performing delayed open.
    SQLBindCol: column 1, cdatatype: 1, bflsz: 6
    SQLFetch: row: 1, column 1, bflsz: 6, bflar: 5
    SQLFetch: row: 1, column 1, bflsz: 6, bflar: 5, (bfl: 5, mbl: 5)
    1 rows fetched
    Exiting hgoftch, rc=0 at 2015/02/17-14:08:39
    Entered hgoftch, cursor id 1 at 2015/02/17-14:08:39
    hgoftch, line 135: Printing hoada @ 0x10b6300
    MAX:1, ACTUAL:1, BRC:1, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x200:TREAT_AS_CHAR)
    DTY      NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
      1 CHAR Y          5          5   0/  0    0   0 200 dest
    0 rows fetched
    Exiting hgoftch, rc=1403 at 2015/02/17-14:08:39
    Entered hgoclse, cursor id 1 at 2015/02/17-14:23:33
    Exiting hgoclse, rc=0 at 2015/02/17-14:23:33
    Entered hgodafr, cursor id 1 at 2015/02/17-14:23:33
    Free hoada @ 0x10b6300
    Exiting hgodafr, rc=0 at 2015/02/17-14:23:33
    Entered hgocomm at 2015/02/17-14:23:33
    keepinfo:0, tflag:1
       00: 50455445 522E3864 63666537 31332E31  [PETER.8dcfe713.1]
       10: 2E33312E 32303235                    [.31.2025]
                     tbid (len 21) is ...
       00: 50455445 525B312E 33312E32 3032355D  [PETER[1.31.2025]]
       10: 5B312E34 5D                          [[1.4]]
    cmt(0):
    Entered hgocpctx at 2015/02/17-14:23:33
    Exiting hgocpctx, rc=0 at 2015/02/17-14:23:33
    Exiting hgocomm, rc=0 at 2015/02/17-14:23:33
    Entered hgolgof at 2015/02/17-14:23:33
    tflag:1
    Exiting hgolgof, rc=0 at 2015/02/17-14:23:33
    Entered hgoexit at 2015/02/17-14:23:33
    Exiting hgoexit, rc=0
    Entered horcrces_CleanupExtprocSession at 2015/02/17-14:23:33
    Entered horcrpooe_PopOciEnv at 2015/02/17-14:23:33
    Entered horcrfoe_FreeOciEnv at 2015/02/17-14:23:33
    Exiting horcrfoe_FreeOciEnv at 2015/02/17-14:23:33
    Entered horcrfse_FreeStackElt at 2015/02/17-14:23:33
    Exiting horcrfse_FreeStackElt at 2015/02/17-14:23:33
    Exiting horcrpooe_PopOciEnv at 2015/02/17-14:23:33
    Exiting horcrces_CleanupExtprocSession at 2015/02/17-14:23:33

    Hi Matt,
    There is no error and the data is returned. I am just curious about those lines in the log file and whether they imply some underlying issue.
    Thanks,
    Peter

  • Working Linux command to grep date range in a log file

    Linux Gurus,
    Could you please help me with a command to show only those lines in a log file which fall within some date range, probably using the grep command.
    Our server logs are in the following format: <Jun 23, 2013 12:45:02 AM UTC>
    Regards,
    Varun

    Perhaps you can do the following:
    Go to Google.
    Type "working Linux command to grep date range in a log file"
    See what results you might get from that search. (I did, and got more than 600,000 search results.)
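    If the search does not turn up a ready-made one-liner, the filter can also be sketched directly. The sketch below assumes every relevant line begins with a timestamp in the <MMM d, yyyy hh:mm:ss a z> form shown in the question; the log file name and the range bounds are placeholders:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;

    public class DateRangeGrep {
        public static void main(String[] args) throws Exception {
            SimpleDateFormat fmt = new SimpleDateFormat("MMM d, yyyy hh:mm:ss a z", Locale.ENGLISH);
            Date from = fmt.parse("Jun 23, 2013 12:00:00 AM UTC");
            Date to   = fmt.parse("Jun 23, 2013 01:00:00 AM UTC");
            try (BufferedReader in = new BufferedReader(new FileReader("server.log"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    int close = line.indexOf('>');
                    if (!line.startsWith("<") || close < 0) continue;   // no leading timestamp on this line
                    try {
                        Date d = fmt.parse(line.substring(1, close));
                        if (!d.before(from) && !d.after(to)) System.out.println(line);
                    } catch (ParseException ignore) {
                        // the text inside <...> was not a timestamp
                    }
                }
            }
        }
    }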

  • SNTP log file on cRIO9024

    Hello,
    I have a question regarding the file "ts_sntp_log.txt" that is written to the internal disk of the cRIO when the device is synchronized to an NTP server. I have performed a measurement where it is relevant to know the network latency. I am now trying to analyze the log file in Matlab.
    This is the only info I can find regarding the time format on the file:
    http://digital.ni.com/public.nsf/allkb/F2B057C72B537EA2862572D100646D43
    and
    http://www.faqs.org/rfcs/rfc2030.html
    The timestamps in my Tdms file are correct, so I suppose that the cRIO managed to synchronize to the NTP server during the measurement.
    These are a few lines from my log file:
    2eb2c57243acbbd3: 98999630.450878, 000008199a840185, 2eb2c5724049c687, 2eb2c572404be4a6, 000008199a876693
    2eb2c5803c13647d: 98999633.822424, 0000081afc93ea5f, 2eb2c58038b10bbc, 2eb2c58038b32a62, 0000081afc976f60
    2eb2c58e347e54d9: 98999644.899869, 0000081c5ea3cdb7, 2eb2c58e3120db8b, 2eb2c58e312365e0, 0000081c5ea85bd3
    As I have interpreted the reference documents, the first number is the time on the cRIO formatted so that the first 8 digits are the number of whole seconds starting from 1900-01-01. My measurement was carried out in 2010 so this number should be
    (2011-1900)*365*24*3600 = 3.500496e+009
    I use the following Matlab lines to convert the hex-number into whole seconds
    ts = '2eb2c57243acbbd3';
    sec_from_1900 = hex2dec(['00000000',ts(1:8)])
    which turns out to be 783468043, i.e., approx 25 years.
    Am I doing something wrong, or is the log file corrupt? If it is corrupt, how can the timing info in my Tdms file still be correct?
    I would be very grateful for some help on this.
    Regards // Johan

    Hello again everyone,
    Is there really nobody out there who has some information on this? I still believe that the hexadecimal numbers in the file ts_sntp_log.txt are correct, since the timestamps written in my TDMS files are correct. I cannot understand the format in the log file though, and the support people at NI Sweden have not been able to help me either. I now get the year 1999.
    Does anyone know where to find the exact algorithm for recalculating timestamps (in seconds) in LabVIEW into yy-mm-dd HH:MM:SS.FFFFFF format?
    I attach my files and would be very grateful if somebody could have a look at them. Unfortunately the upload function rejected Matlab files, so I had to change the file extension to .txt. Just change the following:
       replaceinfile.txt --> replaceinfile.m 
       import_ts_sntp_log.txt --> import_ts_sntp_log.m
    and it should work fine.
    Regards
    Johan
    Attachments:
    ts_sntp_log.txt 157 KB
    replaceinfile.txt 3 KB
    import_ts_sntp_log.txt 3 KB
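    This does not resolve the discrepancy Johan describes, but for reference: if the first 8 hex digits really are whole seconds since the NTP epoch (1900-01-01 00:00:00 UTC, per RFC 2030), the conversion to a calendar date can be sketched as below. The constant 2208988800 is the number of seconds between the NTP epoch and the Unix epoch; whatever date this prints for the sample value is only as good as that assumption:

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class NtpSeconds {
        // seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)
        private static final long NTP_TO_UNIX_OFFSET = 2208988800L;

        public static void main(String[] args) {
            String ts = "2eb2c57243acbbd3";                               // first field from the log
            long secondsSince1900 = Long.parseLong(ts.substring(0, 8), 16);
            long unixMillis = (secondsSince1900 - NTP_TO_UNIX_OFFSET) * 1000L;
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
            System.out.println(secondsSince1900 + " s since 1900 -> " + fmt.format(new Date(unixMillis)));
        }
    }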

  • Location of useful log files

    Hello,
    Could someone tell me the location of BI Publisher's major log files?
    I am using BIP Enterprise Edition 10.1.3.2
    Thanks,
    Matt Soukup

    I'm using same version installed on Red Hat Linux.
    Here I put some lines of the xdo.log file which was generated by BIP in the '/xdo/tmp' directory:
    Tmp file for generated document : /u01/oracle/10.1.3/j2ee/home/applications/xmlpserver/xmlpserver/xdo/tmp/xmlp39364631.tmp
    Tmp file for XML data : /u01/oracle/10.1.3/j2ee/home/applications/xmlpserver/xmlpserver/xdo/tmp/xmlp7827625.tmp
    Tmp file for bursting control file : /u01/oracle/10.1.3/j2ee/home/applications/xmlpserver/xmlpserver/xdo/tmp/xmlp15393381.tmp
    [081507_094312410][][STATEMENT] Logger.init(): *** DEBUG MODE IS ON. ***
    [081507_094312410][][STATEMENT] Logger.init(): LogDir=/u01/oracle/10.1.3/xmldebug
    [081507_094318608][][STATEMENT] Logger.init(): *** DEBUG MODE IS ON. ***
    [081507_094318609][][STATEMENT] Logger.init(): LogDir=/u01/oracle/10.1.3/xmldebug
    Log file 'xdo_081507_094318610_fo_data_1.xsl' is created.
    '/u01/oracle/10.1.3/j2ee/home/applications/xmlpserver/xmlpserver/xdo/tmp//081507_094318375/28.xml'.
    Log file 'xdo_081507_094318610_fo_data_1.xml' is created.
    '/u01/oracle/10.1.3/j2ee/home/applications/xmlpserver/xmlpserver/xdo/tmp//081507_094318375/xmlp7414637229.pdf'.
    Log file 'xdo_081507_094318610_fo_out.out' is created.

  • Essbase log files

    Does anyone know what the info codes are at the end of the various lines in the Essbase log files? E.g.: [Wed Oct 22 08:18:17 2003]Local/ESSBASE0///Info(1051164). I'd like to understand what "Info(1051164)" is.

    If you have licensed the Essbase API, on your essbase server there should be an essbase\api\include folder. In that folder there should be a file called message.txt that has a list of all the codes and messages associated with them. If you have a newer version of Essbase, some of the messages are documented in greater detail in an html document in the docs folder of the Essbase server. Look for /essbase/docs/errmsgs/erhelp.htm (or something like that). Good luck, Bruce

  • Essbase 7.1 - trouble reading log files

    We have recently upgraded to Essbase 7.1. We have the DELIMITEDMSG TRUE entry in our essbase.cfg file. Since the upgrade to 7.1 the log files for each application no longer have a CR/LF at the end of each entry. They are just run together into an extremely long line. However, when we look at the application window on the Essbase server the messages appear on separate lines. Has anyone else run into this problem? Is there a new essbase.cfg setting that we need to use? We take the log file for each application and load it into a logfile cube each month and this becomes very difficult if each message is not on its own line in the file. Any help would be appreciated.

    I know that in ASO, each parent should have at least one child that aggregates with +. Did you make sure that the member properties are OK (i.e. not all children set to ~)? Not respecting this prevents the outline from being saved. If you want to keep ~ for all children, then the parent should be Label Only. I hope this helps. Denis

  • Rule created to monitor single line entries in a text .log file does not work

    Hi All,
    I have this strange issue. I created a script which generates a .log file and I have configured a rule to monitor it. Whenever the .log is altered, the alert does not come at all in SCOM 2012 R2.
    I want this alert to be raised when one specific line in the middle of the file is altered from LISTENING to NOT LISTENING.
    I have configured it. It triggered an alert the first time and then it did not trigger at all.
    I created this rule, disabled it, and overrode the value to true only for the MS acting as the watcher for this log file.
    The log file is generated on the local drive of the MS itself.
    I changed the log watcher to a different server and also pointed the application data source to a network location when the watcher was changed, so it could pull the log accordingly.
    The log is generated on the MS itself. I tried using both the local location where the log is located and the same path converted to a network location; neither helped.
    C:\Port_checker is the directory where the .log file is located, and there is no other log file present, only one.
    I also changed parameters such as "Contains", "Wildcard matches", etc., but nothing worked.
    The SCOM Action account has Full permissions on all servers over the entire forest itself.
    Target used to create this rule is "Windows server operating system"
    Can anyone help me, please?
    Gautam.75801

    Since you have a script that updates a file line from "LISTENING" to "NOT LISTENING", you might want to try to configure a Two State Script Unit Monitor rather than a rule. Your script just needs to check the content of the log file, say every 5 minutes, generate an alert when it matches "NOT LISTENING", and clear it when it changes back to "LISTENING".
    http://www.systemcentercentral.com/wp-content/uploads/2009/04/HOW-TO_2-state_ScriptMonitor.pdf
    Cheers,
    Martin
    Blog:
    http://sustaslog.wordpress.com 
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
