Getting the Log File Pattern Matched Line Count metric to work?

Hi
Has anyone been able to get this to work with more complex Perl expressions?
Basically I can get simple, single expressions to match.
E.g. *(does not exist)* will match the text *"does not exist"* anywhere in a file.
However, if I want to match either "does not exist" OR "file not found" I should be able to do something like
*(does not exist)|(file not found)* or *(does not exist|file not found)*, but this just doesn't work.
I want to be able to use more complex expressions too, with modifiers such as *(?i)* (ignore case) and the *^* (start of line) and *$* (end of line) anchors.
I can test the matching functionality using a simple Perl program, and I know the expression works in Perl.
Oracle is supposed to be using a Perl pattern match, but it seems to fail unless it is a single simple expression.
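For what it's worth, the alternation syntax itself is standard. A quick sanity check outside EM (written here in Java, whose java.util.regex accepts the same pattern; the sample lines are made up) behaves as expected:

import java.util.regex.Pattern;

public class PatternCheck {
    public static void main(String[] args) {
        // (?i) makes the match case-insensitive; | provides the alternation
        Pattern p = Pattern.compile("(?i)(does not exist|file not found)");
        String[] samples = {
            "ORA-00942: table or view does not exist",  // should match
            "ERROR: File Not Found while opening log",  // should match (case-insensitive)
            "everything is fine"                        // should not match
        };
        for (String line : samples) {
            System.out.println(p.matcher(line).find() + " : " + line);
        }
    }
}

If a pattern passes a standalone test like this (and a plain Perl test) but still fails in the metric, the problem may be in how the expression is being entered or stored in the metric definition rather than in the pattern itself.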
Has anyone been able to use this functionality at all?
Many thanks.

I had a chance to look into the parse-log1.pl script, which is responsible for monitoring the log files and generating the alerts for EMGC. I am just pasting the comments given in this file:
# This script is used in EMD to parse log files for critical and
# warning patterns. The script holds the last line number searched
# for each file in a state file for each time the script is run. The
# next run of the script starts from the next line. The state file name
# is read from the environment variable $EM_STATE_FILE, which must
# be set for the script to run.
but in my case this is not happening. According to the log files it is storing the last read line of the log file, but it is not using that information in its next run; the file is scanned from the beginning again. This is not the case with emagent.log file monitoring, which works fine, as expected and as explained in the script file.
From my observation this is because the script is rotating my log file on each run, and I don't know how to stop it. I just want to scan my log file; I don't want it rotated on each run of the script. Could anyone please help me solve this problem?
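For reference, the mechanism those comments describe (remember the last line searched in a state file, resume from the next line on the following run) is simple to picture. Below is a rough sketch of that idea only, not the actual parse-log1.pl logic, written in Java with made-up file names:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Pattern;

public class IncrementalLogScan {
    public static void main(String[] args) throws IOException {
        Path log = Paths.get("my_app.log");          // hypothetical log file
        Path state = Paths.get("my_app.log.state");  // hypothetical state file
        Pattern critical = Pattern.compile("(?i)(does not exist|file not found)");

        // Line number reached on the previous run (0 if this is the first run).
        long lastLine = Files.exists(state)
                ? Long.parseLong(Files.readString(state).trim())
                : 0L;

        long current = 0;
        try (BufferedReader r = Files.newBufferedReader(log)) {
            String line;
            while ((line = r.readLine()) != null) {
                current++;
                if (current <= lastLine) continue;   // already seen on a previous run
                if (critical.matcher(line).find()) {
                    System.out.println("ALERT line " + current + ": " + line);
                }
            }
        }
        // Remember where we stopped so the next run starts from the next line.
        Files.writeString(state, Long.toString(current));
    }
}

If the state file were being reset or ignored (or the file rotated out from under it), the symptom would be exactly what you describe: the whole log is rescanned from the beginning on every run.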
Thanks
Ashok Chava.

Similar Messages

  • How do I set up multiple pattern matching VIs and make overlapping pattern matches count as one?

    Hello! I'm a student and I'm currently making a project using pattern matching.
    My patterns are of chick feet.
    I created multiple pattern matching VIs to detect all the feet, because I find it difficult/impossible to match all the feet with a single pattern/template.
    However, when using multiple pattern matching VIs, some pattern matches detect the same foot, hence overlapping.
    So how can I make the overlapping pattern matches count as one?
    Thank you in advance

    Thank you for replying Sir Zwired1.
    I'm still a newbie at using LabVIEW, so pardon me if I can't understand fully.
    The objective of my project is to detect all the feet through pattern matching and count the pattern matches made.
    "Keep a 2D array of counts, initialized to zero and the same size as your array of possible locations, and increment the value every time you get a match. If multiple pattern matching attempts result in a match a given location in your count array might be "3" but all you care about is if the number is greater than zero."
    I'm sorry, but how do you do this? BTW, I'm using vision assistant.
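    For what it's worth, the bookkeeping in the quoted advice boils down to: mark every location a match lands on, then count each marked location once no matter how many templates hit it. Here is a rough sketch of that counting logic outside LabVIEW (plain Java, with a hypothetical grid size and hypothetical match coordinates):

    public class OverlapCount {
        public static void main(String[] args) {
            int rows = 480, cols = 640;          // hypothetical image size
            int[][] hits = new int[rows][cols];  // one cell per possible match location

            // Pretend these coordinates came from several pattern-matching passes;
            // two of them land on the same spot (an overlapping match).
            int[][] matches = { {100, 200}, {100, 200}, {300, 400} };
            for (int[] m : matches) {
                hits[m[0]][m[1]]++;              // overlaps just raise the count
            }

            int detections = 0;
            for (int[] row : hits) {
                for (int c : row) {
                    if (c > 0) detections++;     // > 0 counts once, however many overlaps
                }
            }
            System.out.println("distinct detections: " + detections);  // prints 2
        }
    }

    In practice, matches from different templates will rarely land on exactly the same pixel, so you would bin the coordinates (or compare them with a distance tolerance) before incrementing the counts.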

  • My Apple ID was used to sign in to iCloud via an unknown web browser. Where can I get log files?

    My Apple ID was used to sign in to iCloud via an unknown web browser. Where can I get the log files? IP address?

    As léonie pointed out, you need to check whether or not this is really from Apple.

  • Getting log file when starting J2EE 1.4 Application Server

    I get a log file when starting up the Application Server. I am just a beginner.
    Is there anyone who could help me with setting it up?
    Thanks,
    AboliRanade

    What do you mean by you get a log file when starting up the AppServer? There is always a log file present. There should not be a new log file created every time you start the server. Look under <install directory>/domains/domain1/logs for the server log file.

  • Log4j : how to get log file name and directory

    My log4j is working fine. Below is how I define the property file
    log4j.rootCategory=DEBUG, A1
    log4j.appender.A1=org.apache.log4j.RollingFileAppender
    log4j.appender.A1.layout=org.apache.log4j.PatternLayout
    log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
    log4j.appender.A1.File=temp/log.txt
    I want to know, from my Java program, how to retrieve my log file "temp/log.txt", because I want to display it on the console and tell the user where to find the log file.
    Thanks
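    One approach, assuming A1 really is a file-backed appender as in the properties above, is to ask log4j at runtime: walk the root logger's appenders and call getFile() on any FileAppender (RollingFileAppender extends it). A small sketch of that idea:

    import java.util.Enumeration;
    import org.apache.log4j.Appender;
    import org.apache.log4j.FileAppender;
    import org.apache.log4j.Logger;

    public class LogFileLocator {
        // Returns the first file-backed appender's configured path, or null if none is found.
        public static String currentLogFile() {
            Enumeration<?> appenders = Logger.getRootLogger().getAllAppenders();
            while (appenders.hasMoreElements()) {
                Appender a = (Appender) appenders.nextElement();
                if (a instanceof FileAppender) {
                    return ((FileAppender) a).getFile();  // e.g. "temp/log.txt"
                }
            }
            return null;
        }

        public static void main(String[] args) {
            System.out.println("Log file: " + currentLogFile());
        }
    }

    The returned value is whatever was configured ("temp/log.txt" here), so it may be relative; resolve it against the working directory before showing it to the user if you need an absolute path.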

    Or perhaps I did not understand your requirement. Which of the following is it?
    1. Given some Java class, you need to do something with its source code?
    2. Given some file whose path is specified by user input or runtime configuration or compile-time constant, you need to do something with that file?
    3. Neither of the above?
    If 1: Can't do it. Don't need to do it. Don't waste your time trying. Unless you're writing something like a debugger. If so, then if you have to ask this question, you do not have the skills necessary for the broader task.
    If 2: Google for java io tutorial. Study it, try some code, and come back with a more specific question if you're still confused.
    If 3: Explain clearly what you're trying to accomplish and why you think this approach is the right one.

  • Get native file in ucm (ucm and terastack solution working together)

    Hi,
    We use Oracle Universal Content Management server (UCM). A lot of files are stored on the hard disk of the system using this content management system. Now we are using the TeraStack Solution (http://www.hie-electronics.com/) to create a backup of the data. The TeraStack Solution is an optical data storage system designed and manufactured to improve performance through affordable and reliable data storage management. The TeraStack Solution archives (writes) all files to DVDs, and then a module of it truncates all these files to zero bytes in size to save disk space. The TeraStack Solution then watches all archived files. If an application tries to open such a file, it intercepts and blocks the request, then restores the file from the DVD. During this process the calling application just waits for the file to open. After restoring, the TeraStack Solution sends a signal to the calling application, and the file with full contents is opened in the application.
    Now here is the problem:
    Files are checked in to the Oracle content server. The TeraStack Solution has burned these files to DVD and truncated them to zero bytes in size. Now when we click a link in the content server to get a checked-in file (the native file), the TeraStack Solution intercepts the request and restores the file from the DVD, then an open/save dialog box opens to either open or save the file to a location. We save the file to a folder, open it and find that there is no content in the file. It is still a zero-byte file.
    When we checked the file at its original location (remember UCM stores files in the vault folder), we found that the file was actually restored successfully by the TeraStack Solution, but the content server returned a zero-byte file. When we click the link for the same file again to get the native file in UCM, we get a fully restored file. I.e. on the first try to get the native file we get a zero-byte file although it has been restored by the TeraStack Solution successfully, and on the second try we get the file with full contents. On the second try the file has already been restored during the first try, and the TeraStack Solution ignores files greater than zero bytes in size; that's why the client gets a correct file on the second try. Having a size greater than zero means the file has already been restored.
    The link in UCM that is used to get the native file does not point directly to the desired file. It calls some code in UCM that transmits the required file back to the client. Something like this:
    http://localhost/idc/idcplg?IdcService=GET_FILE&dID=11&dDocName=test_06&allowInterrupt=1
    I think what is going on here is: when we click on a link to get a native file that is zero bytes in size, the content server creates a response for the file, appends the size of the file (currently zero) to the header of the response along with other info, and then tries to transmit the file to the client. At this point the TeraStack Solution intercepts the request and restores the file. After restoring, the TeraStack Solution sends a signal to the calling application that the file is restored. But the server has already created the response for the file, and it is not updated during or after the restore; that's why the client gets a zero-byte file.
    What I want is to somehow force the content server to wait until the file is fully restored by the TeraStack Solution and then transmit the fully restored file to the client. Is there any configuration setting for UCM that will achieve this goal?
    Any setting in bin/intradoc.cfg or config/config.cfg or something else?
    Need help.
    Environment:
    OS: Windows XP SP3
    Content Server 10gR3 (Deployed to: IIS, JDK used: v1.5)
    TeraStack Solution (Deployed to: JBoss)

    Thank you for your reply. Although the links to the weblayout version of the files work perfectly with TeraStack, we want to take a backup of the files in the vault folder. Is it possible to change the links that are used to get the native file? Can we somehow make these links point directly to the vault files? If yes, please tell us how.
    A custom Java service to download the native file could also work. Can you please give us some sample code and explain how to implement it in UCM?
    Please provide one of these solutions. I'm a newcomer to UCM, so please provide instructions in more detail.
    BTW, I tested the restoration of a file using a test .NET web application. I added code that downloads a user-selected file to the client. The code we used is as follows:
    1. FileInfo fi = new FileInfo(filepath);
    2. Response.Clear();
    3. Response.AddHeader("Content-Disposition", "attachment; filename=" + fi.Name);
    4. Response.AddHeader("Content-Length", fi.Length.ToString());
    5. Response.ContentType = GetFileContentType(fi.Extension);
    6. Response.TransmitFile(fi.FullName);
    7. Response.End();
    GetFileContentType() is a simple function that gets the content type for a file extension.
    Line #4 is important here. The file is zero bytes in size when the request to download it is received by the server, and the server just appends Content-Length=0 to the header. So even though the file is restored by TeraStack afterwards, the client still receives a zero-byte file. At this stage our test application had the same problem: not getting a native file with full contents.
    We removed line #4 and tried again to download a file through our test web application. This time the client got the file with full contents after the file was restored. So we think this Content-Length header is the one that needs to be handled.
    You may find this information useful if you decide to build a java service solution.
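    In case it helps whoever builds that custom Java service: the "stream the bytes, don't pre-announce the length" idea from the .NET test can be sketched as a plain servlet. This is only a generic illustration (hypothetical class name and vault path), not the actual UCM service API:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class NativeFileServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            File f = new File("/vault/test_06.doc");  // hypothetical vault path
            resp.setContentType("application/octet-stream");
            resp.setHeader("Content-Disposition", "attachment; filename=" + f.getName());
            // Deliberately no Content-Length header: the size read at request time
            // could still be the truncated zero-byte stub before the restore completes.
            try (InputStream in = new FileInputStream(f);
                 OutputStream out = resp.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }
    }

    Without an explicit Content-Length, the container can fall back to chunked transfer encoding, so the response is not tied to whatever stub size was on disk when the request arrived.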

  • Auto remove of log files on the client-side is not working

    Hi,
    I have a setup for one-to-one client/server replication database. Everything is replicated ok.
    But on the client side, I see the log.00000000xx files are not being removed at all,
    while the server has only the last 2 log.00000000xx files left. But if I switch the roles of the client/server,
    the new server will eventually remove the unused log.00000000xx files and be left with the last two log files.
    Both client and server database environment setup has called dbenvp->log_set_config(dbenvp, DB_LOG_AUTO_REMOVE, 1).
    Is there any additional setting for the client-side to auto remove the unused log files?
    Thanks,
    Sandra

    Hi.
    First, what version are you running? We created a test to confirm that this feature is
    working as expected on both a master and a client site. What flags do you have set
    for replication? I think we need to have you run with replication verbose messages
    set on the client site and possibly other diagnostics in order to determine what is different
    about your setup. We should take that level offline. Verbose messages can generate
    a large amount of output.
    You can contact me at the typical [email protected] and we'll move it
    forward that way. Thanks.
    Sue LoVerso
    Oracle

  • Files are offline, now encoder won't work

    I have been working on a slideshow in Premiere Pro CS4. I saved all my files, went over to After Effects for a day, and came back to Premiere to find all of my files were offline. I tried to reconnect them and it said the files are too big (something like that), so I deleted all of my projects and started over. Now I've tried to export one simple photo for a slideshow and it does not show up in Adobe Media Encoder. I'm not doing anything crazy here. Oh, the pain.
    Any help is appreciated.

    OK, after you Scale your image, do a Render. Remember, previewing in the Program Monitor is just an emulation. To view for critical grading, a calibrated CRT TV monitor will be better. However, you can set your Program Monitor to display 100% (not Fit), and Display Quality to Highest.
    Also be aware that the scaling algorithms in PrPro are not as good as those in Photoshop and offer far less control over the scaling. I do all of my resizing in PS, prior to Import into PrPro, and establish the exact size that I will need, usually the exact Frame Size as my Project, or calculated to the exact size that I will need, if I must pan on a zoomed out image. This ARTICLE will give you some tips on resizing in PS.
    Hope that this helps on the quality issue, and good luck,
    Hunt

  • FCC receiver file adapter new line 'nl' is not working

    Hi Experts,
    I am doing an IDoc to file scenario; I have to create a text file in which each line will have one record.
    I am using:
    Recordset Structure -->DeliveryRecords
    DeliveryRecords.addHeaderLine 0
    DeliveryRecords.fieldFixedLengths 4,25,3,10,10,10,8,18,40,10,13,15,10,4
    DeliveryRecords.fieldSeparator '0x09'
    DeliveryRecords.endSeparator  'nl'
    but the new line is not working; everything is coming out on the same line. I have also tried '0x0A', and this is also not working.
    Please suggest what can be done.

    Hi
    You have to use either fieldFixedLengths or fieldSeparator.
    You should not mix them together.
    DeliveryRecords.addHeaderLine 0
    DeliveryRecords.fieldNames aa,bb,cc,dd,ee,ff,gg,hh,ii,jj,kk,ll,mm,nn
    DeliveryRecords.fieldFixedLengths 4,25,3,10,10,10,8,18,40,10,13,15,10,4
                       or
    DeliveryRecords.addHeaderLine 0
    DeliveryRecords.fieldNames aa,bb,cc,dd,ee,ff,gg,hh,ii,jj,kk,ll,mm,nn
    DeliveryRecords.fieldSeparator '0x09'
    DeliveryRecords.endSeparator 'nl'
    http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/content.htm
    /people/arpit.seth/blog/2005/06/02/file-receiver-with-content-conversion

  • I can't get the file attachment in a web form to work

    I have a web form made in Business Catalyst that I'm having some problems with. I have added a file attachment option, but I can't get this to work properly.
    When a user chooses a file and sends the form, the message that is being sent includes the name of the file that was attached, but not the file itself! What am I doing wrong?
    This is the web form (in Norwegian, but you get the idea of where the file upload is). In this form, the file "produktark_plusstjenester.pdf" has been attached.
    The e-mail that is being sent now looks like this (I have removed the private information). But as you can see, it mentions the file produktark_plusstjenester.pdf (94,45 kb), but it is not attached in the e-mail itself.
    Hope someone can clarify this for me

    Files uploaded through web forms are attached to the case. Go to the case in the admin and you can retrieve the file.

  • Can I monitor a Log file using EMGC 10.2.0.2?

    Hi,
    I am thinking of monitoring my web application log file using EMGC by creating a generic service; is that possible? Right now we are using some shell scripts to do that, but it is a bit difficult to maintain all these shell scripts on each of the hosts. Is there any built-in mechanism that enables me to monitor the log file so that, when a particular pattern matches, I can send an email notification to the concerned people, say the application admins? If there is no out-of-the-box option for this, do we have plug-ins to do it? Please let me know the possibility of implementing this using EMGC or extensibility plug-ins.
    Ashok Chava.

    Hi,
    I have used the host metric "Log File Pattern Matched Line Count" to monitor the log files, and below is the pattern I have defined for the log file. But I could not find any alerts even though there are many such exceptions in the log file matching the pattern given in EMGC.
    /u01/app/oracle/product/IAS904/sysman/log/emias.log;%oracle.sysman.emSDK.util.jdk.EMException;%
    I have even added the log file to the agent_home/sysman/config/lfm_ifiles file as given in the documentation, but I could not see any alerts as expected. Am I doing anything wrong in my setup?
    Please let me know.
    Thanks,
    Ashok Chava

  • Parsing a log file on Weblogic

    Hi!
    I'd like to know how to get started on parsing a log file present in the default directory of Weblogic (ver 6.1 to be precise).
    I thought of using regular expressions with java.util.regex, but that is supported from JDK 1.4 onwards, whereas WL 6.1 supports JDK 1.3.
    If you can also provide a code template for the same, that would be nice.
    Thanks in advance,
    Deepthy.

    uncle_alice wrote:
    String regex = "([^\"\\\\]++|\\\\.)++"
    The trick is to match anything except a quotation mark or a backslash, OR match a backslash followed by anything (because the backslash is usually used to escape other characters as well, including backslashes).
    Superb! Thanks! I have to admit I've never used the ++ before (only the greedy quantifiers), but that's the thing I was looking for.
    Just for completeness, this is the whole thing that's able to parse a log line:
    {code}
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LogParser {
        private static final String NOSPACE_PARAM = "([^ ]++)";
        private static final String DATE_PARAM = "([^\\]]++)";
        private static final String ESCAPED_PARAM = "((?:[^\"\\\\]++|\\\\.)++)";
        private static final String PATTERN_STRING = NOSPACE_PARAM
                + " " + NOSPACE_PARAM
                + " " + NOSPACE_PARAM
                + " \\[" + DATE_PARAM + "\\]"
                + " \"" + ESCAPED_PARAM + "\""
                + " " + NOSPACE_PARAM
                + " " + NOSPACE_PARAM
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " " + NOSPACE_PARAM
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\"";
        private static final Pattern PATTERN = Pattern.compile(PATTERN_STRING);

        // Returns the captured fields, or null if the line does not match.
        public static String[] parse(String line) {
            Matcher m = PATTERN.matcher(line);
            if (m.matches()) {
                String[] result = new String[m.groupCount()];
                for (int i = 0; i < m.groupCount();) {
                    result[i] = m.group(++i);
                }
                return result;
            }
            return null;
        }
    }
    {code}
    Any idea about the efficiency of this thing?

  • Log file sync question

    Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
    http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
    The question I have relates to the stated breakdown of 'log file sync' wait event:
    1. Wakeup LGWR if idle
    2. LGWR gathers the redo to be written and issue the I/O
    3. Time for the log write I/O to complete
    4. LGWR I/O post processing
    5. LGWR posting the foreground/user session that the write has completed
    6. Foreground/user session wakeup
    Since the note says that the 'redo write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU-related work in steps 1, 4, 5 and 6 (or to waiting on the CPU run queue).
    Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
    However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X', amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write for the commit of transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
    So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say, 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in its work.
    Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms, so this still leaves 10ms (34 - 12 - 12) that can only be accounted for by CPU usage.
    Clearly, my analysis contains a lot of conjecture, hence this note.
    Can anybody point me in the direction of some facts?

    Tony Hasler wrote:
    > Can anybody point me in the direction of some facts?
    It depends on what you mean by facts - presumably only the people who wrote the code know what really happens; the rest of us have to guess.
    You're right about point 1 in the MOS note: it should include "or wait for current lgwr write and posts to complete".
    This means, of course, that your session could see its "log file sync" taking twice the "redo write time" because it posted lgwr just after lgwr has started to write - so you have to wait two write and post cycles. Generally the statistical effects will reduce this extreme case.
    You've been pointed to the two best bits of advice on the internet: As Kevin points out, if you have lgwr posting a lot of processes in one go it may stall as they wake up, so the batch of waiting processes has to wait extra time; and as Riyaj points out - there's always dtrace (et al.) if you want to see what's really happening. (Tanel has some similar notes, I think, on LFS).
    If you're stuck with Oracle diagnostics only then:
    redo size / redo synch writes for sessions will tell you the typical "commit size"
    (redo size + redo wastage) / redo writes for lgwr will tell you the typical redo write size
    If you have a significant number of small "commit sizes" per write (more than the CPU count, say) then you may be looking at Kevin's storm.
    Watch out for a small number of sessions with large commit sizes running in parallel with a large number of sessions with small commit sizes - this could make all the "small" processes run at the speed of the "large" processes.
    It's always worth looking at the event histogram for the critical wait events to see if their patterns offer any insights.
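    To put rough numbers on those two ratios (entirely made-up figures, just to show the arithmetic):

    public class RedoRatios {
        public static void main(String[] args) {
            // Hypothetical v$sysstat-style figures for an interval
            double redoSize        = 100_000_000;  // bytes of redo generated
            double redoWastage     =   4_000_000;  // bytes of redo wastage
            double redoSynchWrites =     200_000;  // commits waiting on lgwr
            double redoWrites      =      50_000;  // lgwr write calls

            double commitSize = redoSize / redoSynchWrites;             // 500 bytes per commit
            double writeSize  = (redoSize + redoWastage) / redoWrites;  // 2080 bytes per lgwr write
            System.out.printf("typical commit size: %.0f bytes%n", commitSize);
            System.out.printf("typical redo write size: %.0f bytes%n", writeSize);
        }
    }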
    Regards
    Jonathan Lewis

  • How to recover from one corrupted redo log file in NOARCHIVE mode?

    Oracle 10.2.1.
    The redo log file was corrupted and Oracle can't work.
    When I use STARTUP mount, I got no error msg.
    SQL> startup mount
    ORACLE instance started.
    Total System Global Area 1652555776 bytes
    Fixed Size 1251680 bytes
    Variable Size 301991584 bytes
    Database Buffers 1342177280 bytes
    Redo Buffers 7135232 bytes
    Database mounted.
    But some applications which depend on Oracle can't be started.
    So I tried STARTUP OPEN, but I got an error msg.
    SQL> startup open
    ORACLE instance started.
    Total System Global Area 1652555776 bytes
    Fixed Size 1251680 bytes
    Variable Size 301991584 bytes
    Database Buffers 1342177280 bytes
    Redo Buffers 7135232 bytes
    Database mounted.
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 497019 change 42069302 time 11/07/2007
    23:43:09
    ORA-00312: online log 4 thread 1:
    'G:\ORACLE\PRODUCT\10.2.0\ORADATA\NMDATA\REDO04.LOG'
    So, how can I restore and recover my database?
    If use RMAN, how to do that?
    Any help will be appreciated.
    Thanks.

    Hi, Yingkuan,
    Thanks for the help.
    Actually, I have 10 redo log files, and all of them are here.
    I tried your suggestion:
    alter database clear unarchived logfile group 4;
    The error msg I got is the same as before:
    SQL> alter database clear unarchived logfile group 4;
    alter database clear unarchived logfile group 4
    ERROR at line 1:
    ORA-01624: log 4 needed for crash recovery of instance nmdata (thread 1)
    ORA-00312: online log 4 thread 1:
    'G:\ORACLE\PRODUCT\10.2.0\ORADATA\NMDATA\REDO04.LOG'
    Compared to losing all the data, it is OK for me to lose some of it.
    I have more than 1 TB of data stored and 99.9% of it is raster images.
    Loading this data was the headache. If I can save it, I can bear the loss.
    I want to grasp at the last straw.
    But I don't know how to set the parameter: _allow_resetlogs_corruption
    I got the error msg:
    SQL> set _allow_resetlogs_corruption=true;
    SP2-0735: unknown SET option beginning "_allow_res..."
    I have run the command:
    Recover database until cancel
    Alter database open resetlogs
    The error msg I got is the following:
    SQL> recover database until cancel
    ORA-00279: change 41902930 generated at 11/05/2007 22:01:48 needed for thread 1
    ORA-00289: suggestion :
    D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\NMDATA\ARCHIVELOG\2007_11_09\O1_MF_
    1_1274_%U_.ARC
    ORA-00280: change 41902930 for thread 1 is in sequence #1274
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    cancel
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\NMDATA\SYSTEM01.DBF'
    ORA-01112: media recovery not started
    SQL>
    From the log file, I got the following:
    ALTER DATABASE RECOVER database until cancel
    Fri Nov 09 00:12:48 2007
    Media Recovery Start
    parallel recovery started with 2 processes
    ORA-279 signalled during: ALTER DATABASE RECOVER database until cancel ...
    Fri Nov 09 00:13:20 2007
    ALTER DATABASE RECOVER CANCEL
    Fri Nov 09 00:13:21 2007
    ORA-1547 signalled during: ALTER DATABASE RECOVER CANCEL ...
    Fri Nov 09 00:13:21 2007
    ALTER DATABASE RECOVER CANCEL
    ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL ...
    Thank you very much, and I am looking forward to your follow-up input.

  • Director 11 line.count problem?

    Hi all. My name is Natassa and this is my first time here!
    I am not new to Director, but I can't figure out the following!
    There seems to be something wrong when working with text members regarding the line.count property. In our project, I parse through XML files and store data in text members which are placed on stage, and run a mouseOver script using pointToLine(the mouseLoc) in order to change the color of the "active" line of the text member.
    The XML files are all UTF-8 encoded, since we have English, Greek and French text.
    I use a repeat loop to fill a text member with some titles.
    I have tried everything I could think of and could not solve my problem. So I thought I should try something very basic and figure out what is going on.
    Nothing seems to be working!
    I have run the following basic scripts, with the following output:
    Example A
    script:
    --I use numToChar(940), which is a Greek character, since in our project we have English, Greek and French, all UTF-8 encoded
    member("myText").text = ""
    repeat with i = 1 to 300
      myString = " line " & numToChar(940) & numToChar(940) & numToChar(940) & numToChar(940) & i
      member("myText").line[ i ] = myString
      put i, member("myText").line[ i ], member("myText").line.count
    end repeat
    Message window output:
    -- 1 " line άάάά1" 1
    -- 2 " line άάάά2" 2
    . (everything ok so far)
    -- 232 " line άάάά232" 232
    -- 233 " line άάάά233" 233
    -- 234 " line άάάά234" 208
    -- 235 "" 235 HERE IS THE PROBLEM - NO STRING STORED IN THE TEXT MEMBER AFTER THIS LINE
    -- 236 "" 236
    -- 237 "" 237
    -- 299 "" 299
    -- 300 "" 300
    put member("myText").line.count
    -- 300
    put member("myText").line[300]
    Example B
    script:
    --I use numToChar(940), which is a Greek character, since in our project we have English, Greek and French, all UTF-8 encoded
    member("myText").text = ""
    repeat with i = 1 to 300
      myString = " line " & numToChar(940) & numToChar(940) & numToChar(940) & numToChar(940) & i
      member("myText").setContentsAfter(RETURN & myString)
      put i, member("myText").line[ i ], member("myText").line.count
    end repeat
    Message window output:
    -- 1 " line άάάά1" 2
    -- 2 " line άάάά2" 3
    . (everything ok so far)
    -- 232 " line άάάά232" 233
    -- 233 " line άάάά233" 234
    -- 234 " line άάάά234" 209 HERE IS THE PROBLEM - line.count DOES NOT WORK CORRECTLY
    -- 235 " line άάάά235" 210
    -- 299 " line άάάά299" 274
    -- 300 " line άάάά300" 275
    put member("clippings").line.count
    -- 275
    -- But if I copy the text of the member and paste it into a text editor, there are 300 lines!!!
    put member("clippings").line[300]
    -- " line άάάά300"
    As a result, the pointToLine(the mouseLoc) is not working well! The mouse is over a line and a different line has the "active" color...
    As you can see, the problem occurs after line 234!! :-|
    Does anyone else have the same problem? I have a deadline and can't figure out what to do!!
    p.s. I updated a Director MX file to a Director 11 file. A text member (containing Greek and Latin characters) opened in MX has a line.count of 384 (correct), and the same member opened in 11 gave a line.count of 275!!!!! Again the same problem with pointToLine()...

    > Example A
    > -- 233 " line ????233" 233
    > -- 234 " line ????234" 208
    > -- 235 "" 235 HERE IS THE PROBLEM - NO STRING STORED IN THE TEXT MEMBER AFTER THIS LINE
    FWIW: I can replicate this problem, and it doesn't occur in DMX2004, so it's a new bug. It seems that line 234 gets the line number wrong (208) and it all turns to custard after that.
    > Example B
    > -- 232 " line ????232" 233
    > -- 233 " line ????233" 234
    > -- 234 " line ????234" 209 HERE IS THE PROBLEM - line.count DOES NOT WORK CORRECTLY
    > -- 235 " line ????235" 210
    And I can replicate this too.
    However, I can "fix" your second example by referencing member.text.line instead of member.line. See the following alteration to your original handler:
    on test2
      member("myText").text = ""
      c = numToChar(940)
      s = "line " & c & c & c & c
      repeat with i = 1 to 300
        member("myText").setContentsAfter(RETURN & s & i)
        put i, member("myText").text.line[ i ], member("myText").text.line.count
      end repeat
    end
    I can't find a fix for your first example in a similar fashion. It's not permitted to execute member("myText").text.line[ i ] = "string".
    > As a result, the pointToLine(the mouseLoc) is not working well! The mouse is over a line and a different line has the "active" color...
    > p.s. I updated a Director MX file to a Director 11 file. A text member (containing Greek and Latin characters) opened in MX has a line.count of 384 (correct) and the same member opened in 11 gave a line.count of 275!!!!! Again the same problem with pointToLine()...
    Try measuring member.text.line.count instead of member.line.count.
    I will try to replicate the pointToLine() issue and see if I can find a workaround.
