Inconsistent FileReader behaviour during heavy load

I am using a StreamTokenizer / FileReader to parse the contents of files and have noticed inconsistent results similar to the ones we get during concurrency problems. Only in my case it is a single thread calling the parser method.
I have run the parser 672 times over the same file, and I have a counter that counts the parsed words. Although I would expect the counter to always show the same number, surprisingly it sometimes shows a smaller one!
---1---[227]
---1---[227]
---1---[227]
---1---[227]
---1---[190] <====
---1---[227]
---1---[227]
---1---[227]
---1---[227]
---1---[227]
---1---[227]
---1---[227]
I have the feeling the StreamTokenizer may return TT_EOF prematurely. In any case, I need a deterministic way of processing the contents of as many files as needed. How can I make sure my code will always process the same number of tokens for the same file?
Here is the method that produces the inconsistent counts:
    public HashMap<String,ArrayList<Long>> parseDocument( Reader       _reader
                                                        , int          _pivDocID
                                                        , yxStopList   _yxStpLst
                                                        , String       _filename )
            throws   FileNotFoundException
                   , IOException
    {
        HashMap<String,ArrayList<Long>> _postingLists = new HashMap<String,ArrayList<Long>>(HASHMAP_INITIAL_SIZE);
        ArrayList<Long>                 _offsets      = null;
        int                             _currOffset   = 0;
        int                             _numericTokens= 0;
        String                          _token        = null;
        BufferedReader                  _buffReader   = new BufferedReader(_reader);
        StreamTokenizer                 _st           = new StreamTokenizer(_buffReader);
        _st.resetSyntax();
        _st.ordinaryChars(0,255);
        _st.eolIsSignificant(true);
        _st.lowerCaseMode(true);
        _st.whitespaceChars(',', ',' ); // COMMA
        _st.whitespaceChars(' ', ' ' ); // SPACE
        _st.whitespaceChars('.', '.' ); // PERIOD
        _st.whitespaceChars('\t','\t'); // TAB
        _st.whitespaceChars('\n','\n'); // EOL
        _st.whitespaceChars('\r','\r'); // EOL
        _st.wordChars('a','z');
        _st.wordChars('A','Z');
        _st.wordChars('0','9');
        _st.wordChars('_','_');
scan:
        while(true) {
            try {
                switch(_st.nextToken()) {
                        case StreamTokenizer.TT_WORD  :
                             _token = _st.sval;
                             if (_token.length() < MINIMUM_ACCEPTABLE_TOKEN_LENGTH) break;
                             if (_token.length() > MAXIMUM_ACCEPTABLE_TOKEN_LENGTH) break;
                             if (_token.matches(".*[^a-zA-Z0-9_].*")) break;
                             if (_token.indexOf("__") > -1) break;
                             if (_token.matches(".*[0-9].*") && _token.matches(".*[a-zA-Z_].*")) break;
                             if (!_token.matches("[a-zA-Z_]+")) {
                                 if ( !_token.startsWith("0" ) )               break;    //only numbers like: 069 456456 or 004916099113815
                                 if (  _token.startsWith("000") )              break;
                                 if (    _token.length() == 4
                                      && !(    _token.startsWith("19")
                                            || _token.startsWith("20")
                                            || _token.startsWith("21")) )      break;
                                 if (_token.length() > 20)                     break;
                                 _numericTokens++;
                                 if (_numericTokens > MAXIMUM_NUMBERS_PER_DOC) break;    //do not allow too many numeric tokens per document
                             }//end of [IF]
                             if ( _yxStpLst.isStopWord(_token) ) {
                                 yxL.log(6,"[yxParser  --  parseDocument(2)]","","WARNING","REJECTING STOPWORD ["+_token+"] !");
                                 break;
                             }
                             _currOffset++;
                             if ( !_postingLists.containsKey(_token) ) {
                                _offsets = new ArrayList<Long>();
                                _offsets.add(0,(long)_pivDocID);
                                _offsets.add(1,(long)_currOffset);
                                try {
                                   _postingLists.put(_token,_offsets);
                                }catch(OutOfMemoryError e001){
                                   e001.printStackTrace();
                                   yxL.log(2,"[yxParser  --  parseDocument(2)]","","ERROR","OutOfMemory while parsing ["+_filename+"]");
                                   break scan;
                                }
                             }//end of [IF]
                             else {
                                _offsets = _postingLists.get(_token);
                                if (_offsets.size() == MAX_OFFSETS_PER_TOKEN) break;
                                int _prevSumOfOffsets = 0;
                                for ( int i1 = 1;                                                       // ignore i1=0 because i1=0 is the DOCID
                                      i1<_offsets.size();                                               // loop until end of encoded Offsets
                                      i1++ ) _prevSumOfOffsets += _offsets.get(i1);                     // sum all existing encoded offsets
                                _offsets.add(_offsets.size(),(long)(_currOffset - _prevSumOfOffsets));
                                try {
                                   _postingLists.put(_token,_offsets);              // put : replaces existing Key
                                }catch(OutOfMemoryError e001){
                                   e001.printStackTrace();
                                   yxL.log(2,"[yxParser  --  parseDocument(2)]","","ERROR","02 [yxParser] OutOfMemory while parsing ["+_filename+"]");
                                   break scan;
                                 }
                                 yxL.log(6,"[yxParser  --  parseDocument(2)]","","INFO"
                                          ,"02   Updating  ["+_token+"]["+Arrays.toString(_offsets.toArray())+"]");
                             }
                             break;
                        case StreamTokenizer.TT_NUMBER: break;                  // Numbers will be treated as Strings
                        case StreamTokenizer.TT_EOL   : break;                  // EOL
                        case StreamTokenizer.TT_EOF   : break scan;             // EOF
                        default                       :
                             break;                                             // individual 1-char tokens will be ignored
                }//end of [SWITCH]
            }catch (Exception e){e.printStackTrace();}
        }//end of [WHILE]
        _buffReader.close();
        _reader.close();
        _token      = null;
        _st         = null;
        _buffReader = null;
        _offsets    = null;
        int _tokensFound = _postingLists.size();
        if (_tokensFound < 1) {
            yxL.log(3,"[yxParser  --  parseDocument(2)]","","WARNING","Number of tokens found = ["+_postingLists.size()+"]["+_filename+"]");
            if (sh.isIndexable(_filename)) {
                _offsets = new ArrayList<Long>();
                _offsets.add(0,(long)_pivDocID);
                _offsets.add(1,1L);
                try {
                   _postingLists.put(sh.MANUAL_INDEX_FILE_START,_offsets);
                }catch(OutOfMemoryError e001){
                   e001.printStackTrace();
                   yxL.log(2,"[yxParser  --  parseDocument(2)]","","ERROR","OutOfMemory2 while parsing ["+_filename+"]");
                 }
            }//end of [IF]
        }
        else {
            yxL.log(4,"[yxParser  --  parseDocument(2)]","","INFO","Number of tokens found = ["+_postingLists.size()+"]");
        }
        COUNT_TOTAL_WORDS += _postingLists.size();
System.out.println("---1---["+_postingLists.size()+"]");
        yxL.log(6,"[yxParser  --  parseDocument(2)]","END");
        return _postingLists;
    }

Yes, there is a GUI involved. The GUI participates in this only by triggering the parsing process through the click of a button. The parser receives its input from a Vector which contains fully qualified filenames.
Each filename is used to instantiate a FileReader object, which in turn is passed on to the parser method shown above. The inconsistency is located in that method; I have already searched for days to reach this conclusion. I was taking for granted that the stream would be read until EOF is reached, but this is not always the case. The method shown above is re-run over 600 times, and about 2% of the time it returns 10-20 fewer words!
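For reference, a minimal sketch of such a re-run harness (not the actual GUI code; the yxParser class name and the no-argument constructors below are assumptions based on the log tags in the method above):

    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.HashMap;

    public class ParserConsistencyCheck {
        public static void main(String[] args) throws Exception {
            String file = args[0];                      // fully qualified filename
            yxParser   parser   = new yxParser();       // hypothetical constructor
            yxStopList stopList = new yxStopList();     // hypothetical constructor
            int expected = -1;
            for (int run = 1; run <= 672; run++) {
                // parseDocument(...) closes the Reader itself (see the method above)
                HashMap<String, ArrayList<Long>> postings =
                        parser.parseDocument(new FileReader(file), run, stopList, file);
                int count = postings.size();            // same counter the method prints
                if (expected < 0) expected = count;     // first run defines the baseline
                if (count != expected) {
                    System.out.println("run " + run + ": got " + count
                                       + " tokens, expected " + expected);
                }
            }
        }
    }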

Similar Messages

  • Server hangs or freezes during heavy load

    During peak times of the day, especially during heavy load on the Calendar Server,
    the application seems to hang. The client-side application will not respond on
    the user's desktop, and uni* commands on the server itself respond considerably
    slowly.
    There are two parameters in the server configuration file that are strongly
    believed to be a trigger of server hangs or freezes in large deployments and/or
    busy servers. Here is a description of the problem:
    Large deployments tend to be 3000+ users per node. This could be a single or
    multi-node environment.
    A lock manager fix was implemented in 4.0 to correct a problem that was
    found in 3.51 where the server would hang. At that time, the parameters called
    read/writelocktimeouts were introduced as a failover mechanism in case the
    database was not available, which would then trigger the client process to
    disconnect rather than hang the whole server.
    These timeouts effectively will terminate a process whose read or write exceeds
    the specified periods. The default of 20 seconds is quite a large amount of time;
    however, it is not totally unlikely that such a value could be met on a
    very busy system. If this is the case, and there is some relation between a
    process being terminated by one of these timeouts and subsequent system
    instability, then the "solution" would not be to extend the values of the
    timeouts but rather to exclude them. This way, it will ensure that no process is
    terminated this way and therefore the process would be allowed to continue until
    it had completed its job.
    The timeouts were not removed from the product, but under normal circumstances
    they probably won't be needed anymore anyhow. It seems that on a busy calendar
    server, setting the db timeout alarms may actually trigger the server to freeze.
    Below are some examples of errors that appear in the log files which show
    that the database is no longer accepting client requests:
    db_VISTA ERROR -920 -> cst_d_open: d_open
    db_SchedBaseOpen: unable to open database
    probable cause: unilckd is down or "/users/unison/tmp/unisonlckm"
    was removed
    uniengd: database lock timeout
    ITEM: "NA,NA" <0,0>
    CLIENT: "unises", "A.02.80"
    INET-NAME:
    INET-ADDR:
    CALL: "SessionsInfoGet"
    To make the fix:
    1. Using your favorite editor, edit the /users/unison/misc/unison.ini file.
       In the following section you will see these two parameters:
       [ENG]
       writelocktimeout = 20
       readlocktimeout = 20
    2. Place a "#" sign (or the appropriate comment symbol for your OS) in front of
       these two lines and save the file.
    3. The server will now have to be restarted in order for the changes to take
       effect.
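    For illustration only (a sketch assuming "#" is the comment character on this platform, as step 2 suggests), the edited section would then read:
       [ENG]
       # writelocktimeout = 20
       # readlocktimeout = 20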

    This looks similar to what I'm seeing.
    DPM 2010, there's one backup set (for me a file server disk) where every time I try to run the initial replica on it the server hangs and needs to be rebooted by iLO. It doesn't just die suddenly; first the data stream on the backup stops, then the OS becomes
    less responsive, but there is no resource issue. Trying to open Event Viewer will cause a few things to lock up, then over a few minutes the server is completely frozen, as if the disk drives have been locked.
    Suspecting McAfee, I added in all the exclusions; that didn't help, so I added the process exclusions, which are done by setting dpmra and csc to low risk, and that didn't help either. I could reproduce it just by kicking off a backup for this one file server's
    drive, so it's easy to test with.
    Tonight, I had some permissions in EPO to let me stop the scanning completely and disable the on-access scan and for the first time it worked!
    There is definitely an issue between DPM and McAfee beyond what is on MS's web page for AV checks.
    I don't have a workaround yet other than stopping the AV completely... Something to follow up on next week. For the moment I made some progress though.

  • Broken TCP stack in latest kernel when under heavy load

    I'm running an Arch box with a decent amount of HTTP traffic. After upgrading to the latest kernel I've seen that packets are sent from the wrong source and destination address. This only applies during heavy load (100+ requests per second). tcpdump shows the following:
    18:52:58.512573 IP 0.0.0.0.80 > 0.0.0.0.4316: Flags [FP.], seq 0, ack 1, win 14400, length 0
    18:52:58.512600 IP 0.0.0.0.80 > 0.0.0.0.56546: Flags [FP.], seq 0, ack 1, win 14400, length 0
    18:52:58.512621 IP 0.0.0.0.80 > 0.0.0.0.4535: Flags [FP.], seq 0, ack 1, win 14600, length 0
    18:52:58.512641 IP 0.0.0.0.80 > 0.0.0.0.3528: Flags [FP.], seq 0, ack 1, win 14600, length 0
    18:52:58.512662 IP 0.0.0.0.80 > 0.0.0.0.4509: Flags [FP.], seq 0, ack 1, win 14400, length 0
    18:52:58.512682 IP 0.0.0.0.80 > 0.0.0.0.65040: Flags [FP.], seq 0, ack 1, win 14600, length 0
    18:52:58.512702 IP 0.0.0.0.80 > 0.0.0.0.2455: Flags [FP.], seq 0, ack 1, win 10240, length 0
    18:52:58.512722 IP 0.0.0.0.80 > 0.0.0.0.16545: Flags [FP.], seq 0:268, ack 1, win 15008, length 268
    18:52:58.519258 IP 0.0.0.0.80 > 0.0.0.0.29802: Flags [FP.], seq 0:268, ack 1, win 980, options [nop,nop,TS val 745514 ecr 1317559555], length 268
    18:52:58.565907 IP 0.0.0.0.80 > 0.0.0.0.32376: Flags [FP.], seq 0, ack 1, win 14400, length 0
    18:52:58.619241 IP 0.0.0.0.80 > 0.0.0.0.50493: Flags [FP.], seq 0:268, ack 1, win 11256, options [nop,nop,TS val 745544 ecr 9539361], length 268
    18:52:58.805927 IP 0.0.0.0.80 > 0.0.0.0.20852: Flags [FP.], seq 3025419976:3025420244, ack 3037671074, win 967, options [nop,nop,TS val 745600 ecr 6445640], length 268
    18:52:58.805953 IP 0.0.0.0.80 > 0.0.0.0.65025: Flags [FP.], seq 1663827778:1663828046, ack 2127675352, win 707, options [nop,nop,TS val 745600 ecr 457812708], length 268
    18:52:58.845918 IP 0.0.0.0.80 > 0.0.0.0.2217: Flags [FP.], seq 0:268, ack 1, win 707, options [nop,nop,TS val 745612 ecr 546643], length 268
    18:52:59.099245 IP 0.0.0.0.80 > 0.0.0.0.5112: Flags [FP.], seq 0:268, ack 1, win 15008, length 268
    18:52:59.152582 IP 0.0.0.0.80 > 0.0.0.0.1175: Flags [FP.], seq 0:268, ack 1, win 15008, length 268
    18:52:59.232612 IP 0.0.0.0.80 > 0.0.0.0.47217: Flags [FP.], seq 684621876:684622144, ack 3544859356, win 11256, length 268
    18:52:59.659258 IP 0.0.0.0.80 > 0.0.0.0.3098: Flags [FP.], seq 2105858244:2105858512, ack 3896053916, win 980, options [nop,nop,TS val 745856 ecr 52041], length 268
    18:52:59.659290 IP 0.0.0.0.80 > 0.0.0.0.3099: Flags [FP.], seq 18772067:18772335, ack 2568646283, win 980, options [nop,nop,TS val 745856 ecr 52041], length 268
    18:52:59.759244 IP 0.0.0.0.80 > 0.0.0.0.18780: Flags [FP.], seq 0:268, ack 1, win 707, options [nop,nop,TS val 745886 ecr 168876], length 268
    18:52:59.845907 IP 0.0.0.0.80 > 0.0.0.0.58449: Flags [FP.], seq 0, ack 1, win 980, options [nop,nop,TS val 745912 ecr 528058426], length 0
    18:52:59.925936 IP 0.0.0.0.80 > 0.0.0.0.65137: Flags [FP.], seq 0:268, ack 1, win 15008, length 268
    18:52:59.979497 IP 0.0.0.0.80 > 0.0.0.0.2920: Flags [FP.], seq 0:268, ack 1, win 980, options [nop,nop,TS val 745952 ecr 18879], length 268
    18:52:59.979527 IP 0.0.0.0.80 > 0.0.0.0.2922: Flags [FP.], seq 0:268, ack 1, win 980, options [nop,nop,TS val 745952 ecr 18879], length 268
    18:52:59.979553 IP 0.0.0.0.80 > 0.0.0.0.2940: Flags [FP.], seq 0:268, ack 1, win 980, options [nop,nop,TS val 745952 ecr 18879], length 268
    Source and destination ports are correctly set. Wireshark shows the correct HTML inside the packets that are returned to 0.0.0.0. The web server log also looks normal; the correct IP address is displayed and logged as a successful request.
    When dropping incoming traffic on port 80 on eth0 everything works as expected (when requesting the server on eth1, which otherwise fails).
    I'm running on "Linux srv 3.0-ARCH #1 SMP PREEMPT Wed Oct 19 12:14:48 UTC 2011 i686" which is the latest kernel in the repos. When booting the fallback image this problem does not exist, all packets are correctly addressed no matter how much load I put on the server.
    Does anyone else have this problem?
    Edit:
    Running lighttpd 1.4.29. No tweaked kernel/TCP parameters whatsoever.
    Last edited by nullvoid (2011-10-29 17:19:57)

    Did a full reinstall of Arch on another machine and the problem still persists. Tried with Apache and Nginx, same behaviour as with Lighttpd. Could anyone else using an Arch box under heavy load see if there's activity from 0.0.0.0?
    Hint:
    # tcpdump -n host 0.0.0.0
    I'll do a bug report upstream later today.

  • Oracle "IO Error" during SELECT query under heavy load

    We're experiencing a strange connection break during SELECT queries under heavy load.
    Platform Details: Solaris, Oracle 11G, JDK 1.6, 
    Application: Spring + Hibernate (C3p0 connection pooling)
    Exact error messages from a lengthy stack trace are mentioned below:
        2013/06/05 18:49:02 | Caused by: org.springframework.dao.DataAccessResourceFailureException: Hibernate operation: could not execute query; SQL [SQL Ommitted]; IO Error: No such file or directory;      nested exception is java.sql.SQLException: IO Error: No such file or directory 
        2013/06/05 18:49:02 | Caused by: java.sql.SQLException: IO Error: No such file or directory
        2013/06/05 18:49:02 |    at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1091)
        2013/06/05 18:49:02 |    at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:369)
        2013/06/05 18:49:02 |    at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:273)
        2013/06/05 18:49:02 |    at com.mchange.v2.c3p0.impl.NewProxyResultSet.next(NewProxyResultSet.java:2706)
        2013/06/05 18:49:02 |    at org.hibernate.loader.Loader.doQuery(Loader.java:697)
        2013/06/05 18:49:02 | Caused by: java.net.SocketException: No such file or directory
        2013/06/05 18:49:02 |    at java.net.SocketInputStream.socketRead0(Native Method)
        2013/06/05 18:49:02 |    at java.net.SocketInputStream.read(SocketInputStream.java:129)
        2013/06/05 18:49:02 |    at oracle.net.ns.Packet.receive(Packet.java:282)
    We've started looking at TCP connection settings (Max. TCP connections allowed, Max File descriptors allowed for socket connections at system level). Anything we're missing?
    Why "IO Error: No such file or directory"? Any clue?

    user2951561 wrote:
    That's a better answer indeed.
    I can refine my question if it does not provide you enough information.
    The stack trace I displayed here states that the Oracle JDBC driver has found the connection to be closed, interrupted, etc.
    The application behaves perfectly under normal load but blows up as soon as we reach 3000 concurrent sessions. No firewall is breaking connections; the select query that we observe this behavior for is part of a larger workflow that writes data, updates some, and deletes some as well in different tables. Then we see the above stack trace for the select query.
    I am trying to explore possible options to investigate. One I mentioned is related to Solaris file descriptors. Could it be the database itself?
    Any possible course of action for investigation? Help is much appreciated.
    Oracle errors get reported with error code & message; like ORA-01555 Snapshot Too Old; which is not present in your post.
    You indicated that Connection Pooling is used.
    Is there some (artificial) limit within the application that falls off the cliff at 3000 sessions?
    Oracle does not know or care about the "flavor" of client connection. It treats jdbc the same as OCI or ODBC connections.
    Is OS limited to fixed number of open file handles?
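    As a quick check (standard shell commands; the resource-control name below assumes Solaris 10), the current descriptor limits can be inspected with:
       # soft and hard file descriptor limits for the current shell
       ulimit -n
       ulimit -Hn
       # per-process limit via Solaris resource controls (replace <pid> with the JVM's process id)
       prctl -n process.max-file-descriptor -i process <pid>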

  • Due to heavy load, the latest workflow operation has been queued. It will attempt to resume at a later time.

    I have SharePoint 2010 Enterprise running SP1. Configuration is one SharePoint server in the farm and a SQL 2008 R2 database for the backend. Our user environment is 80 users with very little load on the SharePoint server. I have the workflow timer
    set to 1 minute.
    I have a SPD workflow that starts manually on a form library. Whenever I publish a new version of the workflow, the next time I start the workflow it takes the full minute to finish. If I click on the workflow status before it finishes, I see the message
    "Due to heavy load, the latest workflow operation has been queued. It will
    attempt to resume at a later time.". After the minute completes the workflow finishes.
    Here's the weird thing: the next time I start the workflow, it runs in a couple of seconds - almost instantly. I've tried up to 15 times after the initial publishing and everything seems to work fine on initiation.
    Well, that would be fine for me; however, I intermittently get this heavy load message during task processes that are running inside the workflow. It's probably less than 5% of the time. It's really frustrating though, so I would appreciate some help. I have looked
    online and haven't found anything that describes my situation.
    Thank you in advance!

    Hi,
    Before considering an additional hardware try to change following configurations for workflow:
    Increase Throttle Size
    Increase Batch Size
    Time Out
    Workflow Timer Interval
    AutoCleanUpDays
    Increase Throttle Size
    The Workflow throttle setting controls how many Workflows can be processing at any one time on the entire server farm. Increasing the throttle raises the number of Workflows that can be executing or initiated at a time.
    Use below PowerShell command to get the current Throttle Size:
    Get-SPFarmConfig | Select WorkflowPostponeThreshold
    Use below PowerShell command to set new Throttle Size:
    Set-SPFarmConfig -WorkflowPostponeThreshold 100
    Increase Batch Size
    This is the size that determines the number of events processed for a single Workflow instance. The default value is 100, but it can range from 1 to any number.
    Use below PowerShell command to get the current Batch Size:
    Get-SPFarmConfig | Select WorkflowBatchSize
    Use below PowerShell command to set new Batch Size:
    Set-SPFarmConfig -WorkflowBatchSize 200
    Time Out
    This decides the time out of the Workflow event. The default value is 5 and can be any integer. The time is in minutes.
    Use below STSADM command to get the current Time Out value:
    stsadm -o getproperty -pn workflow-eventdelivery-timeout
    Use below STSADM command to set the new Time Out value:
    stsadm -o setproperty -pn workflow-eventdelivery-timeout -pv "15"
    Workflow Timer Interval
    This setting is applicable at Web Application level and not the farm level. The workflow timer interval specifies how often the workflow SPTimer job fires to process pending workflow tasks. This interval also represents the granularity of delay timers within
    your workflow. If a timer is set to delay for one minute, but the interval timer fires only every five minutes, the workflow delays for five minutes, not one minute.
    Use below STSADM command to get the current Workflow Timer Interval value:
    stsadm -o getproperty -pn job-workflow -url <Web Application Url>
    Use below STSADM command to set the new Workflow Timer Interval value:
    stsadm -o setproperty -pn job-workflow -pv "Every 10 minutes between 0 and 30" -url <Web Application Url>
    Here is the url for reference :
    http://praveenkasireddy.wordpress.com/2013/06/14/workflow-due-to-heavy-load-the-latest-workflow-operation-has-been-queued-it-will-attempt-to-resume-at-a-later-time/

  • SAP R/3 4.7 EX 2 SR1 - IDES (Unicode), Problem during DB load

    Dear Experts,
    Details of the system:
    Solaris 10, Oracle 9.2.0.7.
    R/3 Enterprise 4.7 X 2 SR1 - IDES (Unicode)
    Please help me in solving this problem. I have received this error during DB load Phase 25/37. The errors are related to a failed code page conversion.
    Message from SAPCLUST.log:
    myCluster (3.5.Imp): 1329: inconsistent field count detected.
    myCluster (3.5.Imp): 1332: nametab says field count (TDESCR) is 305.
    myCluster (3.5.Imp): 1335: alternate nametab says field count (TDESCR) is 311.
    myCluster (3.5.Imp): 1130: unable to retrieve nametab info for logic table BSEG      .
    myCluster (3.5.Imp): 7305: unable to retrieve nametab info for logic table BSEG      .
    myCluster (3.5.Imp): 2090: failed to convert cluster data of cluster item.
    myCluster: RFBLG      *800**0001**0100000000**1995*
    myCluster (3.5.Imp): 319: error during conversion of cluster item.
    myCluster (3.5.Imp): 322: affected physical table is RFBLG.
    (CNV) ERROR: code page conversion failed
                 rc = 2
    ^M
    .--============--
    .^M
    |                              RSCP - Error                            |^M
    | Error from:             Codepage handling (RSCP)                     |^M
    | code:   64  RSCPECALL    Illegal mixture of procedure calls.         |^M
    | Table UMGSETTING of no interest in Unicode system                    |^M
    | module: rscpexcc no:   12 line:   673                    T100: TS007 |^M
    | TSL01: CQ3  p3: UMGSETTING  p4: &                                    |^M
    |----
    |^M
    | Some OS information, which may or may not belong to this error:      |^M
    | errno    17  File exists                                             |^M
    `----
    '^M
    (DB) INFO: disconnected from DB
    Message from SAPPOOL.log:
    failed to read short nametab of table T2200                          (rc=32)conversion failed for row 5836 of table T184L      VARKEY = \343\2
    40\260\343\200\260\343\200\260\343\200\261\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342
    \200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\342\200\240\3
    42\200\240
    (CNV) ERROR: code page conversion failed
                 rc = 2
    ^M
    .--============--
    .^M
    |                              RSCP - Error                            |^M
    | Error from:             Codepage handling (RSCP)                     |^M
    | code:   64  RSCPECALL    Illegal mixture of procedure calls.         |^M
    | Table UMGSETTING of no interest in Unicode system                    |^M
    | module: rscpexcc no:   12 line:   673                    T100: TS007 |^M
    | TSL01: CQ3  p3: UMGSETTING  p4: &                                    |^M
    |----
    |^M
    | Some OS information, which may or may not belong to this error:      |^M
    | errno    17  File exists                                             |^M
    `----
    '^M
    (DB) INFO: disconnected from DB
    /sapmnt/S01/exe/R3load: job finished with 1 error(s)
    /sapmnt/S01/exe/R3load: END OF LOG: 20071123101328
    Thanks for your attention
    Rgds,
    Fausto

    This process can't be done in Unicode. Has to be done in Non-UC.

  • Inconsistent Aggregation Behaviour - Activation Terminated

    Dear team,
    I am facing DSO activation failure in daily process chains.
    Error Msg: Inconsistent Aggregation Behaviour - Activation Terminated.
    Data is coming from 2 data sources into one DSO.
    I deleted the failed requests from the DSO & uploaded them again from PSA and activated them individually...
    But in the next day's load....activation fails again & I observe that yesterday's requests are also in red.
    why does this happen ?
    in dso settings --> Activate Data Automatically is checked.
    please reply if u need any further info, for better understanding of the issue.
    regards
    kv

    Hi,
    Delete yesterday's requests as well (which are uploaded from both the data sources).
    Rerun the delta loads and activate them manually.
    You probably forgot to activate the request yesterday. The step failed in activation, so all the requests which are being activated will turn red.
    Try the above way, it might help you.
    If we check that option, I guess the data will be activated soon after the load is done.
    Not pretty sure about this point. never tried.
    Cheers,
    Srinath.
    Edited by: Srinath Singamsetti on Aug 3, 2009 10:24 AM

  • Session bleed under heavy load...any suggestions?

    hello.
    i'm working on an application that has user sensitive data and we are seeing session bleed under heavy load (ie users reporting seeing other users' data, error reports with missing session values, things along those lines).  the app itself is typical stuff; a user logs in, they see information specific to their user account and do things with it.  some of that information comes from the session.  this all seems to work fine under normal load (100 or fewer users), or with a few users testing, but fails under heavy load (1000+ concurrent users).  we cannot reproduce it locally, nor can we see it when we log into the system ourselves and click around during peak load times.
    here is some more detail.  as i mentioned, we are storing certain user information in the session.  we use an exclusive lock of the session scope to write that info, and a readonly lock of the session scope to read it (i am quadruple checking this now).  this app is running in a multi-instance clustered environment (all on the same server).  CF8 with IIS.  we are using j2ee session management, with sticky sessions and session replication on.  we were seeing the session bleed before the clustering was introduced however...
    one caveat is that a huge number of our users come from behind a proxy  system, meaning they all have the same IP.  i did some searching on this, but could not find any definitive information that it would create a problem with session variables.
    i was wondering if anyone else had seen this kind of problem and/or had any suggestions in dealing with it?
    thanks.

    the jury is still out to a degree, but i think we've identified the culprit(s) of our session bleed for anyone interested.  it boiled down to two problems.
    1.  var scoping issues.  unfortunately this was a fairly old application, written before we strictly employed best practices on var scoping variables within functions in all our cfc's.  we've fixed the bad code and our session bleed problems seem to have stopped.  there is a great utility for checking code for var scoping problems available at: http://varscoper.riaforge.org/
    2.  a misunderstanding of how cflock with a timeout setting (and no throw on error) behaves.  aside from session bleed, it turned out we had another issue in there, which was expected session values missing altogether.  the crux of the problem is that we had set our cflock read/write timeouts to 30 seconds.  under the extremely heavy load, requests were routinely exceeding those timeout thresholds.  the locks were not set to throw on error, so when the timeout threshold was exceeded, the code within the lock ended up just being skipped.  this was leading to missing data in the session. temporarily we've simply increased the timeout setting to a large number, which has fixed our problem.  eventually we'll set these locks to throw on error and handle the exception in a more graceful manner.
    hope this helps someone.

  • Note: Due to heavy load, the latest workflow operation has been queued. It will attempt to resume at a later time

    Dear all,
    sorry for opening another thread on this.
    I think I have a performance issue with workflows attached to document sets in SharePoint. And I say “I think” because people keep telling me that this is the way it just is.
    The user creates a new document set, which triggers a workflow in which the user has to confirm/review/approve a series of tasks. The time it takes from clicking the OK button on those task form to the workflow status moving to the next step is about 4 seconds.
    And visiting that status page within those 4 seconds brings up the infamous “Note: Due to heavy load, the latest workflow operation has been queued. It will attempt to resume at a later time.” message.
    Hitting Refresh in the browser after those 4 seconds will make the new workflow status appear and the red text go away.
    Is that normal? Is that the performance that everyone else is seeing as well?
    I struggle to see why simply moving a workflow from one task to another should take that long on a machine that isn’t doing anything else at the time.
    (1)   
    I have a standalone (non-clustered) SharePoint box, 4 CPUs, 8 GB of memory, more than half of that available, acting as application server and wfe; only the database is on different box.
    (2)   
    The CPU only goes up to 18 or 19%, so CPU does not seem to be the bottleneck. Half the RAM is also still free.
    (3)   
    The workflow is designed with Nintex, and has about 9 flexi and review tasks – the last 2 of them in a loop iterating over typically 3 or 4 items.
    (4)   
    Looking at the logs it looks like the processing in Nintex only takes about 1 second – I don’t know where the other 3 seconds are going.
    (5)   
    There is nothing obvious in the logs.
    (6)   
    We’ve looked at all the “theoretical” improvements around throttling and batch sizes etc. – none of them appeared to make any difference. And the workflow is so small that it looks like my tasks get executed straight away. The problem appears
    to be that the execution takes too long(?) and therefore has not finished by the time the page gets redrawn.
    (7)   
    I am running perfmon and I can e.g. see one(!) workflow being loaded into memory – as expected as I am the only user.
    (8)   
    I am seeing a total of 3(?) SQL queries being executed(?). I get the Bytes Sent/sec spiking at 25K, and Bytes received at 18K. But is this good or bad or a bottleneck?
    Where do I take it from here?
    I have been told that “[…] most customers have no issue with this as they are used to the way SP operates and it can be slow at times.” Is it really that bad?
    If it is worth watching more performance counters then I’d need to know what to compare them to.
    Is there something else I am missing?
    Thanks
    Martin

    Hi,
    Before considering an additional hardware try to change following configurations for workflow:
    Increase Throttle Size
    Increase Batch Size
    Time Out
    Workflow Timer Interval
    AutoCleanUpDays
    Increase Throttle Size
    The Workflow throttle setting controls how many Workflows can be processing at any one time on the entire server farm. Increasing the throttle raises the number of Workflows that can be executing or initiated at a time.
    Use below PowerShell command to get the current Throttle Size:
    Get-SPFarmConfig | Select WorkflowPostponeThreshold
    Use below PowerShell command to set new Throttle Size:
    Set-SPFarmConfig -WorkflowPostponeThreshold 100
    Increase Batch Size
    This is the size that determines the number of events processed for a single Workflow instance. The default value is 100, but it can range from 1 to any number.
    Use below PowerShell command to get the current Batch Size:
    Get-SPFarmConfig | Select WorkflowBatchSize
    Use below PowerShell command to set new Batch Size:
    Set-SPFarmConfig -WorkflowBatchSize 200
    Time Out
    This decides the time out of the Workflow event. The default value is 5 and can be any integer. The time is in minutes.
    Use below STSADM command to get the current Time Out value:
    stsadm -o getproperty -pn workflow-eventdelivery-timeout
    Use below STSADM command to set the new Time Out value:
    stsadm -o setproperty -pn workflow-eventdelivery-timeout -pv "15"
    Workflow Timer Interval
    This setting is applicable at Web Application level and not the farm level. The workflow timer interval specifies how often the workflow SPTimer job fires to process pending workflow tasks. This interval also represents the granularity of delay timers within
    your workflow. If a timer is set to delay for one minute, but the interval timer fires only every five minutes, the workflow delays for five minutes, not one minute.
    Use below STSADM command to get the current Workflow Timer Interval value:
    stsadm -o getproperty -pn job-workflow -url <Web Application Url>
    Use below STSADM command to set the new Workflow Timer Interval value:
    stsadm -o setproperty -pn job-workflow -pv "Every 10 minutes between 0 and 30" -url <Web Application Url>
    Here is the url for reference :
    http://praveenkasireddy.wordpress.com/2013/06/14/workflow-due-to-heavy-load-the-latest-workflow-operation-has-been-queued-it-will-attempt-to-resume-at-a-later-time/

  • A Single POP Sound from Left speaker, after system heavy load.

    Hello from Russia)) I bought a brand new MacBook Pro 15" Late 2011 Ci7, and this is an incredible machine. But a month passed and I started to notice a very strange behavior:
    When I use the Mac for resource-eating applications or processes, my computer starts to heat up - this is normal.
    When I close all the apps, here is where it starts to get interesting) I have iStat installed, so I can see what the temps in my system are. The computer is cooling down, and when the CPU reaches 50-55 degrees (Celsius, down from 80 degrees because of the heavy load), my left speaker produces a single loud pop sound (like something turning off, maybe).
    In all other use it is absolutely fine! It is very strange and I don't know whether this is normal or not(
    Thank you very much, and sorry for my English.

    That is not something I would expect.  I would go to a repair facility and have it checked.
    Ciao.

  • Oracle ODBC  - Internal Error - unable to initialize NLS during driver load

    I'm having some trouble with my ODBC connections which I hope someone can please help me with!
    About 6 weeks ago all was working as normal.
    As far as I know there have been no updates to the ORacle DB, the Windows XP operating system or the ODBC Drivers.
    Today when I opened access and visual case 2 to connect to Oracle I was at first greeted with a:
    unable to connect SQLState=IM004 SQL_HANDLE_ENV
    error. ODBC also kept crashing.
    I restarted the computer and was confronted with a different error:
    odbc SQLSTate 08004 ORA 12154 TNS could not resolve the connect identifier specified
    I was able to fix this error by setting the environment variable TNS_ADMIN in windows xp environment variables. I'm extremely confused about how this happened though as it was working and I don't think anything has changed.
    I was then able to connect to the database via Microsoft Access but when I opened Visual Case 2 and tried to make an update, I was confronted with the following error:
    Oracle ODBC Driver - internal error - unable to initialize NLS during driver load
    I looked in the registry at:
    HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\KEY_OraClient10g_home1
    HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\KEY_OraClient10g_home2
    HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\KEY_OraDb10g_home1
    and NLS_LANG was set to "AMERICAN_AMERICA.WE8MSWIN1252" in all 3 places.
    (Though KEY_OraClient10g_home2 only had 4 entries as opposed to KEY_OraClient10g_home1's 13 entries)...
    Since I made those changes I can no longer connect through Access.
    I just receive a ODBC - connection to 'xxx' failed
    Advice greatly appreciated!!!!
    Edited by: user11150264 on Aug 25, 2009 10:21 PM

    Actually it sort of does...
    I switched the ODBC connection to use instant client and now it's all working again.
    The biggest mystery is what changed to make it suddenly stop working the old way...

  • Error during database load in ECC 5.0

    Hello All,
    I am trying to install ECC 5.0 with Oracle 10g. During the database load phase it gives me an error; when I looked in the log file I found this.
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: START OF LOG: 20090611085559
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: version R6.40/V1.4
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe -ctf I E:/sap5/51031332_1/EXP1/DATA/SAP0000.STR C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/DDLORA.TPL C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/SAP0000.TSK ORA -l C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/SAP0000.log -o D
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: job completed
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: END OF LOG: 20090611085559
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: START OF LOG: 20090611103425
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: version R6.40/V1.4
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe -dbcodepage 1100 -i C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/SAP0000.cmd -l C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/SAP0000.log -stop_on_error
    DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1034
    DbSl Trace: CONNECT failed with sql error '1034'
    DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1034
    DbSl Trace: CONNECT failed with sql error '1034'
    (DB) ERROR: db_connect rc = 256
    DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1034
    DbSl Trace: CONNECT failed with sql error '1034'
    DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1034
    DbSl Trace: CONNECT failed with sql error '1034'
    (DB) ERROR: DbSlErrorMsg rc = 99
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: END OF LOG: 20090611103428
    can anybody help me to sort this out.
    Any help highly appreciated.
    Thanks & Regards
    Subhash

    First of all, thanks for helping me.
    I have tried this:
    Connected to an idle instance.
    SQL> STARTUP MOUNT
    ORACLE instance started.
    Total System Global Area  205520896 bytes
    Fixed Size                  1248116 bytes
    Variable Size             117441676 bytes
    Database Buffers           83886080 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    SQL> SET AUTORECOVERY ON
    SQL> RECOVER DATABASE
    ORA-00279: change 262364 generated at 06/11/2009 09:43:56 needed for thread 1
    ORA-00289: suggestion : C:\ORACLE\EPT\ORAARCH\EPTARCHARC00052_0689209324.001
    ORA-00280: change 262364 for thread 1 is in sequence #52
    ORA-00308: cannot open archived log
    'C:\ORACLE\EPT\ORAARCH\EPTARCHARC00052_0689209324.001'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    But I am not able to see any archive file in the oraarch folder.
    Regards
    Subhash

  • Strange Error During Page Load in Debug Mode (only) - Please Help!

    Hi All,
    Data base version: oracle 11g
    Apex version: Apex 4.1.1
    Webserver: Apache
    Need help with how to troubleshoot a Critical problem. The following error only occurs during page load in "Debug" mode. And, only occurs on a specific page within the application. A web page is served-up containing the following message and the application is blocked from running the page. The browser's (IE 8.0) back button must be clicked to proceed outside of "Debug" mode.
    "Error occurred while painting error page: ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06502: PL/SQL: numeric or value error: character string buffer too small"
    Debug log follows:
    "S H O W: application="2006" page="6" workspace="" request="" session="500549669426301"
    Computation point: Before Header
    ...Perform computation of item: APP_SERVER, type=FUNCTION_BODY
    ...Performing function body computation
    ...Execute Statement: declare function x return varchar2 is begin return owa_util.get_cgi_env('SERVER_NAME'); return null; end; begin wwv_flow.g_computation_result_vc := x; end;
    ......Result = 156.9.122.214
    ...Session State: Save "APP_SERVER" - saving same value: "156.9.122.214"
    Processes - point: BEFORE_HEADER
    ...Process "GET_POSITION" - Type: PLSQL
    ...Execute Statement: begin wwv_flow.g_boolean := :F109_POSITION_ID IS NULL and :APP_PAGE_ID != 101; end;
    ......Result = FALSE
    ......Skip because condition or authorization evaluates to FALSE
    ...Process "Get JARS Sifter Log File Record Count" - Type: PLSQL
    ...Execute Statement: begin DECLARE vcnt NUMBER := 0; BEGIN d('Get JARS Sifter Log File Record Count'); Select count(*) into vcnt From JARS.JARS_SIFTER_LOG Where moveid = to_number(:P6_MOVEID) and sifter_status IN ('F','J'); :F1000_P6_SIFTER_LOG_COUNT := to_char(vcnt); END; end;
    Custom: Get JARS Sifter Log File Record Count
    ...Process "Set PTM Planned Trip Status" - Type: PLSQL
    ......Skip because condition or authorization evaluates to FALSE
    ...compatibility mode - do not set mime type
    ...compatibility mode - do not set additional http headers
    ...close http header
    ...metadata, fetch item type settings
    ...metadata, fetch items
    Show page template header
    Rendering form open tag and internal values
    Add error onto error stack
    ...Error data:
    ......message: Error processing request.
    ......additional_info: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ......display_location: ON_ERROR_PAGE
    ......is_internal_error: true
    ......apex_error_code: APEX.UNHANDLED_ERROR
    ......ora_sqlcode: -6502
    ......ora_sqlerrm: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ......error_backtrace: ORA-06512: at "APEX_040100.WWV_FLOW", line 3027 ORA-06512: at "APEX_040100.WWV_FLOW", line 7867
    ...Show Error on Error Page
    ......Performing rollback
    Rendering form open tag and internal values
    ...Unhandled Error while painting error page: ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ...Error Backtrace: ORA-06512: at "APEX_040100.WWV_FLOW", line 2707 ORA-06512: at "APEX_040100.WWV_FLOW_ERROR", line 185
    End Page Rendering"
    Thanks!
    Bernard

    All,
    It appears that the page Javascript maximum limit size was reached. The error stopped appearing after some of the page Javascript code was removed out to Application Static Files. I wonder if there exists any "direct" indicator by the system whenever the size limit has been reached?
    Again, the run error only occurred when the page was loaded in "Debug" mode.
    Bernard

  • How to debug a transfer rule during data load?

    I am conducting a flat file (Excel sheet saved as a CSV file) data load.  The flat file contains a date field and the value is '12/18/1988'.  In the transfer rule for this field, I use a function call to convert this value to '19881218', which corresponds to the BW DATS format, but the monitor of the InfoPackage shows a red error:
    "Value '1981218' of characteristic 0DATE is not a number with 000008 spaces".
    Somehow, the last digit of the year 1988 was cut off and the year grabbed is 198 rather than 1988.  The function code is shown below:
    FUNCTION ZDM_CONVERT_DATE.
    ""Local Interface:
    *"  IMPORTING
    *"     REFERENCE(CHARDATE) TYPE  STRING
    *"  EXPORTING
    *"     REFERENCE(DATE) TYPE  D
    DATA:
    c_date(2) TYPE c,
    c_month(2) TYPE c,
    c_year(4) TYPE c,
    c_date_combined(8) TYPE c.
    data: text(10).
    text = chardate.
    search text for '/'.
    if sy-fdpos = 1.
      concatenate '0' text into text.
    endif.
    c_month = text(2).
    c_date = text+3(2).
    c_year = text+6(4).
    CONCATENATE c_year c_month c_date INTO c_date_combined.
    date = c_date_combined.
    ENDFUNCTION.
    Could the experts here tell me what's wrong, and also how to debug a transfer rule during data load?
    Thanks

    hey Bhanu/AHP,
    I found the reason.  Originally, I set the character length for the date InfoObject ZCHARDAT1 to 9, then I found that the date field value (12/18/1988) has length 10.  I then modified the InfoObject ZCHARDAT1 length from 9 to 10 and activated it.  But when defining the transfer rule for this field, before the code screen, I click the radio button "Selected Fields" and pick the field /BIC/ZCHARDAT1, then continue to the transfer rule code screen, but find the declaration lines for the InfoObject /BIC/ZCHARDAT1 are as follows:
      InfoObject ZCHARDAT1: CHAR - 000009
        /BIC/ZCHARDAT1(000009) TYPE C,
    That means even though I've modified the length to 10 for the InfoObject and activated it, somehow the transfer rule code screen still takes the old length 9.  Any idea how to get it to take the length 10 in the transfer rule code screen definition?
    Thanks

  • Short dump in ECC 6.0 during delta load

    Hi,
    We are currently working in BI upgrade project
    from BW 2.1c to BI 7.0
    At the same time R/3 is being upgraded from 4.0B to ECC 6.0
    During delta load we are getting short dump in R/3 system.
    Below is the short dump detalis
    The current ABAP program "SAPMSSY1" had to be terminated because it has come across a statement that unfortunately cannot be executed.
    The following syntax error occurred in program "/BI0/SAPLQI2LIS_11_VAITM" in include "/BI0/SAPLQI2LIS_11_VAITM": it is not Unicode-compatible, according to its attributes.
    If any help can be provided on this issue it would be great.
    Regards,
    Nikhil
    Edited by: Nikhil Sakhardanday on Sep 11, 2008 1:58 PM

    Hi,
    did you run UCCHECK?
    /manfred
