Optimizer failure

Dear Optimizer Experts,
The scenario is as follows:
Material X is extended to locations A and B. In the optimizer we don't want X to be planned at location A, so we have excluded location A from the optimizer variant. Location B is included in the variant because we do want planning at location B.
Now the optimizer is throwing the error "Material master not created", message no. /sapapo/snp042.
For your information, we have set the SDP relevance indicator to 1 for location A.
Please share your valuable feedback.
Thanks
Sumit Kalyan

Hi Sumit,
Assuming that you are using the cost-based optimizer, there are a number of ways to keep the optimizer from picking up material X at location A; use whichever one best fits your requirement.
1) Define a very high non-delivery penalty for location B and a zero penalty for location A
2) Block the transportation lane for that material from location A
3) Mark the material at location A with "Flag for deletion"; this is done in ECC and then CIFed to APO
Try any of these options; it should work.
Thanks,
Harsh

Similar Messages

  • Any way to identify causes of Optimization failures ???

    Hi guys
    We have recently been seeing regular optimization failures (06:45 every morning - Lite optimization).
    Unfortunately, the BPC logs only tell me that the job has failed, with no reason as to why it has failed.
    This job runs every 2 hours on our system and the 06:45 job is the only one that fails with any regularity.
    Is there any way that I can track what has actually gone on with the job, or am I stuck with only the job logs and EventViewer for information?
    BTW - This is BPC 5.1 SP5
    Thanks
    Craig

    There are definitely no scheduled jobs that clash with this one.
    How do I check the "Master data"? (And indeed, what is the master data?)
    I am not aware of any other jobs that run overnight on the server, other than this lite optimization every 2 hours and a full optimize.
    The results (every day) are as follows:
    00:45 - Lite Optimize - Success
    02:45 - Lite Optimize - Success
    03:30 - Full Optimize (no compression) - Success
    04:45 - Lite Optimize - Success
    06:45 - Lite Optimize - Failure
    08:45 - Lite Optimize - Success
    10:45 - Lite Optimize - Success
    ..... and so on throughout the day
    The job succeeds almost every time it is run (with the odd exception of a dimension update which can clash with the 08:45 job if we are changing it at the time).
    The only thing that changes before the 06:45 lite optimize is that the dependencies are dropped and recreated on each of the Fact tables:
    INS1..., INS2..., INS3..., INS11..., INS21..., INS31..., DMU... for each Application
    This is done because of a known issue in v5.1 which drops the dependencies of each application after a full optimize.
    As for running the job again 30 minutes later - That works perfectly as well.
    I just really need to know whether I have any way of finding out what other job(s) is/are running at the same time that could block the optimization, or whether the rebuilding of the dependencies could cause the issue (although that has been done for months on end without issue until now).
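    If the job log itself is a dead end, one more place to look (assuming the lite optimize ends up running as a SQL Server Agent job on the BPC 5.1 SQL back end, which is worth confirming first) is the Agent job history in msdb, which keeps per-step messages and also shows every other job that ran in the same window. A rough sketch of the kind of query I mean, in Python; the server name and date are placeholders:
    import pyodbc

    # Assumed connection details; adjust server/authentication to your environment.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=BPCSQLSERVER;DATABASE=msdb;Trusted_Connection=yes"
    )

    sql = """
    SELECT j.name, h.step_id, h.step_name, h.run_status,
           h.run_date, h.run_time, h.run_duration, h.message
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    WHERE h.run_date = ?                        -- int yyyymmdd, e.g. 20090310
      AND h.run_time BETWEEN 60000 AND 73000    -- int hhmmss: 06:00:00 to 07:30:00
    ORDER BY h.run_time, h.step_id
    """

    for row in conn.cursor().execute(sql, 20090310):
        # run_status: 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled
        print(row.name, row.step_name, row.run_status, row.message)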

  • Adobe Acrobat Pro 9 pdf optimizer failure..

    We have a problem optimizing files in Acrobat Pro 9.
    When we try to optimize the file(s) (Advanced > PDF Optimizer), Acrobat either crashes or just says "an error was encountered while processing images".
    The files we have are rather large, from 50 MB to 112 MB and up to 17,000 pages.
    We can't split them into smaller files.
    I can't find anything in the error log.
    This happens on at least 4 computers, with different settings and hardware.
    Any suggestions?
    Thanks.

    You've likely exceeded your computers' capacity to process the necessary commands specified in the Optimizer settings chosen.
    During optimization, a temp file will be created that is approximately the same size as the file being processed. The computer must be able to handle the additional load of each page being analyzed and processed. Text is not so bad, but layered images can be murder on memory.
    There are some cases when an optimized pdf might actually end up being a larger file size than the original. Sometimes a simple Save As can dramatically reduce file size.
    What is the goal of the optimization? How are you planning to distribute the document? Is there a reason for not providing multiple files instead of one large file? If you can give a few more details, some of the folks here might be able to give some helpful ideas.
    FYI: A good article: "Understanding Acrobat's Optimizer" http://www.appligent.com/talkingpdf-understandingacrobatsoptimizer
    It doesn't answer your question, but it does provide good information about making optimizer choices.

  • Database performance is poor after upgrading to 9i

    Hi guys,
    My system was upgraded to 9i by a third party; I took over after that and I am getting continuous complaints about database performance.
    There are 150 users who connect using Citrix from different locations, and the DB size is 20 GB.
    This is my init.ora file:
    # Cache and I/O
    db_block_size = 4096
    db_cache_size=591396864
    db_file_multiblock_read_count=16
    # Cursors and Library cache
    open_cursors=3000
    # Database Identification
    db_domain=""
    db_name=db_live
    # Diagnostics and Statistics
    background_dump_dest=D:\oracle\admin\db_live\bdump
    core_dump_dest=D:\oracle\admin\db_live\cdump
    timed_statistics=TRUE
    user_dump_dest=D:\oracle\admin\db_live\udump
    # File Configuration
    control_files=("D:\oracle\oradata\db_live\control01.ctl","D:\oracle\oradata\db_live\control02.ctl","D:\oracle\oradata\db_live\control03.ctl")
    # Instance Identification
    instance_name = live
    # Job Queues
    job_queue_processes=10
    # MTS
    dispatchers = "(PROTOCOL=TCP)"
    # Miscellaneous
    aq_tm_processes = 1
    compatible=9.2.0.0.0
    # Optimizer
    hash_join_enabled=TRUE
    query_rewrite_enabled=FALSE
    star_transformation_enabled=FALSE
    # Pools
    java_pool_size=0
    large_pool_size=145752064
    shared_pool_size=197132288
    # Processes and Sessions
    processes=400
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Security and Auditing
    remote_login_passwordfile=EXCLUSIVE
    # SORT, HASH Joins, Bitmap Indexes
    pga_aggregate_target=525336576
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_retention=10800
    undo_tablespace=UNDOTBS1
    I feel the SGA size and sort area size are not correct. If we increase both values, will it solve some of my problems?
    Abhi
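    (Just to put numbers on the memory side: summing the pool parameters from the init.ora above gives the rough SGA footprint currently configured. A quick sketch of the arithmetic in Python, values copied from the file:)
    # Rough memory footprint from the init.ora values above (bytes to MB).
    MB = 1024 * 1024
    sga_pools = 591_396_864 + 197_132_288 + 145_752_064 + 0   # db_cache + shared + large + java
    print(sga_pools / MB)                 # ~891 MB of SGA pools (log buffer etc. extra)
    print(525_336_576 / MB)               # ~501 MB pga_aggregate_target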

    I have run some scripts to check the performance.
    Here is the output:
    Hit Ratio Section
    =========================
    BUFFER HIT RATIO
    =========================
    (should be > 70, else increase db_block_buffers in init.ora)
    Buffer Hit Ratio
    77
    logical_reads phys_reads phy_writes BUFFER HIT RATIO
    89,127,477 20,013,061 1,556,989 78
    =========================
    DATA DICT HIT RATIO
    =========================
    (should be higher than 90 else increase shared_pool_size in init.ora)
    Data Dict. Gets Data Dict. cache misses DATA DICT CACHE HIT RATIO
    11,785,386 40,955 99
    =========================
    LIBRARY CACHE MISS RATIO
    =========================
    (If > .1, i.e., more than 1% of the pins resulted in reloads, then
    increase the shared_pool_size in init.ora)
    executions Cache misses while executing LIBRARY CACHE MISS RATIO
    9,845,481 11,831 .0012
    =========================
    Library Cache Section
    =========================
    hit ratio should be > 70, and pin ratio > 70 ...
    NAMESPACE Hit ratio pin hit ratio reloads
    SQL AREA 70 94 7,293
    TABLE/PROCEDURE 99 98 4,529
    BODY 87 68 9
    TRIGGER 99 99 0
    INDEX 98 98 0
    CLUSTER 98 98 0
    OBJECT 100 100 0
    PIPE 100 100 0
    JAVA SOURCE 100 100 0
    JAVA RESOURCE 100 100 0
    JAVA DATA 100 100 0
    =========================
    REDO LOG BUFFER
    =========================
    redo log space requests 37
    Pool's Free Memory
    POOL NAME BYTES
    shared pool free memory 8,075,760
    large pool free memory 127,547,048
    SQL Summary Section
    Tot SQL run since startup SQL executing now
    4,727,276 3,833
    Lock Section
    =========================
    SYSTEM-WIDE LOCKS - all requests for locks or latches
    =========================
    Processing Locks and Latches, please standby...
    User Lock Type Mode Held
    XR Null
    Temp Segment Row-X (SX)
    SYNTECH Transaction Exclusive
    SYNTECH DML Row-S (SS)
    SYNTECH Transaction Exclusive
    SYNTECH DML Row-X (SX)
    SYNTECH DML Row-X (SX)
    SYNTECH DML Row-X (SX)
    SYNTECH Transaction Exclusive
    SYNTECH DML Row-S (SS)
    =========================
    DDL LOCKS - These are usually triggers or other DDL
    =========================
    User Owner Name Type Mode held
    SYNTECH SYS DBMS_TRANSACTIO Table/Procedure/Type Null
    SYNTECH SYS DBMS_TRANSACTIO Table/Procedure/Type Null
    SYNTECH SYNTECH SW_LUC_SEQ Table/Procedure/Type Null
    SYNTECH SYS DBMS_UTILITY Body Null
    SYNTECH SYS DBMS_UTILITY Body Null
    SYSTEM SYSTEM SYSTEM 18 Null
    SYNTECH SYS DBMS_UTILITY Table/Procedure/Type Null
    SYNTECH SYS DBMS_UTILITY Table/Procedure/Type Null
    SYSTEM SYS DBMS_OUTPUT Body Null
    SUMMIT SUMMIT SUMMIT 18 Null
    SUMMIT SUMMIT SUMMIT 18 Null
    User Owner Name Type Mode held
    SUMMIT SUMMIT SUMMIT 18 Null
    SUMMIT SUMMIT SUMMIT 18 Null
    SUMMIT SUMMIT SUMMIT 18 Null
    SUMMIT SUMMIT SUMMIT 18 Null
    SUMMIT SUMMIT SUMMIT 18 Null
    SUMMIT SUMMIT SUMMIT 18 Null
    SUMMIT SUMMIT SUMMIT 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    User Owner Name Type Mode held
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    User Owner Name Type Mode held
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    User Owner Name Type Mode held
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    User Owner Name Type Mode held
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    User Owner Name Type Mode held
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    User Owner Name Type Mode held
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYNTECH SYNTECH 18 Null
    SYNTECH SYS DBMS_TRANSACTIO Body Null
    User Owner Name Type Mode held
    SYNTECH SYS DBMS_TRANSACTIO Body Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    User Owner Name Type Mode held
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SUMMIT SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SUMMIT SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    User Owner Name Type Mode held
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    User Owner Name Type Mode held
    SUMMIT SYS DATABASE 18 Null
    SUMMIT SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    User Owner Name Type Mode held
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYSTEM SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    User Owner Name Type Mode held
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    User Owner Name Type Mode held
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SUMMIT SYS DATABASE 18 Null
    SUMMIT SYS DATABASE 18 Null
    SUMMIT SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SUMMIT SYS DATABASE 18 Null
    SUMMIT SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    User Owner Name Type Mode held
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYNTECH SYS DATABASE 18 Null
    SYSTEM SYS DBMS_OUTPUT Table/Procedure/Type Null
    SYNTECH SYS DBMS_APPLICATIO Table/Procedure/Type Null
    SYNTECH SYS DBMS_APPLICATIO Table/Procedure/Type Null
    SYSTEM SYS DBMS_APPLICATIO Table/Procedure/Type Null
    SYNTECH SYS DBMS_APPLICATIO Body Null
    SYNTECH SYS DBMS_APPLICATIO Body Null
    SYSTEM SYS DBMS_APPLICATIO Body Null
    =========================
    DML LOCKS - These are table and row locks...
    =========================
    User Owner Name Mode held
    SYNTECH SYNTECH SYJOBTRN Row-S (SS)
    SYNTECH SYNTECH JCINVTRN Row-X (SX)
    SYNTECH SYNTECH JCBATCH Row-S (SS)
    SYNTECH SYNTECH JCIVBK Row-X (SX)
    SYNTECH SYNTECH JCWSTG Row-S (SS)
    SYNTECH SYNTECH JCINVC Row-X (SX)
    Latch Section
    if miss_ratio or immediate_miss_ratio > 1 then latch
    contention exists, decrease LOG_SMALL_ENTRY_MAX_SIZE in init.ora
    NAME miss_ratio immediate_miss_ratio
    library cache .04 .24
    virtual circuit queues .42 .00
    Rollback Segment Section
    if any count below is > 1% of the total number of requests for data
    then more rollback segments are needed
    CLASS COUNT
    free list 0
    undo block 1
    undo header 201
    system undo block 0
    system undo header 0
    Tot # of Requests for Data
    89,163,600
    =========================
    ROLLBACK SEGMENT CONTENTION
    =========================
    If any ratio is > .01 then more rollback segments are needed
    NAME WAITS GETS Ratio
    SYSTEM 0 625 .00000
    _SYSSMU1$                              31      67196    .00046                 
    _SYSSMU2$                               0      30118    .00000                 
    _SYSSMU3$                              18      75372    .00024                 
    _SYSSMU4$                               9      36534    .00025                 
    _SYSSMU5$                              28     173253    .00016                 
    _SYSSMU6$                               0      31350    .00000                 
    _SYSSMU7$                               0      28957    .00000                 
    _SYSSMU8$                              10      62721    .00016                 
    _SYSSMU9$                               3      44480    .00007                 
    _SYSSMU10$                             37      42719    .00087                 
    NAME WAITS GETS Ratio
    _SYSSMU11$                              0      20228    .00000                 
    _SYSSMU12$                              1      19168    .00005                 
    Session Event Section
    if average-wait > 20 then contention might exists
    EVENT TOTAL_WAITS TOTAL_TIMEOUTS AVERAGE_WAIT
    latch free 5 1 1
    latch free 2 0 1
    buffer busy waits 362 0 1
    buffer busy waits 1 0 25
    buffer busy waits 217 0 1
    log buffer space 4 0 5
    log file switch completion 4 0 3
    log file switch completion 7 0 3
    log file sync 253 0 1
    log file sync 12 1 11
    log file sync 307 1 1
    EVENT TOTAL_WAITS TOTAL_TIMEOUTS AVERAGE_WAIT
    db file sequential read 776 0 1
    db file sequential read 53 0 3
    db file sequential read 2,791 0 1
    db file sequential read 2,125 0 1
    db file sequential read 728 0 1
    db file sequential read 12 0 1
    db file sequential read 1,710 0 1
    db file sequential read 237 0 1
    db file sequential read 1 0 1
    db file sequential read 404 0 1
    db file sequential read 731 0 1
    EVENT TOTAL_WAITS TOTAL_TIMEOUTS AVERAGE_WAIT
    db file sequential read 1 0 1
    db file sequential read 18 0 1
    db file sequential read 11 0 3
    db file sequential read 4 0 1
    db file sequential read 64 0 4
    db file sequential read 119 0 1
    db file sequential read 1 0 1
    db file sequential read 3 0 2
    db file sequential read 511 0 2
    db file sequential read 608 0 1
    db file sequential read 2 0 1
    EVENT TOTAL_WAITS TOTAL_TIMEOUTS AVERAGE_WAIT
    db file sequential read 6 0 1
    db file sequential read 5 0 1
    db file sequential read 80 0 1
    db file sequential read 74 0 2
    db file sequential read 40 0 3
    db file sequential read 876 0 1
    db file sequential read 2,791 0 1
    db file sequential read 5 0 1
    db file sequential read 2 0 1
    db file sequential read 315 0 1
    db file scattered read 7 0 1
    EVENT TOTAL_WAITS TOTAL_TIMEOUTS AVERAGE_WAIT
    db file scattered read 15 0 1
    db file scattered read 5 0 4
    db file scattered read 8 0 1
    db file scattered read 2 0 1
    db file scattered read 1 0 1
    db file scattered read 8 0 1
    db file scattered read 129 0 1
    db file scattered read 7 0 5
    db file scattered read 1 0 1
    db file scattered read 6 0 1
    db file scattered read 4 0 1
    EVENT TOTAL_WAITS TOTAL_TIMEOUTS AVERAGE_WAIT
    db file scattered read 2 0 1
    db file scattered read 5 0 4
    db file scattered read 4 0 3
    db file scattered read 4 0 1
    db file scattered read 10,487 0 1
    db file scattered read 2 0 1
    db file scattered read 3 0 1
    db file scattered read 2 0 1
    db file scattered read 3 0 1
    db file scattered read 13 0 2
    db file scattered read 17 0 1
    EVENT TOTAL_WAITS TOTAL_TIMEOUTS AVERAGE_WAIT
    db file scattered read 2 0 1
    db file scattered read 3 0 1
    db file parallel read 32 0 1
    db file parallel read 1 0 2
    db file parallel read 29 0 2
    db file parallel read 38 0 1
    db file parallel read 24 0 2
    db file parallel read 52 0 1
    db file parallel read 33 0 1
    undo segment extension 9 9 2
    76 rows selected.
    Queue Section
    average wait for queues should be near zero ...
    PADDR Queue type # queued WAIT TOTALQ AVG WAIT
    00 COMMON 0 3195556 10,273,816 .311038858
    1DCB67CC DISPATCHER 0 14654 10,649,458 .001376032
    2 rows selected.
    Multi-threaded Server Section
    If the following number is > 1
    then increase MTS_MAX_SERVERS parm in init.ora
    Avg wait per request queue
    .311037586853935493365783330857794608413 hundredths of seconds
    1 row selected.
    If the following number increases, consider adding dispatcher processes
    Avg wait per response queue
    .001376211356617059142626710087687462551 hundredths of seconds
    =========================
    DISPATCHER USAGE
    =========================
    (If Time Busy > 50, then change
    MTS_MAX_DISPATCHERS in init.ora)
    NAME STATUS IDLE BUSY Time Busy
    D000 WAIT 3,506,156 89,948 2.501
    Shared Server Processes
    0
    high-water mark for the multi-threaded server
    MAXIMUM_CONNECTIONS MAXIMUM_SESSIONS SERVERS_STARTED SERVERS_TERMINATED
    SERVERS_HIGHWATER
    157 157 1721 1721
    17
    file i/o should be evenly distributed across drives.
    # Name STATUS BYTES PHYRDS PHYWRTS
    1 F:\ORACLE\ORADATA\DB_LIVE\SYST SYSTEM 262,144,000 16239 735
    2 F:\ORACLE\ORADATA\DB_LIVE\UNDO ONLINE 2,222,981,120 1962 191422
    3 F:\ORACLE\ORADATA\DB_LIVE\CONQ ONLINE 17,179,860,992 2361254 111110
    4 F:\ORACLE\ORADATA\DB_LIVE\INDX ONLINE 26,214,400 20 18
    5 F:\ORACLE\ORADATA\DB_LIVE\SUMM ONLINE 162,529,280 303 60
    6 F:\ORACLE\ORADATA\DB_LIVE\TOOL ONLINE 10,485,760 20 18
    7 F:\ORACLE\ORADATA\DB_LIVE\USER ONLINE 26,214,400 20 18
    8 F:\ORACLE\ORADATA\DB_LIVE\CONQ ONLINE 1,263,534,080 2254864 123356
    SYSTEM_STATISTIC VALUE
    CPU used by this session 639,861
    CPU used when call started 639,807
    CR blocks created 27,293
    Cached Commit SCN referenced 0
    Commit SCN cached 0
    DBWR buffers scanned 1,494,581
    DBWR checkpoint buffers written 240,048
    DBWR checkpoints 18
    DBWR cross instance writes 0
    DBWR free buffers found 1,319,309
    DBWR fusion writes 0
    SYSTEM_STATISTIC VALUE
    DBWR lru scans 1,210
    DBWR make free requests 1,210
    DBWR revisited being-written buffer 0
    DBWR summed scan depth 1,494,581
    DBWR transaction table writes 271
    DBWR undo block writes 191,098
    DDL statements parallelized 0
    DFO trees parallelized 0
    DML statements parallelized 0
    OTC commit optimization attempts 0
    OTC commit optimization failure - setup 0
    SYSTEM_STATISTIC VALUE
    OTC commit optimization hits 0
    PX local messages recv'd 0
    PX local messages sent 0
    PX remote messages recv'd 0
    PX remote messages sent 0
    Parallel operations downgraded 1 to 25 pct 0
    Parallel operations downgraded 25 to 50 pct 0
    Parallel operations downgraded 50 to 75 pct 0
    Parallel operations downgraded 75 to 99 pct 0
    Parallel operations downgraded to serial 0
    Parallel operations not downgraded 0
    SYSTEM_STATISTIC VALUE
    RowCR - row contention 0
    RowCR attempts 0
    RowCR hits 0
    SQL*Net roundtrips to/from client 20,533,267
    SQL*Net roundtrips to/from dblink 0
    Unnecesary process cleanup for SCN batching 0
    active txn count during cleanout 82,931
    background checkpoints completed 17
    background checkpoints started 18
    background timeouts 42,216
    branch node splits 222
    SYSTEM_STATISTIC VALUE
    buffer is not pinned count 58,473,081
    buffer is pinned count 58,622,335
    bytes received via SQL*Net from client 690,006,487
    bytes received via SQL*Net from dblink 0
    bytes sent via SQL*Net to client 102,210,355,400
    bytes sent via SQL*Net to dblink 0
    calls to get snapshot scn: kcmgss 8,092,887
    calls to kcmgas 130,839
    calls to kcmgcs 87,223
    calls to kcmgrs 0
    change write time 16,834
    SYSTEM_STATISTIC VALUE
    cleanout - number of ktugct calls 91,022
    cleanouts and rollbacks - consistent read gets 15,130
    cleanouts only - consistent read gets 16,490
    cluster key scan block gets 126,689
    cluster key scans 74,722
    cold recycle reads 0
    commit cleanout failures: block lost 708
    commit cleanout failures: buffer being written 59
    commit cleanout failures: callback failure 49
    commit cleanout failures: cannot pin 0
    commit cleanout failures: hot backup in progress 0
    SYSTEM_STATISTIC VALUE
    commit cleanout failures: write disabled 0
    commit cleanouts 488,120
    commit cleanouts successfully completed 487,304
    commit txn count during cleanout 36,303
    consistent changes 155,131
    consistent gets 74,645,555
    consistent gets - examination 16,776,826
    current blocks converted for CR 4
    cursor authentications 55,278
    data blocks consistent reads - undo records applied 154,868
    db block changes 12,976,485
    SYSTEM_STATISTIC VALUE
    db block gets 14,531,153
    deferred (CURRENT) block cleanout applications 137,895
    deferred CUR cleanouts (index blocks) 0
    dirty buffers inspected 9,671
    enqueue conversions 11,684
    enqueue deadlocks 0
    enqueue releases 184,564
    enqueue requests 184,686
    enqueue timeouts 94
    enqueue waits 1
    exchange deadlocks 0
    SYSTEM_STATISTIC VALUE
    execute count 7,319,198
    free buffer inspected 10,181
    free buffer requested 19,253,045
    gcs messages sent 0
    ges messages sent 0
    global cache blocks corrupt 0
    global cache blocks lost 0
    global cache claim blocks lost 0
    global cache convert time 0
    global cache convert timeouts 0
    global cache converts 0
    SYSTEM_STATISTIC VALUE
    global cache cr block build time 0
    global cache cr block flush time 0
    global cache cr block receive time 0
    global cache cr block send time 0
    global cache cr blocks received 0
    global cache cr blocks served 0
    global cache current block flush time 0
    global cache current block pin time 0
    global cache current block receive time 0
    global cache current block send time 0
    global cache current blocks received 0
    SYSTEM_STATISTIC VALUE
    global cache current blocks served 0
    global cache defers 0
    global cache freelist waits 0
    global cache get time 0
    global cache gets 0
    global cache prepare failures 0
    global cache skip prepare failures 0
    global lock async converts 0
    global lock async gets 0
    global lock convert time 0
    global lock get time 0
    SYSTEM_STATISTIC VALUE
    global lock releases 0
    global lock sync converts 0
    global lock sync gets 0
    hot buffers moved to head of LRU 1,334,001
    immediate (CR) block cleanout applications 31,620
    immediate (CURRENT) block cleanout applications 256,895
    immediate CR cleanouts (index blocks) 0
    index fast full scans (direct read) 0
    index fast full scans (full) 6,363
    index fast full scans (rowid ranges) 0
    index fetch by key 7,199,288
    SYSTEM_STATISTIC VALUE
    index scans kdiixs1 2,173,768
    instance recovery database freeze count 0
    kcmccs called get current scn 0
    kcmgss read scn without going to GES 0
    kcmgss waited for batching 0
    leaf node 90-10 splits 137
    leaf node splits 20,302
    logons cumulative 2,347
    logons current 86
    messages received 60,379
    messages sent 60,378
    SYSTEM_STATISTIC VALUE
    native hash arithmetic execute 0
    native hash arithmetic fail 0
    next scns gotten without going to GES 0
    no buffer to keep pinned count 5
    no work - consistent read gets 55,487,127
    number of map misses 0
    number of map operations 0
    opened cursors cumulative 692,633
    opened cursors current 11,134
    opens of replaced files 0
    opens requiring cache replacement 0
    SYSTEM_STATISTIC VALUE
    parse count (failures) 35
    parse count (hard) 222,131
    parse count (total) 740,269
    parse time cpu 71,868
    parse time elapsed 79,481
    physical reads 20,026,636
    physical reads direct 1,052,614
    physical reads direct (lob) 0
    physical writes 1,557,017
    physical writes direct 1,130,421
    physical writes direct (lob) 0
    SYSTEM_STATISTIC VALUE
    physical writes non checkpoint 1,477,234
    pinned buffers inspected 344
    prefetch clients - 16k 0
    prefetch clients - 2k 0
    prefetch clients - 32k 0
    prefetch clients - 4k 0
    prefetch clients - 8k 0
    prefetch clients - default 223
    prefetch clients - keep 0
    prefetch clients - recycle 0
    prefetched blocks 14,445,041
    SYSTEM_STATISTIC VALUE
    prefetched blocks aged out before use 2,636
    process last non-idle time ################
    queries parallelized 0
    recovery array read time 0
    recovery array reads 0
    recovery blocks read 0
    recursive calls 1,911,332
    recursive cpu usage 4,012
    redo blocks written 3,597,536
    redo buffer allocation retries 337
    redo entries 6,634,895
    SYSTEM_STATISTIC VALUE
    redo log space requests 37
    redo log space wait time 284
    redo log switch interrupts 0
    redo ordering marks 3
    redo size 1,772,883,980
    redo synch time 6,781
    redo synch writes 27,527
    redo wastage 11,115,384
    redo write time 16,597
    redo writer latching time 5
    redo writes 51,171
    SYSTEM_STATISTIC VALUE
    remote instance undo block writes 0
    remote instance undo header writes 0
    rollback changes - undo records applied 88,835
    rollbacks only - consistent read gets 12,179
    rows fetched via callback 3,423,058
    serializable aborts 0
    session connect time ################
    session cursor cache count 0
    session cursor cache hits 0
    session logical reads 89,176,705
    session pga memory 35,556,660
    SYSTEM_STATISTIC VALUE
    session pga memory max 132,532,528
    session stored procedure space 0
    session uga memory 50,817,520
    session uga memory max 497,741,936
    shared hash latch upgrades - no wait 2,566,917
    shared hash latch upgrades - wait 32
    sorts (disk) 130
    sorts (memory) 196,268
    sorts (rows) 18,962,940
    summed dirty queue length 376,151
    switch current to new buffer 33,449
    SYSTEM_STATISTIC VALUE
    table fetch by rowid 39,128,298
    table fetch continued row 556,214
    table lookup prefetch client count 0
    table scan blocks gotten 32,208,464
    table scan rows gotten 2,455,389,745
    table scans (cache partitions) 0
    table scans (direct read) 0
    table scans (long tables) 3,754
    table scans (rowid ranges) 0
    table scans (short tables) 124,879
    total file opens 0
    SYSTEM_STATISTIC VALUE
    total number of slots 0
    transaction lock background get time 0
    transaction lock background gets 0
    transaction lock foreground requests 0
    transaction lock foreground wait time 0
    transaction rollbacks 366
    transaction tables consistent read rollbacks 1
    transaction tables consistent reads - undo records appl 260
    user calls 10,869,191
    user commits 27,014
    user rollbacks 411
    SYSTEM_STATISTIC VALUE
    workarea executions - multipass 36
    workarea executions - onepass 218
    workarea executions - optimal 131,726
    workarea memory allocated 888
    write clones created in background 1
    write clones created in foreground 48
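    (A quick arithmetic check of the two headline ratios in the report, in plain Python, using the figures copied from the output above:)
    # Buffer cache hit ratio = (logical reads - physical reads) / logical reads
    logical_reads = 89_127_477
    phys_reads = 20_013_061
    print(round((logical_reads - phys_reads) / logical_reads * 100))   # ~78, as reported
    # Library cache miss ratio = cache misses while executing / executions
    executions = 9_845_481
    misses = 11_831
    print(round(misses / executions, 4))                               # .0012, as reported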

  • Process dimensions and Applications after Restore

    We are using BPC 7.0 MS. We need to process dimensions and applications after the database restore in case of any optimization failures. It takes around 3 hours for us to do this. Can we skip the BPC processing by taking backups of the database and SSAS and restoring them in case of any failures?
    Thanks
    Raj

    Hi Raj,
    You can try, but remember that after the restore you must execute a "Modify Application" for all your applications, so you may not gain any time.
    Kind regards
    Roberto

  • OS X 10.4.4 DOWNLOAD HANGING AT "Optimizing System Performance" STAGE

    I began downloading the 10.4.4 update about 1.5 hours ago; most of the download executed fairly quickly, within 10 minutes. But then it got to the "Optimizing System Performance" stage, and it has been hanging there ever since. It's accompanied by the spinning beach ball.
    Is this common, or at least normal?
    If not, what should I do? How long should I wait? I don't know if I can even get my iBook to force quit at this stage.
    thanks in advance for your help.
    T.W.

    Timothy,
    What you are experiencing is not common or normal, but can hopefully be rectified.
    According to "iBook: If the Computer Won't Respond", you can force quit by holding the power button for several seconds.
    Restart from the Tiger installation CD and run Disk Utility as directed in the "Try Disk Utility" paragraph of "Using Disk Utility and fsck to resolve startup issues or perform disk maintenance".
    Next, attempt a normal startup of your computer.
    "Mac OS X: Troubleshooting installation and software updates" says this about optimization failures:
    Installation fails during "optimization"
    If the installation fails during "optimization," all of the software was installed. There is no risk of an "incomplete installation." The optimization phase of an installation only affects performance, not stability or features. You may force optimization to be repeated by reinstalling the software. If you were using the Software Update pane of System Preferences on the first attempt, you will need to download the standalone installer of the same software from Apple Downloads in order to reinstall the software. ;~)

  • Optimizer job failure

    Hi,
    We have encountered a similar failure in the optimizer background job two days in a row. The optimizer fails with an error that says "An Exception occurred in communication object."
    Please let me know if running a consistency check on the APO database would be of any help in resolving this issue. Thank you.
    Rishikesh

    Hi Rishikesh,
    You could check whether your optimizer server is properly reachable from SCM.
    The issue most likely lies in the Basis domain.
    There is an SAP document on optimizer setup, but unfortunately I can't attach it here.
    I am copy-pasting some things that could be checked by the Basis team (choose the optimizer below that you use):
    1. Log on to the SAP SCM System.
    2. Call transaction SM59.
    The Display and maintain RFC destinations screen appears.
    3. Open the node for TCP/IP connection.
    4. For the first optimizer server, you have to adapt the following RFC entries:
    - OPTSERVER_CTM01
    - OPTSERVER_DPS01
    - OPTSERVER_SNP01
    - OPTSERVER_SEQ01
    - OPTSERVER_VSR01
    - OPTSERVER_MMP01
    - OPTSERVER_CS01
    For the second optimizer server, the RFC entry names end with 02 (for example,
    OPTSERVER_CTM02) and so on.
    To adapt an RFC entry:
    a. Double-click the destination name OPTSERVER_<Optimizer>01.
    The RFC Destination OPTSERVER_<Optimizer>01 screen appears.
    b. Depending on the server you must do the following to check the RFC entries:
    A) Standalone Optimizer Server:
    i. Choose Start on Explicit host.
    ii. In the Program field check your program path (see table Program Paths of RFC Entries below).
    iii. Check the name of your Target Host.
    iv. Enter the number of the gateway host and the corresponding gateway service SAPGW<GW_NO>. You can find out the required parameters on your target host as follows:
    a. On your target host, call transaction SMGW
    b. Choose Goto → Parameters → Display (see the entries for gateway hostname and gateway service)
    v. Confirm with O.K.
    B) Optimizer and SAP SCM on Same Server:
    i. Choose Start on Application server.
    ii. In the Program entry field check your program path (see table Program Paths of RFC Entries below).
    If your SAP SCM server is a Unicode system, you must do the following in addition to the above setting for each OPTSERVER_<Optimizer>01 destination:
    a. Choose the tab MDMP & Unicode.
    b. In the group frame Communication Type With Target System you must select the flag Non-Unicode and the flag Inactive for the MDMP Settings box.
    I hope this is helpful as a basic check.
    If your Basis team is not able to resolve the issue, it would be better to reach out to SAP through OSS.
    Thanks - Pawan

  • G770 won't boot normally or in safe mode... I suspect it has something to do with the boot optimizer

    My Lenovo G770 will not boot in normal or safe mode. I usually escape out of the boot optimizer. Today I let it run, and it went to the "Starting Windows" screen with a brief start of the Windows 7 animation... it freezes for a second, then there is a quick flash of the BSOD, then the "Windows Error Recovery" page giving me the option of "Start Windows Normally" or "Launch Startup Repair (recommended)".
    Starting Windows normally eventually brings me back to the same place, repeating what I just described in the paragraph above.
    When I launch Startup Repair, it "cannot repair this computer automatically".
    So I go to view advanced options for system recovery and support.
    It brings me to 5 options:
    Startup Repair (we already tried this above)
    System Restore (unfortunately I didn't create any restore points)
    System Image Recovery (unfortunately I haven't created an image to recover)
    Windows Memory Diagnostic (no problems found-done several times)
    Command Prompt (don't know what I can do here except for remove a bad/corrupted driver which may be the problem, but I don't know the driver name that is associated with the boot optimizer...can anyone tell me this?)
    I've tried booting to safe mode in all of its incarnations and I can't even do that. It repeats the same things as stated above: the Windows 7 animation briefly starts, then it locks up, there is a flash of the BSOD, then the Windows Error Recovery page.
    I've tried booting to the last known good configuration (same thing occurs: brief start of the Windows 7 animation, freeze, flash of the BSOD, then the error recovery page).
    The only thing that has given me any kind of result was disabling automatic restart on system failure. When I do this, the BSOD doesn't flash briefly; it stays, and it gives me the error message PAGE_FAULT_IN_NONPAGED_AREA.
    I'm at a loss as to what to do. Not being able to boot even into Safe Mode is really frustrating. Any advice from anyone? Can I remove the driver associated with the boot optimizer? If so, what is the name of the driver and where (in which directory) is it located?

    How did you resolve the issue?
    I have exactly the same issue.
    When I go to System Image Recovery --> Select System Image --> Advanced, I can open all the drives (Local Drive (C:), LENOVO (D:), Local Disk (E:), and Boot (X:), which I think is where the boot executable lives). It comes up with an Open prompt asking me to enter a file name, with file type: Setup Information.
    I don't know which setup information it wants or where to find it on my drives.
    Does anyone know how to fix this?
    I was trying to re-install Win 7 from the DVD but it is not executing either.
    Can I boot with a USB Ubuntu and install Win 7 from there? But how?
    I need help.

  • How to run a do-while loop on the 2nd execution after a stop on failure occurs?

    I am trying to use the Stop on Failure process model callback from the TestStand Examples.
    If a step fails within a Do-While loop and the test is terminated, the second time the test is run (continuing to the Next UUT in the process model), the condition for the do-while loop is checked first before it ever enters the loop. This is incorrect because the Do-While loop should execute once and then check the "while" condition.
    Also, If I was to stop the execution altogether and then restart the test (instead of continuing with the Next UUT), it runs the loop once, then checks the "while" condition.
    I'm not sure if I am describing this clearly enough. The execution seems to flow like this:
    Start test
    Do
    NumericTest step FAILS  -> terminate
    Next UUT starts
    While (condition is false and skips over Numeric Test step).
    So it seems that TestStand thinks it is still within that Do-While loop on the second execution, and the relevant runtime variables are not reset correctly.
    Is this a TestStand bug?  It happens in both TestStand 3.1 and 3.5.
    Is there any way around this?
    Thanks for any help.

    If it is what I think it is, it is a bug. Try unchecking Sequence Properties>>Optimize Non-Reentrant Calls to This Sequence and please tell me if that fixes it.

  • I am facing multiple application failures, system crash reports, all with a locale id of 16393

    Product name: HP G62-361TX, Windows 7 Home Basic, 64-bit.
    I am facing several application failures and system crash reports, each with a locale ID of 16393 in the problem signature, displayed in the problem reports of Action Centre. These failures include: .NET Runtime Optimization, Adobe Reader stopped working, Antimalware Service Executable (MpTelemetry) stopped working, Bing Bar stopped working, TBApp error, COM Surrogate stopped working, CometBird stopped working, Host Process for Windows Services app crash, HP Advisor app crash, HP Quick Synchronisation Service and HBPA Service stopped working, hpqwmiex module stopped working, HPWMISVC application stopped working, etc., and all have a locale ID of 16393. Even replacement of the RAM has not solved this problem; I fear this is a CPU core related problem, can anyone please confirm? The system starts normally, but every time it shows a warning/error message in the Event Viewer log.
    Even after reformatting the HD and replacing the RAM, the problem still persists, shown by Event Viewer entries like boot performance monitoring, standby performance monitoring, etc. Please help me diagnose the hardware problem.

  • Failure modes in TCP WRITE?

    I need help diagnosing an issue where TCP communications breaks down between my host (Windows) and a PXI (LabVIEW RT 2010).
    The bottom-line questions are these:
    1...Are there circumstances in which TCP WRITE, given a string of say, 10 characters, will write more than zero and fewer than 10 characters to the connection? If so, what are those circumstances?
    2...Is it risky to use a timeout value of 1 mSec?  Further thought seems to say that I won't get a 1000 uSec timeout if we're using a 1-mSec timebase, but I don't know if that's true in the PXI.
    Background:
    On the PXI, I'm running a 100-Hz PID loop, controlling an engine.  I measure the speed and torque, and control the speed and throttle.  Along the way, I'm measuring 200 channels of misc stuff (analog, CAN, TCP instruments) at 10 Hz and sending gobs of info to the host (200 chans * 8 = 1600 bytes every 0.1 sec)
    The host sends commands, the PXI responds.
    The message protocol is a fixed-header, variable payload type: a message is a fixed 3-byte header, consisting of a U8 OpCode, and a U16 PAYLOAD SIZE field. I flatten some structure to a string, measure its size, and prepend the header and send it as one TCP WRITE.  I receive in two TCP READs: one for the header, then I unflatten the header, read the PAYLOAD SIZE and then another read for that many more bytes.
      The payload can thus be zero bytes: a TCP READ with a byte count of zero is legal and will succeed without error.
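    For illustration only (the real code is LabVIEW; this Python sketch just makes the 3-byte header layout, and the way a U16 length field wraps past 64 KB, concrete; network byte order is an assumption here):
    import struct

    def frame(opcode: int, payload: bytes) -> bytes:
        # Header: U8 opcode + U16 payload length, then the payload itself.
        if len(payload) > 0xFFFF:
            # 65537 bytes would silently wrap to a length of 1 - the ">64k" scenario below.
            raise ValueError("payload too large for a U16 length field")
        return struct.pack(">BH", opcode, len(payload)) + payload

    def read_message(recv_exact):
        # recv_exact(n) must return exactly n bytes (looping over any partial reads).
        opcode, size = struct.unpack(">BH", recv_exact(3))
        return opcode, recv_exact(size)       # a size of zero is legal, as noted above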
    A test starts with establishing a connection, some configuration stuff, and then sampling starts. The 10-Hz data stream is shown on the host screen at 2-Hz as numeric indicators, or maybe selected channels in a chart.
    At some point the user starts RECORDING, and the 10-Hz data goes into a queue for later writing to a file. This is while the engine is being driven thru a prescribed cycle of speed/torque target points.
    The recording lasts for 20 or in some cases 40 minutes (24000 samples) and then recording stops, but sampling doesn't.  Data is still coming in and charted. The user can then do some special operations, related to calibration checks and leak checks, and those results are remembered.  Finally, they hit the DONE button, and the whole mess gets written to a file.
    All of this has worked fine for several years, but as the system is growing (more devices, more channels, more code), a problem has cropped up: the two ends are occasionally getting out of synch. 
    The test itself, and all the configuration stuff before, is working perfectly. The measurement immediately after the test is good.  At some point after that, it goes south.  The log shows the PXI sending results for operations that were not requested. The data in those results is garbage; 1.92648920e-299 and such numbers, resulting from interpreting random stuff as a DBL.
    After I write the file, the connection is broken, the next test re-establishes it, and all is well again.
    In chasing all this, I've triple-checked that all my SENDs are MEASURING the size of the payload before sending it.  Two possibilities have come up:
    1... There is a message with a payload over 64k.  If my sender were presented with a string of length 65537, it would convert that to a U16 of value 1, and the receiver would expect 1 byte. The receiver would then expect another header, but this data comes instead, and we're off the rails.
      I don't believe that's happening. Most of the messages are fewer than 20 bytes payload, the data block is 1600 or so, I see no evidence for such a thing to happen.
    2... The PXI is failing, under certain circumstances, to send the whole message given to TCP WRITE.  If it sent out a header promising 20 more bytes, but only delivered 10, then the receiver would see the header and expect 20 more. 10 would come immediately, but whatever the NEXT message was, it's header would be construed as part of the payload of the first message, and we're off the rails.
    Unfortunately, I am not checking the error return from TCP write, since it never failed in my testing here (I know, twenty lashes for me).
    It also occurs to me that I am giving it a 1-mSec timeout value, since I'm in a 100-Hz loop. Perhaps I should have separated the TCP stuff into a separate thread.  In any case, maybe I don't get a full 1000 uSec, due to clock resolution issues.
    That means that TCP WRITE cannot get the data written before the TIMEOUT expires, but it has written part of it.
    I suspect, but the logs don't prove, that the point of failure is when they hit the DONE button.  The general CPU usage on the PXI is 2-5% but at that point there are 12-15 DAQ domain managers to be shutting down, so the instantaneous CPU load is high.  If that happens to coincide with a message going out, well, maybe the problem crops up.  It doesn't happen every time.
    So I repeat the two questions:
    1...Are there circumstances in which TCP WRITE, given a string of say, 10 characters, will write more than zero and fewer than 10 characters to the connection? If so, what are those circumstances?
    2...Is it risky to use a timeout value of 1 mSec?  Further thought seems to say that I won't get a 1000 uSec timeout if we're using a 1-mSec timebase, but I don't know if that's true in the PXI.
    Thanks,
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

    There are a couple of issues at play here, and both are working together to cause your issue(s).
    1) LV RT will suspend the TCP thread when your CPU utilization goes up to 100%. When this happens, your connection to the outside world simply goes away and your communications can get pretty screwed up. (More here)
    Unless you create some form of very robust resend and timeout strategy, your only other solution would be to find a way to keep your CPU from maxing out. This may be through the use of some scheduler that limits how many processes are running at a particular time, or through other code optimization. Any way you look at it, 100% CPU = loss of TCP comms.
    2) The standard method of TCP communication shown in all examples I have seen to date uses a similar method to transfer data where a header is sent with the data payload size to follow.
    <packet 1 payload size (2 bytes)><packet 1 payload..........><packet 2 payload size (2 bytes)><packet 2 payload.......................>
    On the Rx side, the header is read, the payload size extracted, and then a TCP Read is issued with the desired size. Under normal circumstances this works very well and is a particularly efficient method of transferring data. When the TCP thread is suspended during an Rx operation, however, this header can get corrupted and pass the TCP Read a bad payload size due to a timeout on the previous read. As an example, the header read expects 20 bytes but, due to the TCP thread suspension, only gets 10 before the timeout. The TCP Read returns only those 10 bytes, leaving the other 10 bytes in the Rx buffer for the next read operation. The subsequent TCP Read now gets the first 2 bytes from the remaining data payload (10 bytes) still in the buffer. This gives you a further bad payload read size and the process continues, OR, if you happen to get a huge number back, you get an out-of-memory error when you try to allocate a gigantic TCP receive buffer.
    The issue now is that your communications are out of sync. The Rx end is not interpreting the correct bytes as the header, so this timeout or bad-payload behavior can continue for quite a long time. I have found that occasionally (although very rarely) the system will fall back into sync, but it really is a crap shoot at this point.
    A more robust way of dealing with the communication issue is to change your TCP Read to terminate on a CRLF as opposed to a number of bytes or a timeout (the TCP Read has an enum selector for switching the mode). In this instance, whenever a CRLF is seen, the TCP Read will immediately terminate and return data. If the payload is corrupted, it will fail to be parsed correctly or will encounter a checksum failure and be discarded, or a resend request will be issued. In either case, the communications link will automatically fall back into sync between the Tx and Rx sides. The one other thing that you must do is to encode your data to ensure that no CRLF characters exist in the payload. Base64 encode/decode works well. You do give up some bandwidth due to the Base64 strings being longer, but the fact that the comm link is now self-syncing is normally a worthwhile sacrifice.
    When running on any other platform other than RT, the <header><payload> method of transmitting data works fine as TCP guarantees transmission of the data, however on RT platforms due to the suspension of the TCP thread on high CPU excursions this method fails miserably.
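    Not LabVIEW, but here is a minimal sketch (Python, with an assumed CRC32 check appended to each message) of the CRLF-delimited, Base64-encoded framing described above, just to show how the link re-synchronizes by discarding any line that fails to decode or verify:
    import base64, binascii, zlib

    def frame(payload: bytes) -> bytes:
        # Append a CRC32 so a corrupted or truncated frame is detectable.
        body = payload + zlib.crc32(payload).to_bytes(4, "big")
        return base64.b64encode(body) + b"\r\n"     # CRLF never appears inside Base64

    def parse(buffer: bytes):
        # Split on CRLF; anything that fails to decode or fails the checksum is
        # dropped, and the stream is back in sync at the very next CRLF.
        messages = []
        *lines, rest = buffer.split(b"\r\n")        # 'rest' is a trailing partial frame
        for line in lines:
            try:
                body = base64.b64decode(line, validate=True)
            except binascii.Error:
                continue                            # garbage between frames: discard
            payload, crc = body[:-4], body[-4:]
            if zlib.crc32(payload).to_bytes(4, "big") == crc:
                messages.append(payload)            # corrupted frames are simply dropped
        return messages, rest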

  • Optimization bug with C++ inlining

    Hi,
    While evaluating Sun Studio 11 I have identified an optimization bug with C++ inlining.
    The bug can easily be reproduced with the small program below. The program produces
    wrong results with -xO2, because an inline access function always returns the value 0.0
    instead of the value given on the commandline:
    djerba{ru}16 : CC -o polybug  polybug.cc
    djerba{ru}17 : ./polybug 1.0
    coeff(0): 1.000000
    djerba{ru}18 : CC -o polybug -xO2 polybug.cc
    djerba{ru}19 : ./polybug 1.0
    coeff(0): 0.000000            <<<<<<<<<< wrong, should be 1.000000
    This occurs only with optimization level O2; levels below or above O2 don't
    exhibit the bug.
    Compiler version is
    Sun C++ 5.8 Patch 121017-01 2005/12/11
    on Solaris 8 / Sparc.
    I include a preliminary analysis at the end.
    Best Regards
    Dieter R.
    -------------------- polybug.cc -------------------------
    // note: this may look strange, but this is a heavily stripped down
    // version of actual working application code...
    #include <stdio.h>
    #include <stdlib.h>
    class Poly {
      public:
        // constructor initializes number of valid coefficients to zero:
        Poly() { numvalid = 0; };
        ~Poly() {};
        // returns coefficient with index j, if valid. Otherwise returns 0.0:
        double coeff(int j) {
            if (j < numvalid) {
                return coefficients[j];
            } else {
                return 0.0;
            }
        }
        // copies contents of this Object to other Poly:
        void getPoly(Poly& q) { q = *this; };
        // data members:
        // valid coefficients: 0 ... (numvalid - 1)
        double coefficients[6];
        int numvalid;
    };

    void troublefunc(Poly* pC) {
        // copies Poly-Object to local Poly, extracts coefficient
        // with index 0 and prints it. Should be the value given
        // on commandline.
        // Poly constructor, getPoly and coeff are all inline!
        if (pC) {
            Poly pol;
            pC->getPoly(pol);
            printf("coeff(0): %f\n", pol.coeff(0));
        }
    }

    int main(int argc, char* argv[]) {
        double d = atof(argv[1]);
        // creates Poly object and fills coefficient with index
        // 0 with the value given on commandline
        Poly* pC = new Poly;
        pC->coefficients[0] = d;
        pC->numvalid = 1;
        troublefunc(pC);
        return 0;
    }
    The disassembly fragment below shows that the access function coeff(0), instead of retrieving
    coefficients[0], simply returns the fixed value 0.0 (presumably because the optimizer "thinks"
    numvalid still holds the value 0 from the constructor and that the comparison
    "if (j < numvalid)" can therefore be omitted).
    Note: disassembly created from code compiled with -features=no%except for simplicity!
    00010e68 <___const_seg_900000102>:
            ...     holds the value 0.0
    00010e80 <__1cLtroublefunc6FpnEPoly__v_>:
       10e80:       90 90 00 08     orcc  %g0, %o0, %o0      if (pC) {   
       10e84:       02 40 00 14     be,pn   %icc, 10ed4
       10e88:       9c 03 bf 50     add  %sp, -176, %sp
                                                       local Poly object at %sp + 120
                                                             numvalid at %sp + 0xa8 (168)
       10e8c:       c0 23 a0 a8     clr  [ %sp + 0xa8 ]      Poly() { numvalid = 0; };
                                                             pC->getPoly(pol):
                                                             loop copies *pC to local Poly object
       10e90:       9a 03 a0 80     add  %sp, 0x80, %o5
       10e94:       96 10 20 30     mov  0x30, %o3
       10e98:       d8 5a 00 0b     ldx  [ %o0 + %o3 ], %o4
       10e9c:       96 a2 e0 08     subcc  %o3, 8, %o3
       10ea0:       16 4f ff fe     bge  %icc, 10e98
       10ea4:       d8 73 40 0b     stx  %o4, [ %o5 + %o3 ]
                                                             pol.coeff(0):
                                                             load double value 0.0 at
                                                             ___const_seg_900000102 in %f0
                                                             (and address of format string in %o0)
       10ea8:       1b 00 00 43     sethi  %hi(0x10c00), %o5
       10eac:       15 00 00 44     sethi  %hi(0x11000), %o2
       10eb0:       c1 1b 62 68     ldd  [ %o5 + 0x268 ], %f0
       10eb4:       90 02 a0 ac     add  %o2, 0xac, %o0
       10eb8:       82 10 00 0f     mov  %o7, %g1
                                                             store 0.0 in %f0 to stack and load it
                                                             from there to %o1/%o2
       10ebc:       c1 3b a0 60     std  %f0, [ %sp + 0x60 ]
       10ec0:       d2 03 a0 60     ld  [ %sp + 0x60 ], %o1
       10ec4:       d4 03 a0 64     ld  [ %sp + 0x64 ], %o2
       10ec8:       9c 03 a0 b0     add  %sp, 0xb0, %sp
                                                             call printf
       10ecc:       40 00 40 92     call  21114 <_PROCEDURE_LINKAGE_TABLE_+0x54>
       10ed0:       9e 10 00 01     mov  %g1, %o7
       10ed4:       81 c3 e0 08     retl
       10ed8:       9c 03 a0 b0     add  %sp, 0xb0, %sp
    Hmmm... This seems to stress this formatting tags thing to its limits...

    Thanks for confirming this.
    No, this happens neither in an Open Source package nor in an important product. It is an internal product, which has been continuously developed with Sun tools since 1992 (with incidents like this one being very rare).
    I am a bit concerned about this bug though, because it might indicate a weakness in the area of C++ inlining (after all, the compiler fails to correctly aggregate a sequence of three fairly simple inline functions, something which is quite common in our application). If, on the other hand, this is a singular failure caused by unique circumstances which we have hit by sheer (un)luck, it is always possible to work around it: explicitly defining an assignment operator instead of relying on the compiler-generated one is sufficient to make the bug go away.
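
    For reference, here is a minimal sketch of that workaround applied to the stripped-down Poly class above (the body of the explicit operator= shown here is an assumption - it simply copies the two members by hand - but the point is that any user-defined assignment operator replaces the compiler-generated one that participates in the miscompiled inline sequence):

    // Poly with a user-defined assignment operator as a workaround for the -xO2 bug
    class Poly {
      public:
        Poly() { numvalid = 0; };
        ~Poly() {};
        // explicit member-wise copy instead of the compiler-generated operator=:
        Poly& operator=(const Poly& other) {
            for (int i = 0; i < 6; i++) {
                coefficients[i] = other.coefficients[i];
            }
            numvalid = other.numvalid;
            return *this;
        }
        double coeff(int j) {
            if (j < numvalid) {
                return coefficients[j];
            } else {
                return 0.0;
            }
        }
        void getPoly(Poly& q) { q = *this; };
        double coefficients[6];
        int numvalid;
    };

    According to the post above, this change is sufficient to make the bug go away: pol.coeff(0) in troublefunc() again prints the value given on the commandline, even at -xO2.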

  • Mac Hard Drive Failure - How to get all info of iphone?

    The computer that I sync my iphone to had a complete system drive failure. The drive is toast, nothing recoverable. There is lots of information that I now have only on my iphone: mail, contacts, notes, pictures. I want to keep all of this information, but I haven't been able to find a way to get it off the iphone.
    If I sync to the fresh OSX install on the computer that failed, or one of my other macs, the iphone gets wiped squeaky clean. I obviously don't want this to occur. Does anyone know a way to create an iphone backup on a computer other than the one the iphone is set to sync to, or is there a way to access the iphone as a disk and copy the information?
    Any help would be appreciated.
    And yes, I have learned the "backup frequently" lesson.

    This is one reason why maintaining a backup is so important. The iPhone does not support disk mode, and is neither designed nor intended to be a backup storage device.
    If I sync to the fresh OSX install on the computer that failed, or one of my other macs, the iphone gets wiped squeaky clean.
    Not true. In regards to iTunes content, an iPhone can be synced or manually managed with an iTunes library on a single computer only, and photos can be transferred from a single computer only. When transferring iTunes content and photos from another computer, any iTunes content and photos previously transferred from a different computer will be erased from the iPhone first, but no other content on the iPhone will be touched. Contacts, calendar events, and Safari bookmarks can be synced with the supported applications on multiple computers.
    iTunes includes an option to transfer iTunes content that was purchased or downloaded from the iTunes Store from an iPod or iPhone, but this applies only to content that was purchased or downloaded from the iTunes Store.
    With your iPhone connected to iTunes on the Mac you plan on using to sync your iPhone with, first you need to authorize the Mac with your iTunes account in iTunes if you haven't already done so. Without syncing, at the iTunes menu bar go to File and select Transfer Purchases From - the name of your iPhone. If all 3rd party apps do not transfer, you can re-download a 3rd party app with iTunes on the Mac and you won't be charged again for a purchased app, as long as you use the same iTunes account to re-download the app that was used to purchase it originally.
    Transferring photos to an iPhone is a one-way process. Photos transferred to an iPhone are optimized for viewing on the iPhone as part of the iTunes sync/transfer process - the original resolution of these photos is reduced, which is why Apple doesn't support transferring these photos in the opposite direction, and why including these photos in your computer's backup is important.
    There are some 3rd party apps that support transferring these photos in the opposite direction, but even if this succeeds, only the reduced resolution can be recovered - the original resolution is gone.
    Before syncing contacts with the Address Book and calendar events with iCal on this Mac, check whether the Address Book and iCal are empty. If both are empty, enter one contact in the Address Book and one calendar event in iCal - make these up if needed; they can be deleted later. This will provide a merge prompt when syncing this data, which you want to select.
    Backup, backup, backup, backup, backup, backup.
    Did I mention maintaining a backup?

  • Mysql error java.sql.SQLException: Communication failure during handshake.

    Hi !!!
    I was working OK with Hibernate and MySQL, but yesterday I installed the new MySQL version (4.1.10) and now I receive the following error when I try to connect:
    Initializing Hibernate
    INFO - Hibernate 2.1.6
    INFO - hibernate.properties not found
    INFO - using CGLIB reflection optimizer
    INFO - configuring from resource: /hibernate.cfg.xml
    INFO - Configuration resource: /hibernate.cfg.xml
    INFO - Mapping resource: com/tutorial/hibernate/core/News.hbm.xml
    INFO - Mapping class: com.tutorial.hibernate.core.News -> news
    INFO - Configured SessionFactory: null
    INFO - processing one-to-many association mappings
    INFO - processing one-to-one association property references
    INFO - processing foreign key constraints
    INFO - Using dialect: net.sf.hibernate.dialect.MySQLDialect
    INFO - Maximim outer join fetch depth: 2
    INFO - Use outer join fetching: true
    INFO - Using Hibernate built-in connection pool (not for production use!)
    INFO - Hibernate connection pool size: 20
    INFO - using driver: org.gjt.mm.mysql.Driver at URL: jdbc:mysql://localhost/unnobanews
    INFO - connection properties: {user=news, password=news}
    INFO - Transaction strategy: net.sf.hibernate.transaction.JDBCTransactionFactory
    INFO - No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
    WARN - Could not obtain connection metadata
    java.sql.SQLException: Communication failure during handshake. Is there a server running on localhost:3306?
    at org.gjt.mm.mysql.MysqlIO.init(Unknown Source)
    at org.gjt.mm.mysql.Connection.connectionInit(Unknown Source)
    at org.gjt.mm.mysql.jdbc2.Connection.connectionInit(Unknown Source)
    at org.gjt.mm.mysql.Driver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:512)
    at java.sql.DriverManager.getConnection(DriverManager.java:140)
    Somewhere I read that it was necessary to update the JDBC driver, so I updated it from version 2 to version 3.1.7, but the error is still there.
    phpMyAdmin works OK and MySQL Control Center can connect OK too.
    But when I try telnet localhost 3306 I get a connection failed error.
    Anyway, mysql status shows me correct information - that it is working OK!
    Any idea ?
    Kind regards
    Naty
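
    A quick way to isolate whether this is a Hibernate problem or a driver/server handshake problem is a bare JDBC connection test using the same driver, URL and credentials that appear in the log above. The class and file name below are made up for illustration; everything else is taken from the posted configuration:

    // MinimalConnectTest.java - plain JDBC connect, no Hibernate involved (illustrative sketch)
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MinimalConnectTest {
        public static void main(String[] args) throws Exception {
            // same driver class and URL as in the Hibernate log
            Class.forName("org.gjt.mm.mysql.Driver");
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/unnobanews", "news", "news");
            System.out.println("connected: " + !con.isClosed());
            con.close();
        }
    }

    If this small test fails with the same handshake error, the problem lies between the driver and the MySQL 4.1 server rather than in the Hibernate configuration - for example, an old driver that does not understand the 4.1 authentication protocol, or a server that is not listening on TCP port 3306 (the failed telnet suggests checking whether MySQL was started with skip-networking or is bound to a different port).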

    Hibernate 2.1.6
    loaded properties from resource hibernate.properties: {hibernate.connection.username=root, hibernate.connection.password="", hibernate.cglib.use_reflection_optimizer=true, hibernate.connection.pool_size=10, hibernate.dialect=net.sf.hibernate.dialect.MySQLDialect, hibernate.connection.url=jdbc:mysql://manoj/manoj, hibernate.connection.driver_class=org.gjt.mm.mysql.Driver}
    using CGLIB reflection optimizer
    configuring from resource: /hibernate.cfg.xml
    Configuration resource: /hibernate.cfg.xml
    Mapping resource: com/mec/emp.hbm.xml
    Mapping class: com.mec.Employee -> emp
    Configured SessionFactory: null
    processing one-to-many association mappings
    processing one-to-one association property references
    processing foreign key constraints
    Using dialect: net.sf.hibernate.dialect.MySQLDialect
    Maximim outer join fetch depth: 2
    Use outer join fetching: true
    Using Hibernate built-in connection pool (not for production use!)
    Hibernate connection pool size: 10
    using driver: org.gjt.mm.mysql.Driver at URL: jdbc:mysql://manoj/manoj
    connection properties: {user=root, password=""}
    No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
    Could not obtain connection metadata
    java.sql.SQLException: Server configuration denies access to data source
         at org.gjt.mm.mysql.MysqlIO.init(MysqlIO.java:144)
         at org.gjt.mm.mysql.Connection.<init>(Connection.java:230)
         at org.gjt.mm.mysql.Driver.connect(Driver.java:126)
         at java.sql.DriverManager.getConnection(DriverManager.java:525)
         at java.sql.DriverManager.getConnection(DriverManager.java:140)
         at net.sf.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:101)
         at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:73)
         at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1155)
         at net.sf.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:789)
         at com.mec.CreateSession.getCurrentSession(CreateSession.java:35)
         at com.mec.TestEmployee.createEmployee(TestEmployee.java:37)
         at com.mec.TestEmployee.main(TestEmployee.java:24)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at com.intellij.rt.execution.application.AppMain.main(AppMain.java:78)
    Use scrollable result sets: false
    Use JDBC3 getGeneratedKeys(): false
    Optimize cache for minimal puts: false
    Query language substitutions: {}
    cache provider: net.sf.hibernate.cache.EhCacheProvider
    instantiating and configuring caches
    building session factory
    Not binding factory to JNDI, no JNDI name configured
    SQL Error: 0, SQLState: 08001
    Server configuration denies access to data source
    Cannot open connection
    java.sql.SQLException: Server configuration denies access to data source
         at org.gjt.mm.mysql.MysqlIO.init(MysqlIO.java:144)
         at org.gjt.mm.mysql.Connection.<init>(Connection.java:230)
         at org.gjt.mm.mysql.Driver.connect(Driver.java:126)
         at java.sql.DriverManager.getConnection(DriverManager.java:525)
         at java.sql.DriverManager.getConnection(DriverManager.java:140)
         at net.sf.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:101)
         at net.sf.hibernate.impl.BatcherImpl.openConnection(BatcherImpl.java:286)
         at net.sf.hibernate.impl.SessionImpl.connect(SessionImpl.java:3326)
         at net.sf.hibernate.impl.SessionImpl.connection(SessionImpl.java:3286)
         at net.sf.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:40)
         at net.sf.hibernate.transaction.JDBCTransactionFactory.beginTransaction(JDBCTransactionFactory.java:19)
         at net.sf.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:2231)
         at com.mec.TestEmployee.createEmployee(TestEmployee.java:38)
         at com.mec.TestEmployee.main(TestEmployee.java:24)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at com.intellij.rt.execution.application.AppMain.main(AppMain.java:78)
    Process finished with exit code 0

  • Failure Writing file...Report processing has been canceled by user

    I'm having a bit of an odd issue.  In my SCOM environment I have 8 scheduled reports that management likes to look at. 7 of them are working just fine, however one of them keeps failing with the status:
    "Failure writing file <file name> : Report processing has been canceled by the user."
    Now this report is running every Sunday at 2:30 AM and I can assure you I am not up administering reports at 2:30 am on a Sunday.
    Except for the file name, this is set up identically to one of the reports that is working fine. If I right-click and "run" the report, it generates, though it takes a few minutes, and I can save it as an Excel file to where we want it. There's nothing
    in the event log around the time it says it last ran (2:30:03 AM).
    Any ideas?

    Hi Dave,
    It could be conflicting with this built-in maintenance at 2:30am:
    There is a rule in the System Center Internal Library called "Optimize Indexes". This rule runs every night at 2:30am on the RMS and calls p_OptimizeIndexes.  Make sure any standard maintenance you perform on the OpsMgr DB does not interfere
    with this job. 
    http://blogs.technet.com/b/kevinholman/archive/2008/04/12/what-sql-maintenance-should-i-perform-on-my-opsmgr-databases.aspx
    Did you look at the SQL Reporting Services logs?
    You can also try changing the Report Execution Timeout in Report Manager.
    Natalya
