Blocking output

Hi,
I have a requirement to block sales order output if pricing is in the incompletion log.
We create sales orders through IDocs. We use outputs for the sales order and the output medium is 'External send', i.e. the output is sent to the respective partner function's e-mail ID in PDF format.
And now the issue: when we have a pricing error on one of the sales order items, the order output is still sent as a PDF without pricing on that item, so the customer receives a PDF without prices. As the order is created through an IDoc, we have no control over the sales order output. Even if I create a sales order manually without pricing and save it, the system allows me to save while pricing is in the incompletion log.
Please let me know whether there is any SAP standard solution where I can put a routine on the output type in output determination to block processing of the output if pricing is in the incompletion log.
Thanks in Advance...
Srikky

Hello,
Go to
IMG - Sales and Distribution - Basic Functions - Output Control - Output Determination - Output Determination Using the Condition Technique - Maintain Output Determination for Sales Documents - Maintain Output Determination Procedure
Here select your Output Procedure & go to Control Data (Details).
Here, against your Output Type (output condition type), assign requirement 20 - Order (Hdr) incompl. or requirement 21 - Order (Itm) incompl.
I believe requirement 21 will be the best fit in your case.
Hope this helps,
Thanks,
Jignesh Mehta
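
If the standard requirements 20/21 do not cover the pricing-specific incompletion status in your case, a custom VOFM output requirement can be created and assigned in the same place. Below is a minimal sketch only, following the usual VOFM naming pattern (KOBED_nnn / KOBEV_nnn in include RV61Bnnn); the routine number 901, the field KOMKBV1-UVPRS and the status value 'C' are assumptions to verify in your system:

* Minimal sketch of a custom VOFM output requirement (e.g. routine 901,
* include RV61B901), assigned in the Requirement column of the output
* determination procedure instead of 20/21.
* KOMKBV1-UVPRS and the status value 'C' are assumptions - check which
* incompletion status fields the output communication structure carries
* in your system before using this.
FORM kobed_901.
  sy-subrc = 0.
* Suppress the output while pricing is not completely processed.
  IF komkbv1-uvprs IS NOT INITIAL AND komkbv1-uvprs NE 'C'.
    sy-subrc = 4.
  ENDIF.
ENDFORM.

* Pre-check form of the same requirement; keep it consistent with KOBED.
FORM kobev_901.
  sy-subrc = 0.
  IF komkbv1-uvprs IS NOT INITIAL AND komkbv1-uvprs NE 'C'.
    sy-subrc = 4.
  ENDIF.
ENDFORM.

After activating the routine in VOFM, enter its number against the output type in the same Control Data screen mentioned above; the output is then only processed when the check passes.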

Similar Messages

  • Need to download the ALV block output into an Excel sheet

    I have a requirement to download ALV block output to an Excel sheet. When I click the Excel button on the output, only the header block data comes into the Excel sheet, but I need both the header block and the item block data in Excel.

    Hi,
    Create a pushbutton on the toolbar. Whenever the user clicks the pushbutton, call the function module GUI_DOWNLOAD twice:
    for the first call, pass append = ' ',
    and for the second call, pass append = 'X'.
    Check this thread:
    Re: How to download multiple ALV Container data on a screen to a single Excel?
    Regards,
    R K.
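
    A minimal sketch of the two GUI_DOWNLOAD calls, assuming the header and item blocks have already been copied into two internal tables (lt_header and lt_items are placeholders for your own tables, and the file path is only an example):

      DATA: lv_file   TYPE string VALUE 'C:\temp\alv_blocks.xls',
            lt_header TYPE TABLE OF string,   "flattened lines of the header block
            lt_items  TYPE TABLE OF string.   "flattened lines of the item block

      * First call creates the file with the header block.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          filename = lv_file
          filetype = 'ASC'
          append   = ' '
        TABLES
          data_tab = lt_header
        EXCEPTIONS
          OTHERS   = 1.

      * Second call appends the item block to the same file.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          filename = lv_file
          filetype = 'ASC'
          append   = 'X'
        TABLES
          data_tab = lt_items
        EXCEPTIONS
          OTHERS   = 1.

    If the blocks are structured internal tables rather than ready-made text lines, filetype 'DAT' together with write_field_separator = 'X' produces tab-delimited output that Excel splits into columns.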

  • Blocking Output types from Triggering.

    The client has a requirement that if the overall credit status of a delivery is 'B' (blocked), then all output types, automatic or manual, associated with the delivery should be blocked.
    I added the following lines of code to the requirement routines for all output types associated with deliveries:
      IF   KOMKBV2-CMGST CA 'B'.
        SY-SUBRC = 4.
        EXIT.
      ENDIF.
    But we are still able to attach output types to deliveries whose credit status is blocked. Any pointers?

    Hi Saket,
    This can be the issue you are facing:
    you are exiting from the current event with the 'EXIT' statement, but the validation is not applied in the subsequent event.
    In the subsequent event (possibly 'END-OF-SELECTION'), insert the following statement as the first statement:
    CHECK NOT KOMKBV2-CMGST EQ 'B'.
    Hope this helps.
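
    In addition to the hint above: for SD output requirements maintained via VOFM there are two FORM routines per requirement, and the check usually has to sit in both so that the output is suppressed both at determination time and at the pre-check. A minimal sketch, reusing the KOMKBV2-CMGST condition from the post above (routine number 902 is only an example):

      * Custom output requirement for deliveries, e.g. VOFM routine 902.
      FORM kobed_902.
        sy-subrc = 0.
        IF komkbv2-cmgst CA 'B'.   "overall credit status: blocked
          sy-subrc = 4.            "requirement not fulfilled -> no output
        ENDIF.
      ENDFORM.

      * Pre-check form of the same requirement - repeat the condition here,
      * otherwise the output can still be proposed when it is redetermined.
      FORM kobev_902.
        sy-subrc = 0.
        IF komkbv2-cmgst CA 'B'.
          sy-subrc = 4.
        ENDIF.
      ENDFORM.

    Also make sure the routine is activated in VOFM and actually entered against every relevant output type in the determination procedure; a routine that is only saved but not activated and assigned has no effect.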

  • Block Output Type in MB02

    Hi,
    In material document maintenance (MB02) I have movement types 503 & 505 and output type ZGR2 for generating an IDoc.
    My requirement is:
      503 - output type ZGR2 can be entered on the material document output screen (via the Messages button) and, on saving, the IDoc is created.
      505 - even if I enter output type ZGR2, no IDoc should be generated on saving.
    I analysed the user exits and I think it can't be done there.
    Request your advice. Many thanks.
    Regards,
    Anbalagan

    Hi,
    My query is that the output type which creates the IDoc should not be determined automatically when the material document is created for a goods movement with movement type 505.
    Regards
    Bapu
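
    One option sometimes considered for this kind of requirement, instead of a user exit, is a requirement routine on the output type, so that ZGR2 is simply not processed for movement type 505. A hypothetical sketch only (the routine number 903, the communication structure name KOMKBME and its field BWART are placeholders; check what the goods movement message determination actually passes to requirement routines in your release):

      * Hypothetical sketch - structure and field names are placeholders.
      FORM kobed_903.
        sy-subrc = 0.
        IF komkbme-bwart = '505'.   "movement type 505: suppress output ZGR2
          sy-subrc = 4.
        ENDIF.
      ENDFORM.

      FORM kobev_903.
        sy-subrc = 0.
        IF komkbme-bwart = '505'.
          sy-subrc = 4.
        ENDIF.
      ENDFORM.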

  • How to export an anonymous block output into an Excel sheet.

    Dear all,
    I want to get the dbms_output.put_line output directly into an Excel sheet.
    Please guide.
    thanks & regards
    Munish

    Hi Munish,
    You can't produce a file that is directly readable by Excel with dbms_output.put_line.
    You can check this post, Re: Saving SQL+ output to a text file, where I have posted some comments.
    Also this one: Re: Extract data from Oracle in excel file
    Regards.
    Alberto.

  • Purchase Order output to be blocked in ECC when the PO is technically Incomplete

    Hi All,
    First of all, I would like to say that I have searched the entire forum for this issue and, as I could not find any thread relating to it, I am posting.
    We have implemented SAP GTS, and whenever a purchase order is created in the ECC system and the business partner or legal unit is missing in GTS, the document is created as technically incomplete in GTS. In spite of it being technically incomplete, the ECC users are able to print the PO. We have implemented an SAP note for blocking the purchase order output when the customs import document is blocked in SAP GTS.
    Could anyone let me know if there is any OSS note for blocking technically incomplete purchase orders, or is there any workaround to prevent the output from being printed?
    Regards
    Aravind G

    Hi,
    We have a requirement to block PO output as well so we implemented Note 900555. But how does it work? We do not see anything anywhere on the PO that would prevent output. We have POs that are blocked in GTS and nothing is any different on the PO. The note doesn't really explain how it works either. We assumed that the Output logs would also show a message of some sort.
    In addition, when this note references blocking "output", is it only meant for blocking print, or can it block electronic transmission as well?
    If we can't get this note to work, we are thinking of adding GTS to the PO Release Strategy, in order to block transmission of anything to the supplier.
    Thanks,
    Jessica

  • FTP Get File List Action Block, It's double listing files!  ver 11.5

    Hi guys, I have a good one! I have an FTP Get File List action block in my BLS transaction. Occasionally, it double-lists the files in its output. For testing I put in a repeater with a LogEvent output where I log the filename, date, and size. Here's what I saw in my action block output:
    2009-02-13 00:38:00,963  [UserEvent] : File Name: DMM_Export_0010056.txt, File Date 2009-02-13T00:36:00, File Size 339
    2009-02-13 00:38:00,963  [UserEvent] : File Name: DMM_Export_0010056.txt, File Date 2009-02-13T00:36:00, File Size 339
    This is xMII version 11.5.6 b73 with Java 1.4.2_07.
    I have a workaround by putting a Distinct action block after the file list, but does anybody have an idea why this might happen? My theory is that something might be occurring if the file is being written to while we try to process it, but I am not sure.
    I've been building BLS parsers since 2003 (remember those fun times with Jeremy?) and I've never seen this happen.

    My example is a sample log file before the Distinct action. The general log shows nothing other than the subsequent transaction errors I get as a result of running the same data twice (Tcode returns from BAPI calls etc.).
    Here is something else interesting: my user log file is acting strangely, as if it is trying to write on top of itself. Could it be that the transaction, or parts of it, is actually running twice?
    For example look at the following log entries
    This is how my log file entry for a production confirmation should look
    2009-02-13 00:38:06,854 [LHScheduler-Int10_NestingWOProdConf] INFO   UserLog - [UserEvent] :
    However, sometimes it looks like this...
    2009-02-13 2009-02-13 00:38:11,854 [LHScheduler-Int10_NestingWOProdConf] INFO   UserLog - [UserEvent] :
    Like it started writing to the log, then started again.
    The problem we are having is that this transaction makes JCo calls to SAP that do goods movements, and we get locking/blocking errors back from SAP saying that we (our SAP account) are already updating the information. Sometimes the information is even posted twice! You can see how this has become a HUGE issue, posting data to a LIVE system twice.
    This is happening on 2 xMII servers.

  • Sales Order Blocked for MRP

    Dear All,
    I have a make-to-order scenario where the sales order, once created, should be blocked for MRP by default, and only an authorized person should be able to release the block.
    Please suggest how to do this.
    Regards

    Hi Sugunan,
    You can use default delivery blocks for your requirement.
    IMG - Sales and Distribution - Basic Functions - Availability Check and Transfer of Requirements - Transfer of Requirements - Block Quantity Confirmation In Delivery Blocks - "Deliveries: Blocking Reasons/Criteria".
    Here you have the option of creating new entries. For a particular delivery block you have multiple controls as under:
    Sales order block - Indicates whether the system automatically blocks sales documents for customers who are blocked for delivery in the customer master.
    Confirmation block - with this you can, in addition to blocking the delivery, also block the confirmation of order quantities after the availability check during sales order processing. MRP won't be affected, because no requirements are transferred to MRP even when the sales order is saved.
    Printing block - Indicates whether the system automatically blocks output for sales documents that are blocked for delivery.
    Delivery due list block - The system does not take a sales document with this block applied into account when creating a delivery due list. You can only create deliveries manually for sales documents with this type of block.
    Picking block - blocks the picking of goods in the delivery.
    Goods issue block - Will not allow goods issue if the block is active.
    1. I would suggest creating a new delivery block with a suitable description and ticking only "Conf."
    2. Go to VOV8, select the document type and assign the delivery block.
    Now whenever you create a sales order with that document type, the system will propose the delivery block by default for all customers. If you check the schedule lines at item level, you will see that the system still carries out the availability check. When you save the sales order, the block takes effect and the system does not transfer the requirements to MRP. Even if you run MRP using MD50, the system will not generate a planned order.
    If you assign the reason for rejection to the sales order item, then the system will show the status of that item as complete. If there is only 1 item in the sales order, then the system will change the status of the whole sales order as complete which is not recommended.
    With best regards,
    Allabaqsh G. Patil.

  • Regarding Parallel Dynamic Block

    Hi,
    I tried to use a PDB (parallel dynamic block), but it is behaving unexpectedly: I assigned the action to 3 users and the task appears in all 3 users' UWL, but as soon as one of the users starts working on it, the task disappears from the other users' UWL, or appears locked or terminated. I do not understand why this is happening. Can you please help me solve this issue?
    My process structure is as below:
    Process
    Init Block (Sequential block)
    Input action
    Block (PDB)
    output block (parallel block)
    output action
    Is there any alternative way to do this if the PDB will not solve the issue?
    Is it possible to create actions dynamically in a block, supposing the block is created statically as a parallel block?
    Mahesh, please help me if you have any idea or information about this.
    thanks in advance,
    Sajith

    Hi Dipankar,
    Thanks for your response. But that form is not going in parallel to the same users, and I have another problem if the form goes to multiple users:
    the form has two states, Approved and Rejected. Even if one of the users rejects it, the process must continue and not be terminated.
    If you have any idea, or if you have worked on the same scenario, please send a document.
    Thanks,
    Santhosh

  • How to get the output in a directory

    HI friends,
    I divided an image into 8 x 8 blocks, but I want the blocks (the output) written to a folder.
    Can anyone help me with the above problem?
    Thanks in advance.

    So you have BufferedImage objects of 8*8 pixels then?
    In that case simply use ImageIO.write() to write the images to a directory of your choice.

  • Sql tuning help

    Hi,
    I need some help tuning this SQL. We run a third-party application and I have to ask the third party for any changes. I have pasted the session statistics from the run of this SQL.
    SELECT DECODE( RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' '),
                   RPAD(NVL(:zipout1,' '),4,' ') || RPAD(NVL(:zipin1,' '),3,' '), '0001',
                   RPAD(NVL(:zipout2,' '),4,' ') || RPAD(SUBSTR(NVL(:zipin2,' '),0,1),3,' '), '0002',
                   RPAD(NVL(:zipout3,' '),7,' '), '0003',
                   RPAD('ZZ999',7,' '), '0004' ) AS CHECKER
    FROM NWKPCDREC
    WHERE NWKPCDNETWORKID = :netid
      AND NWKPCDSORTPOINT1TYPE != 'XXXXXXXX'
      AND ( (RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' ')
               = RPAD(NVL(:zipout4,' '),4,' ') || RPAD(NVL(:zipin3,' '),3,' '))
         OR (RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' ')
               = RPAD(NVL(:zipout5,' '),4,' ') || RPAD(SUBSTR(NVL(:zipin4,' '),0,1),3,' '))
         OR (RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' ')
               = RPAD(NVL(:zipout6,' '),7,' '))
         OR (RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' ')
               = RPAD('ZZ999',7,' ')) )
    ORDER BY CHECKER
    Session Statistics 09 October 2007 22:44:56 GMT+00:00
    Report Target : PRD1 (Database)
    Session Statistics
    (Chart form was tabular, see data table below)
    SID Name Value Class
    37 write clones created in foreground 0 Cache
    37 write clones created in background 0 Cache
    37 user rollbacks 16 User
    37 user commits 8674 User
    37 user calls 302838 User
    37 transaction tables consistent reads - undo records applied 0 Debug
    37 transaction tables consistent read rollbacks 0 Debug
    37 transaction rollbacks 9 Debug
    37 transaction lock foreground wait time 0 Debug
    37 transaction lock foreground requests 0 Debug
    37 transaction lock background gets 0 Debug
    37 transaction lock background get time 0 Debug
    37 total file opens 12 Cache
    37 table scans (short tables) 8062 SQL
    37 table scans (rowid ranges) 0 SQL
    37 table scans (long tables) 89 SQL
    37 table scans (direct read) 0 SQL
    37 table scans (cache partitions) 2 SQL
    37 table scan rows gotten 487042810 SQL
    37 table scan blocks gotten 7327924 SQL
    37 table fetch continued row 17 SQL
    37 table fetch by rowid 26130550 SQL
    37 switch current to new buffer 6400 Cache
    37 summed dirty queue length 0 Cache
    37 sorts (rows) 138607 SQL
    37 sorts (memory) 13418 SQL
    37 sorts (disk) 0 SQL
    37 session uga memory max 5176776 User
    37 session uga memory 81136 User
    37 session stored procedure space 0 User
    37 session pga memory max 5559884 User
    37 session pga memory 5559884 User
    37 session logical reads 115050107 User
    37 session cursor cache hits 0 SQL
    37 session cursor cache count 0 SQL
    37 session connect time 1191953042 User
    37 serializable aborts 0 User
    37 rows fetched via callback 1295545 SQL
    37 rollbacks only - consistent read gets 0 Debug
    37 rollback changes - undo records applied 114 Debug
    37 remote instance undo header writes 0 Global Cache
    37 remote instance undo block writes 0 Global Cache
    37 redo writes 0 Redo
    37 redo writer latching time 0 Redo
    37 redo write time 0 Redo
    37 redo wastage 0 Redo
    37 redo synch writes 8683 Cache
    37 redo synch time 722 Cache
    37 redo size 25463692 Redo
    37 redo ordering marks 0 Redo
    37 redo log switch interrupts 0 Redo
    37 redo log space wait time 0 Redo
    37 redo log space requests 1 Redo
    37 redo entries 81930 Redo
    37 redo buffer allocation retries 1 Redo
    37 redo blocks written 0 Redo
    37 recursive cpu usage 101 User
    37 recursive calls 84413 User
    37 recovery blocks read 0 Cache
    37 recovery array reads 0 Cache
    37 recovery array read time 0 Cache
    37 queries parallelized 0 Parallel Server
    37 process last non-idle time 1191953042 Debug
    37 prefetched blocks aged out before use 0 Cache
    37 prefetched blocks 1436767 Cache
    37 pinned buffers inspected 89 Cache
    37 physical writes non checkpoint 3507 Cache
    37 physical writes direct (lob) 0 Cache
    37 physical writes direct 3507 Cache
    37 physical writes 3507 Cache
    37 physical reads direct (lob) 0 Cache
    37 physical reads direct 2499 Cache
    37 physical reads 1591668 Cache
    37 parse time elapsed 336 SQL
    37 parse time cpu 315 SQL
    37 parse count (total) 28651 SQL
    37 parse count (hard) 1178 SQL
    37 opens requiring cache replacement 0 Cache
    37 opens of replaced files 0 Cache
    37 opened cursors current 51 User
    37 opened cursors cumulative 28651 User
    37 no work - consistent read gets 59086317 Debug
    37 no buffer to keep pinned count 0 Other
    37 next scns gotten without going to DLM 0 Parallel Server
    37 native hash arithmetic fail 0 SQL
    37 native hash arithmetic execute 0 SQL
    37 messages sent 9730 Debug
    37 messages received 0 Debug
    37 logons current 1 User
    37 logons cumulative 1 User
    37 leaf node splits 111 Debug
    37 kcmgss waited for batching 0 Parallel Server
    37 kcmgss read scn without going to DLM 0 Parallel Server
    37 kcmccs called get current scn 0 Parallel Server
    37 instance recovery database freeze count 0 Parallel Server
    37 index fast full scans (rowid ranges) 0 SQL
    37 index fast full scans (full) 210 SQL
    37 index fast full scans (direct read) 0 SQL
    37 immediate (CURRENT) block cleanout applications 4064 Debug
    37 immediate (CR) block cleanout applications 83 Debug
    37 hot buffers moved to head of LRU 20004 Cache
    37 global lock sync gets 0 Parallel Server
    37 global lock sync converts 0 Parallel Server
    37 global lock releases 0 Parallel Server
    37 global lock get time 0 Parallel Server
    37 global lock convert time 0 Parallel Server
    37 global lock async gets 0 Parallel Server
    37 global lock async converts 0 Parallel Server
    37 global cache read buffer lock timeouts 0 Global Cache
    37 global cache read buffer blocks served 0 Global Cache
    37 global cache read buffer blocks received 0 Global Cache
    37 global cache read buffer block timeouts 0 Global Cache
    37 global cache read buffer block send time 0 Global Cache
    37 global cache read buffer block receive time 0 Global Cache
    37 global cache read buffer block build time 0 Global Cache
    37 global cache prepare failures 0 Global Cache
    37 global cache gets 0 Global Cache
    37 global cache get time 0 Global Cache
    37 global cache freelist waits 0 Global Cache
    37 global cache defers 0 Global Cache
    37 global cache cr timeouts 0 Global Cache
    37 global cache cr requests blocked 0 Global Cache
    37 global cache cr blocks served 0 Global Cache
    37 global cache cr blocks received 0 Global Cache
    37 global cache cr block send time 0 Global Cache
    37 global cache cr block receive time 0 Global Cache
    37 global cache cr block flush time 0 Global Cache
    37 global cache cr block build time 0 Global Cache
    37 global cache converts 0 Global Cache
    37 global cache convert timeouts 0 Global Cache
    37 global cache convert time 0 Global Cache
    37 global cache blocks corrupt 0 Global Cache
    37 free buffer requested 1597281 Cache
    37 free buffer inspected 659 Cache
    37 execute count 128826 SQL
    37 exchange deadlocks 1 Cache
    37 enqueue waits 0 Enqueue
    37 enqueue timeouts 0 Enqueue
    37 enqueue requests 23715 Enqueue
    37 enqueue releases 23715 Enqueue
    37 enqueue deadlocks 0 Enqueue
    37 enqueue conversions 0 Enqueue
    37 dirty buffers inspected 437 Cache
    37 deferred (CURRENT) block cleanout applications 21937 Debug
    37 db block gets 230801 Cache
    37 db block changes 160407 Cache
    37 data blocks consistent reads - undo records applied 460 Debug
    37 cursor authentications 488 Debug
    37 current blocks converted for CR 0 Cache
    37 consistent gets 114819307 Cache
    37 consistent changes 460 Cache
    37 commit cleanouts successfully completed 37201 Cache
    37 commit cleanouts 37210 Cache
    37 commit cleanout failures: write disabled 0 Cache
    37 commit cleanout failures: hot backup in progress 0 Cache
    37 commit cleanout failures: cannot pin 0 Cache
    37 commit cleanout failures: callback failure 3 Cache
    37 commit cleanout failures: buffer being written 0 Cache
    37 commit cleanout failures: block lost 6 Cache
    37 cold recycle reads 0 Cache
    37 cluster key scans 17 SQL
    37 cluster key scan block gets 36 SQL
    37 cleanouts only - consistent read gets 83 Debug
    37 cleanouts and rollbacks - consistent read gets 0 Debug
    37 change write time 108 Cache
    37 calls to kcmgrs 0 Debug
    37 calls to kcmgcs 391 Debug
    37 calls to kcmgas 8816 Debug
    37 calls to get snapshot scn: kcmgss 171453 Parallel Server
    37 bytes sent via SQL*Net to dblink 0 User
    37 bytes sent via SQL*Net to client 25363874 User
    37 bytes received via SQL*Net from dblink 0 User
    37 bytes received via SQL*Net from client 29829542 User
    37 buffer is pinned count 540816 Other
    37 buffer is not pinned count 86108905 Other
    37 branch node splits 6 Debug
    37 background timeouts 0 Debug
    37 background checkpoints started 0 Cache
    37 background checkpoints completed 0 Cache
    37 Unnecesary process cleanup for SCN batching 0 Parallel Server
    37 SQL*Net roundtrips to/from dblink 0 User
    37 SQL*Net roundtrips to/from client 302837 User
    37 Parallel operations not downgraded 0 Parallel Server
    37 Parallel operations downgraded to serial 0 Parallel Server
    37 Parallel operations downgraded 75 to 99 pct 0 Parallel Server
    37 Parallel operations downgraded 50 to 75 pct 0 Parallel Server
    37 Parallel operations downgraded 25 to 50 pct 0 Parallel Server
    37 Parallel operations downgraded 1 to 25 pct 0 Parallel Server
    37 PX remote messages sent 0 Parallel Server
    37 PX remote messages recv'd 0 Parallel Server
    37 PX local messages sent 0 Parallel Server
    37 PX local messages recv'd 0 Parallel Server
    37 OS Voluntary context switches 0 OS
    37 OS User time used 0 OS
    37 OS System time used 0 OS
    37 OS Swaps 0 OS
    37 OS Socket messages sent 0 OS
    37 OS Socket messages received 0 OS
    37 OS Signals received 0 OS
    37 OS Page reclaims 0 OS
    37 OS Page faults 0 OS
    37 OS Maximum resident set size 0 OS
    37 OS Involuntary context switches 0 OS
    37 OS Integral unshared stack size 0 OS
    37 OS Integral unshared data size 0 OS
    37 OS Integral shared text size 0 OS
    37 OS Block output operations 0 OS
    37 OS Block input operations 0 OS
    37 DML statements parallelized 0 Parallel Server
    37 DFO trees parallelized 0 Parallel Server
    37 DDL statements parallelized 0 Parallel Server
    37 DBWR undo block writes 0 Cache
    37 DBWR transaction table writes 0 Cache
    37 DBWR summed scan depth 0 Cache
    37 DBWR revisited being-written buffer 0 Cache
    37 DBWR make free requests 0 Cache
    37 DBWR lru scans 0 Cache
    37 DBWR free buffers found 0 Cache
    37 DBWR cross instance writes 0 Global Cache
    37 DBWR checkpoints 0 Cache
    37 DBWR checkpoint buffers written 0 Cache
    37 DBWR buffers scanned 0 Cache
    37 Commit SCN cached 0 Debug
    37 Cached Commit SCN referenced 1 Debug
    37 CR blocks created 203 Cache
    37 CPU used when call started 280528 Debug
    37 CPU used by this session 280528 User
    Regards
    Raj

    Thank you everybody for helping me out while tuning the query. I have managed to bring the run time down from 60 minutes to 12 minutes.
    I am posting the existing query, the existing database objects' DDL, and the new query and new DDL to share my learning. This is my first use of the forum; senior members, please let me know if I shouldn't have put all this here.
    Original query and DDL:
    SELECT decode(rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(a.nwkpcdinwdpostcode, ' '), 3, ' '),
                  rpad(nvl(:zipout1, ' '), 4, ' ') || rpad(nvl(:zipin1, ' '), 3, ' '), '0001',
                  rpad(nvl(:zipout2, ' '), 4, ' ') || rpad(substr(nvl(:zipin2, ' '), 0, 1), 3, ' '), '0002',
                  rpad(nvl(:zipout3, ' '), 7, ' '), '0003',
                  rpad('ZZ999', 7, ' '), '0004') AS checker,
           a.nwkpcdbarcode1to7 nwkpcdbarcode1to7,
           a.nwkpcdbarcode15 nwkpcdbarcode15,
           a.nwkpcdbarcodeseqkey nwkpcdbarcodeseqkey,
           a.nwkpcdsortpoint1code nwkpcdsortpoint1code,
           a.nwkpcdsortpoint1type nwkpcdsortpoint1type,
           a.nwkpcdsortpoint1name nwkpcdsortpoint1name,
           a.nwkpcdsortpoint1extra nwkpcdsortpoint1extra,
           a.nwkpcdsortpoint2type nwkpcdsortpoint2type,
           a.nwkpcdsortpoint2name nwkpcdsortpoint2name,
           a.nwkpcdsortpoint3type nwkpcdsortpoint3type,
           a.nwkpcdsortpoint3name nwkpcdsortpoint3name,
           a.nwkpcdsortpoint4type nwkpcdsortpoint4type,
           a.nwkpcdsortpoint4name nwkpcdsortpoint4name,
           b.nwkprfnetworksequence nwkprfnetworksequence,
           b.nwkprfnetworkid nwkprfnetworkid,
           b.nwkprfnetworkname nwkprfnetworkname,
           b.nwkprfminweight / 100 AS nwkprfminweight,
           b.nwkprfmaxweight / 100 AS nwkprfmaxweight,
           b.nwkprfminlengthgirth nwkprfminlengthgirth,
           b.nwkprfmaxlengthgirth nwkprfmaxlengthgirth,
           b.nwkprfminlength nwkprfminlength,
           b.nwkprfmaxlength nwkprfmaxlength,
           b.nwkprfparceltypecode nwkprfparceltypecode,
           b.nwkprfparceltypename nwkprfparceltypename
    FROM wh1.nwkpcdrec a, wh1.nwkprefrec b
    WHERE a.nwkpcdnetworkid = b.nwkprfnetworkid
      AND a.nwkpcdsortpoint1type != 'XXXXXXXX'
      AND (   rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(a.nwkpcdinwdpostcode, ' '), 3, ' ')
                = rpad(nvl(:zipout4, ' '), 4, ' ') || rpad(nvl(:zipin3, ' '), 3, ' ')
           OR rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(a.nwkpcdinwdpostcode, ' '), 3, ' ')
                = rpad(nvl(:zipout5, ' '), 4, ' ') || rpad(substr(nvl(:zipin4, ' '), 0, 1), 3, ' ')
           OR rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(a.nwkpcdinwdpostcode, ' '), 3, ' ')
                = rpad(nvl(:zipout6, ' '), 7, ' ')
           OR rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(a.nwkpcdinwdpostcode, ' '), 3, ' ')
                = rpad('ZZ999', 7, ' '))
      AND :weight1 >= b.nwkprfminweight
      AND :weight2 <= b.nwkprfmaxweight
      AND b.nwkprfminlengthgirth <= 60
      AND b.nwkprfmaxlengthgirth >= 60
      AND b.nwkprfminlength <= 15
      AND b.nwkprfmaxlength >= 15
    ORDER BY b.nwkprfnetworkid, checker
    CREATE TABLE "WH1"."NWKPCDREC" ("NWKPCDFILECODE" VARCHAR2(2),
    "NWKPCDRECORDTYPE" VARCHAR2(4), "NWKPCDNETWORKID" VARCHAR2(2),
    "NWKPCDOUTWDPOSTCODE" VARCHAR2(4), "NWKPCDINWDPOSTCODE"
    VARCHAR2(3), "NWKPCDSORTPOINT1CODE" VARCHAR2(2),
    "NWKPCDSORTPOINT1TYPE" VARCHAR2(8), "NWKPCDSORTPOINT1NAME"
    VARCHAR2(16), "NWKPCDSORTPOINT1EXTRA" VARCHAR2(16),
    "NWKPCDSORTPOINT2TYPE" VARCHAR2(8), "NWKPCDSORTPOINT2NAME"
    VARCHAR2(8), "NWKPCDSORTPOINT3TYPE" VARCHAR2(8),
    "NWKPCDSORTPOINT3NAME" VARCHAR2(8), "NWKPCDSORTPOINT4TYPE"
    VARCHAR2(8), "NWKPCDSORTPOINT4NAME" VARCHAR2(8), "NWKPCDPPI"
    VARCHAR2(8), "NWKPCDBARCODE1TO7" VARCHAR2(7),
    "NWKPCDBARCODE15" VARCHAR2(1), "NWKPCDBARCODESEQKEY"
    VARCHAR2(7), "NWKPCDFILLER1" VARCHAR2(7), "NWKPCDFILLER2"
    VARCHAR2(30),
    CONSTRAINT "UK_NWKPCDREC" UNIQUE("NWKPCDNETWORKID",
    "NWKPCDOUTWDPOSTCODE", "NWKPCDINWDPOSTCODE")
    USING INDEX
    TABLESPACE "WH1_INDEX"
    STORAGE ( INITIAL 64K NEXT 0K MINEXTENTS 1 MAXEXTENTS
    2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
    PCTFREE 10 INITRANS 2 MAXTRANS 255)
    TABLESPACE "WH1_DATA_LARGE" PCTFREE 10 PCTUSED 40 INITRANS 1
    MAXTRANS 255
    STORAGE ( INITIAL 4096K NEXT 4096K MINEXTENTS 1 MAXEXTENTS
    2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
    NOLOGGING
    Modified script (new table, data copy, index, statistics, and query):
    CREATE TABLE "WH1"."NWKPCEREC_OLD" ("NWKPCDFILECODE" VARCHAR2(2),
    "NWKPCDRECORDTYPE" VARCHAR2(4), "NWKPCDNETWORKID" VARCHAR2(2),
    "NWKPCDOUTWDPOSTCODE" VARCHAR2(4), "NWKPCDINWDPOSTCODE"
    VARCHAR2(3), "NWKPCDSORTPOINT1CODE" VARCHAR2(2),
    "NWKPCDSORTPOINT1TYPE" VARCHAR2(8), "NWKPCDSORTPOINT1NAME"
    VARCHAR2(16), "NWKPCDSORTPOINT1EXTRA" VARCHAR2(16),
    "NWKPCDSORTPOINT2TYPE" VARCHAR2(8), "NWKPCDSORTPOINT2NAME"
    VARCHAR2(8), "NWKPCDSORTPOINT3TYPE" VARCHAR2(8),
    "NWKPCDSORTPOINT3NAME" VARCHAR2(8), "NWKPCDSORTPOINT4TYPE"
    VARCHAR2(8), "NWKPCDSORTPOINT4NAME" VARCHAR2(8), "NWKPCDPPI"
    VARCHAR2(8), "NWKPCDBARCODE1TO7" VARCHAR2(7),
    "NWKPCDBARCODE15" VARCHAR2(1), "NWKPCDBARCODESEQKEY"
    VARCHAR2(7), "NWKPCDFILLER1" VARCHAR2(7), "NWKPCDFILLER2"
    VARCHAR2(30))
    TABLESPACE "WH1_DATA_LARGE" PCTFREE 10 PCTUSED 40 INITRANS 1
    MAXTRANS 255
    STORAGE ( INITIAL 4096K NEXT 4096K MINEXTENTS 1 MAXEXTENTS
    2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
    NOLOGGING
    insert into wh1.nwkpcdrec_old select * from wh1.nwkpcdrec;
    drop table wh1.nwkpcdrec;
    CREATE TABLE "WH1"."NWKPCDREC" ("NWKPCDFILECODE" VARCHAR2(2),
    "NWKPCDRECORDTYPE" VARCHAR2(4), "NWKPCDNETWORKID" VARCHAR2(2),
    "NWKPCDOUTINWDPOSTCODE" VARCHAR2(7) NOT NULL,
    "NWKPCDOUTWDPOSTCODE" VARCHAR2(4), "NWKPCDINWDPOSTCODE"
    VARCHAR2(3), "NWKPCDSORTPOINT1CODE" VARCHAR2(2),
    "NWKPCDSORTPOINT1TYPE" VARCHAR2(8), "NWKPCDSORTPOINT1NAME"
    VARCHAR2(16), "NWKPCDSORTPOINT1EXTRA" VARCHAR2(16),
    "NWKPCDSORTPOINT2TYPE" VARCHAR2(8), "NWKPCDSORTPOINT2NAME"
    VARCHAR2(8), "NWKPCDSORTPOINT3TYPE" VARCHAR2(8),
    "NWKPCDSORTPOINT3NAME" VARCHAR2(8), "NWKPCDSORTPOINT4TYPE"
    VARCHAR2(8), "NWKPCDSORTPOINT4NAME" VARCHAR2(8), "NWKPCDPPI"
    VARCHAR2(8), "NWKPCDBARCODE1TO7" VARCHAR2(7),
    "NWKPCDBARCODE15" VARCHAR2(1), "NWKPCDBARCODESEQKEY"
    VARCHAR2(7), "NWKPCDFILLER1" VARCHAR2(7), "NWKPCDFILLER2"
    VARCHAR2(30))
    TABLESPACE "WH1_DATA_LARGE" PCTFREE 10 PCTUSED 40 INITRANS 1
    MAXTRANS 255
    STORAGE ( INITIAL 4096K NEXT 4096K MINEXTENTS 1 MAXEXTENTS
    2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
    NOLOGGING
    INSERT INTO WH1.NWKPCDREC SELECT
                   NWKPCDFILECODE,
                   NWKPCDRECORDTYPE,
                   NWKPCDNETWORKID,
                   rpad(nvl(nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(nwkpcdinwdpostcode, ' '), 3, ' '),
                   nwkpcdoutwdpostcode,
                   nwkpcdinwdpostcode,
                   NWKPCDSORTPOINT1CODE,
                   NWKPCDSORTPOINT1TYPE,
                   NWKPCDSORTPOINT1NAME,
                   NWKPCDSORTPOINT1EXTRA,
                   NWKPCDSORTPOINT2TYPE,
                   NWKPCDSORTPOINT2NAME,
                   NWKPCDSORTPOINT3TYPE,
                   NWKPCDSORTPOINT3NAME,
                   NWKPCDSORTPOINT4TYPE,
                   NWKPCDSORTPOINT4NAME,
                   NWKPCDPPI,
                   NWKPCDBARCODE1TO7,
                   NWKPCDBARCODE15,
                   NWKPCDBARCODESEQKEY,
                   NWKPCDFILLER1,
                   NWKPCDFILLER2
    FROM WH1.NWKPCDREC_OLD;
    CREATE UNIQUE INDEX "WH1"."UK_NWKPCDREC"
    ON "WH1"."NWKPCDREC" ("NWKPCDNETWORKID",
    "NWKPCDOUTINWDPOSTCODE")
    TABLESPACE "WH1_INDEX" PCTFREE 10 INITRANS 2 MAXTRANS
    255
    STORAGE ( INITIAL 8192K NEXT 8192K MINEXTENTS 1 MAXEXTENTS
    2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
    LOGGING
    begin
    dbms_stats.gather_table_stats(ownname=> 'WH1', tabname=> 'NWKPCDREC', partname=> NULL);
    end;
    begin
    dbms_stats.gather_index_stats(ownname=> 'WH1', indname=> 'UK_NWKPCDREC', partname=> NULL);
    end;
    SELECT decode(a.nwkpcdoutinwdpostcode,
                  rpad(nvl(:zipout1, ' '), 4, ' ') || rpad(nvl(:zipin1, ' '), 3, ' '), '0001',
                  rpad(nvl(:zipout2, ' '), 4, ' ') || rpad(substr(nvl(:zipin2, ' '), 0, 1), 3, ' '), '0002',
                  rpad(nvl(:zipout3, ' '), 7, ' '), '0003',
                  rpad('ZZ999', 7, ' '), '0004') AS checker,
           a.nwkpcdbarcode1to7 nwkpcdbarcode1to7,
           a.nwkpcdbarcode15 nwkpcdbarcode15,
           a.nwkpcdbarcodeseqkey nwkpcdbarcodeseqkey,
           a.nwkpcdsortpoint1code nwkpcdsortpoint1code,
           a.nwkpcdsortpoint1type nwkpcdsortpoint1type,
           a.nwkpcdsortpoint1name nwkpcdsortpoint1name,
           a.nwkpcdsortpoint1extra nwkpcdsortpoint1extra,
           a.nwkpcdsortpoint2type nwkpcdsortpoint2type,
           a.nwkpcdsortpoint2name nwkpcdsortpoint2name,
           a.nwkpcdsortpoint3type nwkpcdsortpoint3type,
           a.nwkpcdsortpoint3name nwkpcdsortpoint3name,
           a.nwkpcdsortpoint4type nwkpcdsortpoint4type,
           a.nwkpcdsortpoint4name nwkpcdsortpoint4name,
           b.nwkprfnetworksequence nwkprfnetworksequence,
           b.nwkprfnetworkid nwkprfnetworkid,
           b.nwkprfnetworkname nwkprfnetworkname,
           b.nwkprfminweight / 100 AS nwkprfminweight,
           b.nwkprfmaxweight / 100 AS nwkprfmaxweight,
           b.nwkprfminlengthgirth nwkprfminlengthgirth,
           b.nwkprfmaxlengthgirth nwkprfmaxlengthgirth,
           b.nwkprfminlength nwkprfminlength,
           b.nwkprfmaxlength nwkprfmaxlength,
           b.nwkprfparceltypecode nwkprfparceltypecode,
           b.nwkprfparceltypename nwkprfparceltypename
    FROM wh1.nwkpcdrec a, wh1.nwkprefrec b
    WHERE a.nwkpcdnetworkid = b.nwkprfnetworkid
      AND a.nwkpcdoutinwdpostcode IN (rpad(nvl(:zipout4, ' '), 4, ' ') || rpad(nvl(:zipin3, ' '), 3, ' '),
                                      rpad(nvl(:zipout5, ' '), 4, ' ') || rpad(substr(nvl(:zipin4, ' '), 0, 1), 3, ' '),
                                      rpad(nvl(:zipout6, ' '), 7, ' '),
                                      rpad('ZZ999', 7, ' '))
      AND a.nwkpcdsortpoint1type != 'XXXXXXXX'
      AND :weight1 >= b.nwkprfminweight
      AND :weight2 <= b.nwkprfmaxweight
      AND b.nwkprfminlengthgirth <= 60
      AND b.nwkprfmaxlengthgirth >= 60
      AND b.nwkprfminlength <= 15
      AND b.nwkprfmaxlength >= 15
    ORDER BY b.nwkprfnetworkid, checker

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help on solutions to be provided to my Client for upgrading the data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. Their sizes are respectively 6 TB (retention policy of 3 years plus the current year) and 1 TB. The ETL tool and the BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    Client cannot go for a major architectural or configuration changes to its existing environment now due to some constraints.
    However, they have agreed to separate out the databases on separate hosts from the ETL tools and BO objects. Also we are planning to upgrade the database to 10gR2 to attain stability, better performance and overcome current headaches.
    We cannot upgrade the database to 11g as the BO is at a version 6.5 which isn't compatible with Oracle 11g. And Client cannot afford to upgrade anything else other than the database.
    So, my role is vital in providing a good solution for better performance and in carrying out a successful migration of the Oracle database from one host to another (similar platform and OS), in addition to the upgrade.
    I have till now thought of the following:
    Move the Oracle database and data mart to separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install new Oracle database 10g on the new host and move the data to it.
    Explore all new features of 10gR2 that help the data warehouse, that is, the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration performance.
    I am also thinking of RAC to provide a better solution, as our main motive is to show a tremendous performance enhancement.
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan

    SGA=27.5 GB and PGA=50 MB
    Also, I am pasting part of the STATSPACK report, excluding the snaps around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • Printing Error using Acrobat Pro 9

    After installing Acrobat Pro 9.3.3, I can no longer print PDFs to my printer. I have to use Preview to print them.
    Below are the error log messages recorded when a print job to an HP OfficeJet AIO 7590 from Adobe Acrobat Pro 9.3.3 on a Mac OS X 10.6.4 platform failed to print (this has happened innumerable times). Prior to Acrobat 9.0.1, no problems occurred.
    D [20/Aug/2010:09:26:42 -0500] [Job 534] The following messages were recorded from 09:26:40 to 09:26:42
    D [20/Aug/2010:09:26:42 -0500] [Job 534] STATE: - com.hp.s.74-report
    D [20/Aug/2010:09:26:42 -0500] [Job 534] STATE: - com.hp.s.1-report
    D [20/Aug/2010:09:26:42 -0500] [Job 534] STATE: - com.hp.s.5-report
    (... roughly 175 further STATE: report lines with the same timestamp, covering com.hp.s.*, com.hp.e.0 and com.hp.m.*.*.* ...)
    D [20/Aug/2010:09:26:42 -0500] [Job 534] PPD: DefaultHPAutoDuplexerInstalled="True"
    D [20/Aug/2010:09:26:42 -0500] [Job 534] PID 18598 (/Library/Printers/hp/cups/Inkjet.driver/Contents/MacOS/Inkjet)  exited with 3 status.
    D [20/Aug/2010:09:26:42 -0500] [Job 534] user   time    used: 2" 220314'
    D [20/Aug/2010:09:26:42 -0500] [Job 534] system time    used: 0" 311151'
    D [20/Aug/2010:09:26:42 -0500] [Job 534] max  resident  size: 29061120
    D [20/Aug/2010:09:26:42 -0500] [Job 534] shared memory  size: 0
    D [20/Aug/2010:09:26:42 -0500] [Job 534] unshared data  size: 0
    D [20/Aug/2010:09:26:42 -0500] [Job 534] unshared stack size: 0
    D [20/Aug/2010:09:26:42 -0500] [Job 534] page     reclaims  : 8158
    D [20/Aug/2010:09:26:42 -0500] [Job 534] page     faults    : 172
    D [20/Aug/2010:09:26:42 -0500] [Job 534] swaps              : 0
    D [20/Aug/2010:09:26:42 -0500] [Job 534] block inputs  count: 76
    D [20/Aug/2010:09:26:42 -0500] [Job 534] block outputs count: 16
    D [20/Aug/2010:09:26:42 -0500] [Job 534] messages       sent: 0
    D [20/Aug/2010:09:26:42 -0500] [Job 534] messages       received: 0
    D [20/Aug/2010:09:26:42 -0500] [Job 534] signals    received: 2
    D [20/Aug/2010:09:26:42 -0500] [Job 534] voluntary  switches: 374
    D [20/Aug/2010:09:26:42 -0500] [Job 534] preempted  switches: 1834
    D [20/Aug/2010:09:26:42 -0500] [Job 534] Sent 0 bytes...
    D [20/Aug/2010:09:26:42 -0500] [Job 534] End of messages
    D [20/Aug/2010:09:26:42 -0500] [Job 534] printer-state=3(idle)
    D [20/Aug/2010:09:26:42 -0500] [Job 534] printer-state-message="can't open `/private/var/spool/cups/tmp/048a64c7042a4'."
    D [20/Aug/2010:09:26:42 -0500] [Job 534] printer-state-reasons=none
    Any ideas? Also sent to HP.
    Mikail

    While on Adobe's site I had a chat with tech support. Once I gave them the serial number, they said it was a volume license (I am a teacher and the school supplied the program) and that I would have to call the volume licensing number. I just spent 33 minutes on the phone with Adobe being bounced four different times. I called the volume licensing number given in the chat and went through the problem. They transferred me to tech support, where I gave them the same information again. They transferred me to Acrobat support. I gave them the information again and they said that I needed a service agreement (it looks like the school does not have one) and would transfer me back to volume licensing. The last transfer just said that I should have been told that on the first call. I still do not have Acrobat Pro printing to my printer. All of my applications will print except Acrobat Pro. Any other suggestions?

  • Exporting to MS Excel - default "save as" type

    We have written code in our application that exports the results in MS Excel format, and everything works fine except for one minor thing using IE. When the user is prompted, they can either "open" or "save". If they choose "save", it saves fine as an MS Excel file type. However, if they choose "open" (which does open perfectly fine in Excel) and later choose the "Save As" option from MS Excel's menu, the default "Save As" type becomes "Web Page (*.htm; *.html)". It actually saves fine as an Excel document, but our customers are concerned because this is going to be a public application and they are worried it might confuse the end-users. So what I'm really asking is whether there's a way to programmatically control the default "Save As" type from within MS Excel. I've attached a small snippet of our code which shows how we are performing the export.
    <!--- use cfsetting to block output of HTML outside of cfoutput tags --->
    <cfsetting enablecfoutputonly="Yes">
    <!--- Send output to a Microsoft Excel Spreadsheet --->
    <cfheader name="Content-Disposition" value="attachment; filename=AcctOpsRptSpendAgCat.xls">
    <cfheader name="Expires" value="#Now()#">
    <cfcontent type="application/vnd.ms-excel">

    Hi,
    I think Excel can display only 11 digits; that might be why it is showing an error.
    Refer: http://support.microsoft.com/kb/65903
    Regards,
    Srikanth

  • Capturing log files from multiple .ps1 scripts called from within a .bat file

    I am trying to invoke multiple instances of a powershell script and capture individual log files from each of them. I can start the multiple instances by calling 'start powershell' several times, but am unable to capture logging. If I use 'call powershell'
    I can capture the log files, but the batch file won't continue until that current 'call powershell' has completed.
    i.e. within Test.bat
    start powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > a.log 2>&1
    timeout /t 60
    start powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > b.log 2>&1
    timeout /t 60
    start powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > c.log 2>&1
    timeout /t 60
    start powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > d.log 2>&1
    timeout /t 60
    start powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > e.log 2>&1
    timeout /t 60
    start powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > f.log 2>&1
    the log files get created but are empty.  If I invoke 'call' instead of start I get the log data, but I need them to run in parallel, not sequentially.
    call powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > a.log 2>&1
    timeout /t 60
    call powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > b.log 2>&1
    timeout /t 60
    call powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > c.log 2>&1
    timeout /t 60
    call powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > d.log 2>&1
    timeout /t 60
    call powershell .\Automation.ps1 %1 %2 %3 %4 %5 %6 > e.log 2>&1
    Any suggestions of how to get this to work?

    Batch files are sequential by design (batch up a bunch of statements and execute them). Call doesn't run in a different process, so when you use it the batch file waits for it to exit. From CALL:
    Calls one batch program from another without stopping the parent batch program
    I was hoping for the documentation to say the batch file waits for CALL to return, but this is as close as it gets.
    Start(.exe): "Starts a separate window to run a specified program or command". The reason it runs in parallel is that once it launches the target application, start.exe ends and the batch file continues. It has no idea about the powershell.exe process that you kicked off, which is why you can't pipe the output.
    Update: I was wrong, you can totally redirect the output of what you run with start.exe.
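    For example, here is a minimal sketch (not from the original thread; the script path is the one from the question and the arguments are placeholders) that keeps one log file per run by launching the copies from PowerShell with Start-Process, which can redirect each child's output:
    # Sketch: launch several copies of Automation.ps1 in parallel, sending each
    # child's standard output and standard error to its own files.
    $scriptPath = '.\Automation.ps1'          # path taken from the question
    $scriptArgs = @('arg1', 'arg2')           # placeholder arguments
    foreach ($name in 'a', 'b', 'c') {
        Start-Process -FilePath 'powershell.exe' `
            -ArgumentList (@('-NoProfile', '-File', $scriptPath) + $scriptArgs) `
            -RedirectStandardOutput ".\$name.log" `
            -RedirectStandardError  ".\$name.err" `
            -NoNewWindow
    }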
    How about instead of running a batch file you run a PowerShell script? You can run script blocks or call individual scripts in parallel with the
    Start-Job cmdlet.
    You can monitor the jobs and when they complete, pipe them to
    Receive-Job to see their output. 
    For example:
    $sb = {
        Write-Output "Hello"
        Sleep -Seconds 10
        Write-Output "Goodbye"
    }
    Start-Job -Scriptblock $sb
    Start-Job -Scriptblock $sb
    Here's a script that runs the script block $sb. The script block outputs the text "Hello", waits for 10 seconds, and then outputs the text "Goodbye".
    Then it starts two jobs (in this case I'm running the same script block).
    When you run this you receive this output:
    PS> $sb = {
    >> Write-Output "Hello"
    >> Sleep -Seconds 10
    >> Write-Output "Goodbye"
    >> }
    >>
    PS> Start-Job -Scriptblock $sb
    Id Name State HasMoreData Location Command
    1 Job1 Running True localhost ...
    PS> Start-Job -Scriptblock $sb
    Id Name State HasMoreData Location Command
    3 Job3 Running True localhost ...
    PS>
    When you run Start-Job it will execute your script or scriptblock in a new process and continue to the next line in the script.
    You can see the jobs with
    Get-Job:
    PS> Get-Job
    Id Name State HasMoreData Location Command
    1 Job1 Running True localhost ...
    3 Job3 Running True localhost ...
    OK, that's great. But we need to know when the jobs are done. The job's State property will tell us this (we're looking for a state of "Completed"), so we can build a loop and check:
    $Completed = $false
    while (!$Completed) {
        # get all the jobs that haven't yet completed
        $jobs = Get-Job | where {$_.State.ToString() -ne "Completed"}
        # if Get-Job doesn't return any jobs (i.e. they are all completed), we're done
        if ($jobs -eq $null) {
            $Completed = $true
        }
        else {
            # otherwise update the screen
            Write-Output "Waiting for $($jobs.Count) jobs"
            sleep -s 1
        }
    }
    This will output something like this:
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    When it's done, we can see the jobs have completed:
    PS> Get-Job
    Id Name State HasMoreData Location Command
    1 Job1 Completed True localhost ...
    3 Job3 Completed True localhost ...
    PS>
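    As an aside, the polling loop above is mainly for illustration; if all you need is to block until every job finishes, the built-in Wait-Job cmdlet should do the same thing in one line (a quick sketch):
    # Possible shortcut for the polling loop: wait for all current jobs to finish.
    Get-Job | Wait-Job | Out-Null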
    Now at this point we could pipe the jobs to Receive-Job:
    PS> Get-Job | Receive-Job
    Hello
    Goodbye
    Hello
    Goodbye
    PS>
    But as you can see it's not obvious which script is which. In your real scripts you could include some identifiers to distinguish them.
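    One simple option is to give each job an explicit name with Start-Job's -Name parameter and then receive each one by name; a quick sketch, reusing the $sb block from above:
    # Name the jobs so their output is easy to tell apart later.
    Start-Job -Name 'JobA' -Scriptblock $sb | Out-Null
    Start-Job -Name 'JobB' -Scriptblock $sb | Out-Null
    Get-Job -Name 'JobA' | Wait-Job | Receive-Job    # output from JobA only
    Get-Job -Name 'JobB' | Wait-Job | Receive-Job    # output from JobB only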
    Another way would be to grab the output of each job one at a time:
    foreach ($job in $jobs) {
        $job | Receive-Job
    }
    You could store the output in a variable or save it to a log file with Out-File. The trick is matching up the jobs to the output. Something like this may work:
    $a_sb = {
        Write-Output "Hello A"
        Sleep -Seconds 10
        Write-Output "Goodbye A"
    }
    $b_sb = {
        Write-Output "Hello B"
        Sleep -Seconds 5
        Write-Output "Goodbye B"
    }
    $job = Start-Job -Scriptblock $a_sb
    $a_log = $job.Name
    $job = Start-Job -Scriptblock $b_sb
    $b_log = $job.Name
    $Completed = $false
    while (!$Completed) {
        $jobs = Get-Job | where {$_.State.ToString() -ne "Completed"}
        if ($jobs -eq $null) {
            $Completed = $true
        }
        else {
            Write-Output "Waiting for $($jobs.Count) jobs"
            sleep -s 1
        }
    }
    Get-Job | where {$_.Name -eq $a_log} | Receive-Job | Out-File .\a.log
    Get-Job | where {$_.Name -eq $b_log} | Receive-Job | Out-File .\b.log
    If you check out the folder you'll see the log files, and they contain the scripts' output:
    PS> dir *.log
    Directory: C:\Users\jwarren
    Mode LastWriteTime Length Name
    -a--- 1/15/2014 7:53 PM 42 a.log
    -a--- 1/15/2014 7:53 PM 42 b.log
    PS> Get-Content .\a.log
    Hello A
    Goodbye A
    PS> Get-Content .\b.log
    Hello B
    Goodbye B
    PS>
    The trouble though is you won't get a log file until the job has completed. If you use your log files to monitor progress this may not be suitable.
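    One possible workaround (a sketch along the same lines, not something tested against the original scripts) is to call Receive-Job inside the polling loop and append whatever has arrived so far; without -Keep, Receive-Job returns only the output produced since the previous call, so the log file grows while the job is still running:
    # Sketch: stream a single job's output to a log file while it runs.
    $job = Start-Job -Scriptblock $a_sb
    while ($job.State -eq 'Running') {
        # Receive-Job without -Keep returns only the output received since the
        # last call, so each pass appends just the new lines.
        $job | Receive-Job | Add-Content -Path .\a.log
        Start-Sleep -Seconds 1
    }
    # Collect anything produced between the last poll and completion.
    $job | Receive-Job | Add-Content -Path .\a.log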
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights
