What are the IDocs to send data from SAP HR to SAP FI?

What are the IDocs used to send data from SAP HR to SAP FI?

The IDoc message type depends on the data you wish to send.
Please give details of the data you need to send in the IDocs.
Regards,
Nitin
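
For reference: if the goal is to distribute HR master data (organizational and personnel infotypes) to another SAP system via ALE, the standard message type is HRMD_A, sent with transaction PFAL or via change pointers. The sketch below shows the generic outbound pattern with MASTER_IDOC_DISTRIBUTE; the basic type HRMD_A06 and the receiver names are illustrative assumptions, and the right message type still depends on the data, as noted above.

" Minimal sketch: hand one HR master data IDoc over to ALE.
" Assumptions: basic type HRMD_A06 and the receiving logical system
" 'FICLNT100' are placeholders - adjust to your landscape.
DATA: ls_control TYPE edidc,
      lt_control TYPE TABLE OF edidc,
      lt_data    TYPE TABLE OF edidd,
      ls_data    TYPE edidd.

ls_control-mestyp = 'HRMD_A'.      " message type for HR master data
ls_control-idoctp = 'HRMD_A06'.    " basic IDoc type (release dependent)
ls_control-rcvprt = 'LS'.          " receiver partner type: logical system
ls_control-rcvprn = 'FICLNT100'.   " hypothetical receiving system

ls_data-segnam = 'E1PLOGI'.        " header segment for the HR object
" ls_data-sdata = ...              " fill the segment data as required
APPEND ls_data TO lt_data.

CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
  EXPORTING
    master_idoc_control        = ls_control
  TABLES
    communication_idoc_control = lt_control
    master_idoc_data           = lt_data
  EXCEPTIONS
    error_in_idoc_control          = 1
    error_writing_idoc_status      = 2
    error_in_idoc_data             = 3
    sending_logical_system_unknown = 4
    OTHERS                         = 5.
IF sy-subrc = 0.
  COMMIT WORK.   " the IDoc is passed to ALE only after the commit
ENDIF.

A distribution model entry (BD64) and a partner profile (WE20) for the message type are still required for the send to succeed.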

Similar Messages

  • What are the ways to download data from an ODS?

    Hi all,
    What are the ways to download data from an ODS?
      1. SE11/SE12
      2. LISTCUBE
      3. InfoSpokes
    Apart from the above three, is there any other way to download data from my ODS?
    I need to download around 90 fields from my ODS to a flat file, but LISTCUBE doesn't allow me to select all the fields. So I used SE11 to download the data from my ODS, but it still doesn't allow me to download all 90 fields; it downloads only 40.
    Can anyone suggest how to select all 90 fields and download them to an Excel sheet?
    Thanks,
    Haritha

    Hi Haritha,
    Go to transaction SE16, give your ODS active table name, and set the width of the output list to 1023.
    Now run the transaction to see your data, then choose Settings --> User Parameters and select ALV Grid Display.
    You should now see an Excel icon at the top; click it, then select Table, then Microsoft Excel, and it will open your data with all the columns you want.
    I just tried it for 213 columns.
    Hope this helps.
    Thanks
    CK
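
    If the data needs to land in a flat file rather than in Excel, a small extract report over the ODS active table is another option. A minimal sketch, assuming an ODS named ZSALES with active table /BIC/AZSALES00 and a local tab-delimited target file; the table name and file path are placeholders.

    " Minimal sketch: dump an ODS active table to a tab-delimited file.
    " Assumption: ODS 'ZSALES' with active table /BIC/AZSALES00 (placeholder).
    REPORT z_ods_download.

    DATA lv_tab  TYPE string VALUE '/BIC/AZSALES00'.
    DATA lt_rows TYPE STANDARD TABLE OF string.
    DATA lv_line TYPE string.
    DATA lr_tab  TYPE REF TO data.

    FIELD-SYMBOLS: <lt_tab>   TYPE STANDARD TABLE,
                   <ls_row>   TYPE any,
                   <lv_field> TYPE any.

    " Create an internal table typed like the active table
    CREATE DATA lr_tab TYPE STANDARD TABLE OF (lv_tab).
    ASSIGN lr_tab->* TO <lt_tab>.

    SELECT * FROM (lv_tab) INTO TABLE <lt_tab> UP TO 100000 ROWS.

    " Flatten every row into one tab-separated line (all fields included)
    LOOP AT <lt_tab> ASSIGNING <ls_row>.
      CLEAR lv_line.
      DO.
        ASSIGN COMPONENT sy-index OF STRUCTURE <ls_row> TO <lv_field>.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        IF sy-index > 1.
          lv_line = lv_line && cl_abap_char_utilities=>horizontal_tab.
        ENDIF.
        lv_line = lv_line && |{ <lv_field> }|.
      ENDDO.
      APPEND lv_line TO lt_rows.
    ENDLOOP.

    cl_gui_frontend_services=>gui_download(
      EXPORTING filename = 'C:\temp\ods_data.txt'   " hypothetical path
      CHANGING  data_tab = lt_rows ).

    GUI_DOWNLOAD writes the file to the front end; for a server-side file, OPEN DATASET would be used instead.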

  • What are the settings for APN data and MMS for iPhone 4 on Straight Talk?

    What are the settings for the cellular data network (APN data and MMS) for Straight Talk SIMs on an iPhone 4? I can't send or receive pictures. My service keeps losing signal constantly, saying no service, then 1 bar, 2 bars, 3 bars... then back down to no service. It drops calls. I've been trying to find the answer to this problem for 2 weeks! I have no Profile selection in my settings either, to do what one website shows, and I have entered numerous different APNs and none have worked! PLEASE HELP!

    They're asking a question about carrier settings for their iPhone. I think that has everything to do with Apple.
    Maybe they did already ask Straight Talk. I've called Straight Talk before... they're not too helpful.
    You know, this forum would be much friendlier if people would quit squabbling, posting comedic replies, and name calling (like I did), when people are just needing some help.
    It's also frustrating that people stick up for the guy who doesn't help, but rather takes the time to post some smart-alec reply. Granted, I shouldn't have name-called... some people just frustrate me when they act like that.
    Admittedly, there ARE some grey areas when it comes to carrier questions... but let the moderators do their job when someone is asking inappropriate stuff. And if the mods aren't chiming in, then that's a good indication that what the thread is about IS in fact Apple related.

  • What are the book names for MDM from SAP Education academy

    Hi SAP gurus,
    Can anyone tell me the names of the books for MDM from the SAP Education academy?
    For example, for SAP MM (Materials Management) it is TAMM40 (part 1, part 2, part 3, part 4, etc.).
    So what are the names of the books for MDM from the SAP Education academy? Just the names and other details are enough.
    Thanks in advance.
    Vam C

    Hi Vamsay,
    Here are the names of some SAP MDM books.
    1. SAP MDM Frequently Asked Questions (English)
    (Master Data Management Certification Sap Mdm Faq - ISBN: 9781603320153)
    Price range: $42.00 - $54.95 from 4 Sellers
    Publisher: Equity Pr
    Format: Paperback
    2. Build Foundations for Continual Improvements with SAP MDM
    Enterprise Data Management with SAP NetWeaver MDM
    Andrew LeBlanc
    3. SAP NetWeaver Master Data Management (English edition, excerpt)
    ISBN 978-1-59229-131-1
    Available: EUR 69.95 / CHF 115.00
    Please reward points if you found this helpful.
    Regards,
    Alok

  • IDOC :: how to send data from Custom Infotype in SAP HR to third party

    Hi,
    I have created a custom infotype, 9020. How do I send data from this infotype to a third-party system, and how do I trigger change pointers for this infotype?
    Please help me with this.
    I am using a custom message type ZTALENT and a custom IDoc type ZTALENT.
    ZTALENT - Talent Management
      E1PLOGI - Header for an HR Object (Master Data or Organizational Data)
        E1PITYP - HR: Transported Infotypes and Subtypes for an Object
          ZPUSER - User Base Data File
          ZPERSON - Personal Information File Segment
          ZPOST - Position File
          ZOPE - Overall Performance
          ZPWORK - Outside Work Experience
          ZPEDUC - Education Details of Employee
          E1P0000 - HR: HR Master Record Infotype 0000 (Actions)
          E1P0001 - HR: HR Master Record Infotype 0001 (Org. Assignment)
          E1P0002 - HR: HR Master Record Infotype 0002 (Personal Data)
          E1P0016 - HR Master Record: Infotype 0016 (Contract Elements)
          E1P0022 - HR Master Record: Infotype 0022 (Education)
          E1P0023 - HR Master Record: Infotype 0023 (Other/Previous Employers)
          E1P0041 - HR Master Record: Infotype 0041 (Date Specifications)
          E1P0105 - HR: HR Master Record Infotype 0105 (Communications)
          ZE1P9020
          ZPLANG - Language Details
          ZACTION - Actions Changes
    Regards,
    Krishna
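
    The usual pattern for a custom message type like ZTALENT is: activate change pointers generally (BD61) and for the message type (BD50), maintain the relevant fields for it (BD52), register an IDoc-creating function module for ZTALENT in BD60, and let report RBDMIDOC process the pointers. Below is a minimal sketch of the core logic such a function module could contain; the parameter names of the standard ALE function modules are the usual ones but should be verified in SE37, and the receiver and table names are placeholders.

    " Sketch of the IDoc-creating logic that RBDMIDOC calls for a message
    " type registered in BD60. Verify FM parameter names in SE37.
    DATA: lt_cp      TYPE TABLE OF bdcp,
          ls_cp      TYPE bdcp,
          lt_cpident TYPE TABLE OF bdicpident,
          ls_cpident TYPE bdicpident,
          ls_control TYPE edidc,
          lt_control TYPE TABLE OF edidc,
          lt_data    TYPE TABLE OF edidd,
          ls_data    TYPE edidd.

    " 1) Read unprocessed change pointers for ZTALENT
    CALL FUNCTION 'CHANGE_POINTERS_READ'
      EXPORTING
        message_type                = 'ZTALENT'
        read_not_processed_pointers = 'X'
      TABLES
        change_pointers             = lt_cp.

    LOOP AT lt_cp INTO ls_cp.
      " 2) Build the custom segment from infotype 9020 for the changed key
      ls_data-segnam = 'ZE1P9020'.
      " ls_data-sdata = ...   fill from PA9020 using ls_cp-tabkey
      APPEND ls_data TO lt_data.
      ls_cpident-cpident = ls_cp-cpident.
      APPEND ls_cpident TO lt_cpident.
    ENDLOOP.

    " 3) Hand the IDoc over to ALE (same MASTER_IDOC_DISTRIBUTE pattern
    "    as sketched in the first thread above)
    ls_control-mestyp = 'ZTALENT'.
    ls_control-idoctp = 'ZTALENT'.    " custom basic type
    ls_control-rcvprt = 'LS'.
    ls_control-rcvprn = 'THIRDPARTY'. " hypothetical receiving system
    CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
      EXPORTING
        master_idoc_control        = ls_control
      TABLES
        communication_idoc_control = lt_control
        master_idoc_data           = lt_data.

    " 4) Flag the processed change pointers
    CALL FUNCTION 'CHANGE_POINTERS_STATUS_WRITE'
      EXPORTING
        message_type           = 'ZTALENT'
      TABLES
        change_pointers_idents = lt_cpident.

    COMMIT WORK.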

    Hello Shankar,
    Technically, TemSe files are read by calling the following three function modules in sequence:
    1) RSTS_OPEN_RLC or RP_TS_OPEN: open the TemSe object
    2) RSTS_READ: read the object
    3) RSTS_CLOSE: close the object
    Regards,
    Rajesh

  • What are the methods called while navigating from one applet to another?

    Hi All,
    Could anyone brief me on which methods are called when you navigate from one applet to another?
    Thanks in advance.
    Best Regards,
    N.Madhusudhanan.

    http://forum.java.sun.com/thread.jsp?forum=421&thread=426771&tstart=0&trange=100

  • What are the reports we can get from OPM Quality

    Dear All,
    What are the reports we can get from the OPM Quality module?
    I am unable to find any reports about tests, samples, or specifications.
    Can someone please advise?
    Regards,
    Jon

    Please post details of your EBS version.
    If 12.1.3, see Chapter 7 of the OPM Quality User Guide - http://download.oracle.com/docs/cd/B53825_08/current/acrobat/121gmdqmug.pdf
    The same guide for other versions can be found at http://www.oracle.com/technetwork/documentation/applications-167706.html
    HTH
    Srini

  • What are the roles needed for a web service user in SAP ECC 6.0?

    Dear SDNS,
    Can you please help me understand which roles need to be added when creating a web service user on the ABAP stack?
    I'd really appreciate your help and a quick response.
    Thanks and Regards.
    Suraj

    Hi Suraj,
    Please refer to this link and apply the role(s) required for the web service user:
    [http://help.sap.com/saphelp_nwpi71/helpdata/en/2b/07074155bcf26fe10000000a1550b0/content.htm]
    Best Regards, Trevor

  • What are the drawbacks of implementing QAS (Quick Address System) with SAP CRM

    Hi,
    What are the drawbacks of implementing QAS (Quick Address System) for duplicate address management in SAP? QAS is third-party software that deals with duplicate addresses in the system.
    Thanks in advance...

    Hi Peter,
    You can refer to the CRM installation guide in the SAP Service Marketplace, which has all the details you require, including the post-installation steps.
    Please go through the path below:
    service.sap.com/instguides --> SAP Business Suite Solutions --> SAP CRM --> SAP CRM 7.0
    Hope this helps.
    Regards
    Yogesh

  • What are the steps to send sales order custom field from CRM to ECC

    Hi Xperts,
      We have created a custom field in the sales order [VBAK] and successfully replicated its value from ECC to CRM. But we have not been able to build the enhancement to replicate the field value from CRM to ECC [when the SO is created in CRM].
    We used an FM in CRM0_300, but it is not called when the SO is replicated from CRM to R/3.
    Please help us with the steps for the full enhancement to replicate the custom field of the SO from CRM to ECC.
    Thanks in Advance.

    Hi Anjaneyulu,
    We are facing a similar situation.
    Here is our scenario with the steps we have performed so far:
    1. We added a few custom fields in CRM 7.0 (EhP1) using the AET.
       The BDoc BUS_TRANS_MSG was automatically extended with these custom fields.
    2. On the ECC side, the same custom fields were added to VBAK and VBAP using an APPEND structure. The fields were added to the Additional Data B tab in the VA01/VA02 transactions.
    3. We extended the BAPI structures BAPISDITM and BAPISDITMX on both the CRM and R/3 sides.
    4. As mentioned in note 1053817, we implemented BADI CRM_DATAEXCHG_BADI -> method CRM_DATAEXCH_AFTER_BAPI_FILL in CRM (mapped the fields from the BDoc to the BAPI structures).
    5. In the same note, for the R/3 BAPI to R/3 API step, they say to implement user exit USEREXIT_MOVE_FIELD_TO_VBAP, which we found in MV45AFZZ. But in it we are unable to find the BAPI structure to map from.
    Our issue is that when a sales order is created in CRM, it is replicated to ECC, but only the standard fields are replicated. The custom fields we added remain empty on the ECC side.
    In CRM, in transaction SMW01, we can see that the BDoc is populated with the custom fields as well.
    Could you let us know whether your issue has been solved completely? Are you able to see the value of the custom field in ECC? Did you use the AET to add the fields in CRM?
    Could you please share the steps you followed?

  • What is the difference between loading data from SBIW and LO (LBWE)?

    Hi all experts,
    I need data to generate reports (info structure S001) in BW. Can I just load data using the Business Content DataSources (SBIW) and select S001, or do I have to load data from LO?
    What is the difference between the two (SBIW, LBWE)?
    Thank you
    Koala

    Hi Koala,
    After executing transaction SBIW you get a screen from which other transactions are accessible, including LBWE. Namely, in the SBIW screen,
    'Data Transfer to the SAP Business Information Warehouse / Settings for Application-Specific DataSources (PI) / Logistics / Managing Extract Structures / Logistics Extraction Structures Customizing Cockpit' is transaction LBWE!
    As I already answered in your other thread,
    'anyone experienced data load for S001?',
    you need to choose between LIS extraction and LO extraction, not between SBIW and LBWE.
    BTW, managing the LIS setup is also located in SBIW:
    ... Logistics / Managing Transfer Information Structures / Application-Specific Setup of Statistical Data.
    Best regards,
    Eugene

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help on solutions to be provided to my Client for upgrading the data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. Their sizes are 6 TB (retention policy of 3 years plus the current year) and 1 TB respectively. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    Client cannot go for a major architectural or configuration changes to its existing environment now due to some constraints.
    However, they have agreed to separate out the databases on separate hosts from the ETL tools and BO objects. Also we are planning to upgrade the database to 10gR2 to attain stability, better performance and overcome current headaches.
    We cannot upgrade the database to 11g as the BO is at a version 6.5 which isn't compatible with Oracle 11g. And Client cannot afford to upgrade anything else other than the database.
    So my role is vital: provide a solution for better performance and carry out a successful migration of the Oracle database from one host to another (similar platform and OS), in addition to the upgrade.
    This is what I have thought of so far:
    Move the Oracle database and data mart to a separate host.
    The host will be the same platform, that is, HP Superdome with 32-bit HP-UX (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install a new Oracle Database 10g on the new host and move the data to it.
    Explore all the new 10gR2 features that help a data warehouse, that is, the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration performance.
    I am also thinking of RAC, as our main motive is to show a tremendous performance enhancement.
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan

    SGA = 27.5 GB and PGA = 50 MB.
    I am also pasting part of the Statspack report, excluding the snapshots around the DB bounce. Please suggest where there is scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • What are the recommended steps to upgrade from CF7 to CF8

    I have read all of the documentation on how to upgrade from CF 7.02 to CF 8. Well, there is actually only one blurb in all of the CF docs:
    "Upgrade installation: You can upgrade from ColdFusion MX, ColdFusion MX 6.1, and ColdFusion MX 7. When upgrading from ColdFusion MX, ColdFusion MX 6.1, or ColdFusion MX 7, the installer preserves the existing settings and installs in a new directory, automatically assigning ports that do not conflict with the existing installation."
    OK, that sounds simple enough, so I went ahead and installed it. Well, it did not work; now I am stuck somewhere between 7 and 8.
    My basic steps, as best I can recall at 1:00 AM:
    * Selected Multiserver configuration
    * Selected the install path; I used D:\JRun4 (this is where my existing CF7 install was located)
    * The installer told me port 8300 was being used, so it gave me port 8306 instead
    * Selected D: for the ColdFusion8DotNetService install path
    * Selected passwords for admin and RDS
    * Installed CF
    * Received an error after install (Adobe_ColdFusion_8_InstallLog.log):
      ANT Script Error:
      Status: ERROR
      Additional Notes: ERROR - jrun.xml Unable to remove existing file D:\JRun4\bin\jrun.exe
    * Rebooted
    * Can't log in to cfadmin at port 8306
    * Tried to log in to cfadmin at port 8300; everything works
    * It appears the CF7 services are still running
    * Didn't notice any new CF8 services, except the .NET stuff
    Any assistance would be appreciated.
    I am on Windows 2003 / CF8 EE / Multiserver configuration.

    A_B wrote:
    > I have not seen anything come out from Adobe on this subject but I did talk to two of the CF engineers at the Max Conference regarding this issue. I spoke with Dean Harmon and Tom Jordahl about this issue. Here are the basic takeaways from what I recall. I have not yet attempted this but will be sometime in the next month or two. There are a few options but basically JRun is an updated version with CF8 and there is a new JRE that it runs on.
    Essentially CF comes with the updates that will become JRun 4 Updater 7 integrated into it. (The beta of Updater 7 is under way.)
    > - Backup the existing JRun directory (c:\JRun4 or whatever) completely.
    > - Make a backup of all the existing CF settings via a CAR file in the CF admin.
    > - Do an uninstall of your CF7 installation. This goes against my better judgement, since you should not need to uninstall 7 before installing 8; there should be an upgrade path. I understand that there are differences with JRun etc., but this should be done via an install.
    > - Install CF8 and then restore your settings via the CAR file. If you are crafty enough I believe you can also bring back all the XML files and set a flag to let the CF admin do an upgrade.
    Use the CAR file; XML file formats have changed between versions.
    > I want to also note that this is for the multi-server install on top of JRun, in which you already have JRun installed with CF7. I'm surprised that Adobe has not released better instructions on how to do this, but they haven't, and they seem to have left us to figure this out, which is completely wrong of them. How hard is it to write up a tech article?
    What you are describing is a simultaneous update of JRun and CF. For just an update of CF you need to redeploy the CF EAR file as described in the JRun manual. For just an update of JRun you have to wait until Updater 7 is released (or join the beta and file bugs if you don't like the procedure).
    > Another possible option would be to update JRun with the latest release,
    The latest release is only Updater 6.
    > download the latest JRE and create a custom jvm.config file to use for the new JRE. Then you would create a new instance of JRun using the JRun console and deploy a CF8 EAR/WAR file to it. You still then have the issue of getting settings and things transferred.
    Use a CAR file for that.
    > This seems possible to do it this way but does not seem as clean at all.
    In-place upgrades are never clean. In my book, an upgrade means rebuilding from scratch.
    Jochem
    Jochem van Dieten
    Adobe Community Expert for ColdFusion

  • What are the essentials needed before upgrading from call manager 9.1.1 to call manager 9.1.2?

    I recently tried to upgrade my Call Manager in a lab environment from 9.1.1 to 9.1.2, but it failed. The error stated that the connection had been lost 2 hours into it, and to connect using the CLI. Any help would be greatly appreciated.
    Steve

    You mean via the GUI?
    What does the upgrade status via the CLI say?

  • What are the disadvantages of the Management Data Warehouse (data collection)?

    Hi All,
    We plan to implement the Management Data Warehouse on production servers.
    Could you please explain the disadvantages of the Management Data Warehouse (data collection)?
    Thanks in advance,
    Tirumala

    > We plan to implement the Management Data Warehouse on production servers
    It appears you are referring to production server performance.
    BOL: "You can install the management data warehouse on the same instance of SQL Server that runs the data collector. However, if server resources or performance is an issue on the server being monitored, you can install the management data warehouse on a different computer."
    Management Data Warehouse
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014
