Performance issues

I'm having some odd results from the Kona System test which I noticed after swapping out my Dual G5 for a Dual Quad-core Intel. This may have been going on before, but I just noticed it:
When I run the Read/Write test using 1280x720 8-bit, my write speed starts at about 320 MB/s then steadily decreases throughout the test, settling at 190 MB/s by the end. The read test starts at 9 MB/s and gradually improves to 190 MB/s by the end of the test.
I haven't seen this kind of performance in the two years I've had the drives, and I haven't been able to find anything wrong. Performance settings (write caches, etc.) are correct, the drivers are the latest, the Fibre card is set correctly, and the RAID is almost completely empty.
Any ideas how I can pick up the read speed on the RAID? Thanks-

I'm ready to reformat my RAID set tomorrow. I am going to be using a RAID 50 scheme. I have a 14-drive setup, so I'm thinking my best bet would be to configure one 4-drive RAID 5 set and one 3-drive RAID 5 set on each controller, and then stripe the four RAID 5 sets as one RAID 0 array.
Is this the best way to go?
Should I change the block size option in the Disk Utility? I primarily edit uncompressed HD 720p 8-bit video.
RAID 5 set 1
Drive 1
Drive 2
Drive 3
Drive 4
RAID 5 set 2
Drive 5
Drive 6
Drive 7
RAID 5 set 3
Drive 8
Drive 9
Drive 10
Drive 11
RAID 5 set 4
Drive 12
Drive 13
Drive 14
Then combine using Disk Utility as a RAID 0
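For what it's worth, the usable capacity of that layout is easy to check: a RAID 5 set of n drives yields n-1 drives' worth of usable space (one drive's worth goes to parity), and striping the sets as RAID 0 just sums them without further overhead. A minimal sketch of the arithmetic, assuming all 14 drives are the same size:

```python
def raid5_usable(drives: int) -> int:
    """A RAID 5 set loses one drive's worth of space to parity."""
    return drives - 1

# The proposed layout: two 4-drive and two 3-drive RAID 5 sets,
# striped together as RAID 0 (which adds no further overhead).
sets = [4, 3, 4, 3]
usable_drives = sum(raid5_usable(n) for n in sets)
print(usable_drives)  # 10 of the 14 drives' capacity is usable
```

So the proposed scheme trades 4 of 14 drives for the ability to survive one drive failure per RAID 5 set.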
Thanks!

Similar Messages

  • Performance issues for uploading files

    Hello -
    Has anyone hit any performance issues when uploading files, using the obj.setSource('HTTP'...) method?
    Seems like all the examples I've seen call the "upload" procedure one time per upload. That is, unless you know ahead of time that you have 3 documents to upload, in which case you can put 3 upload statements within the procedure.
    But if you don't know ahead of time how many files a user is going to upload, are we stuck calling the "upload" procedure once per file (say, if a user wants to upload 5 files, the "upload" procedure has to be called 5 times, once per file)?
    Will this cause a major performance hit, or will it be OK because the procedure is already compiled and loaded in the SGA so it can be run multiple times by multiple users?
    I hope that makes sense. Thanks for any input
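On the looping question: when the file count isn't known ahead of time, the usual pattern is exactly what the post describes, a client-side loop calling the procedure once per file. A minimal sketch of that shape (the `upload` function here is a hypothetical stand-in for one call to the stored "upload" procedure, not its real signature):

```python
uploaded = []  # stands in for the server-side record of completed uploads

def upload(filename: str) -> None:
    """Hypothetical stand-in for one call to the stored 'upload' procedure."""
    uploaded.append(filename)

# The file list is only known at runtime, so loop and call once per file.
files = ["a.doc", "b.doc", "c.doc", "d.doc", "e.doc"]
for f in files:
    upload(f)  # one procedure call per file

print(len(uploaded))  # 5 calls for 5 files
```

As the replies suggest, the per-call cost is small once the procedure is compiled and cached server-side; the loop itself is not the bottleneck.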

    hi ST,
    1. http://www.sap-img.com/financial/sap-fi-transaction-code-list-1.htm
    2. http://www.sap-img.com/general/easy-to-remember-transaction-codes.htm
    3. http://www.4soi.de/transactions-f-h.php
    4. http://www.saptransactioncodes.com/search
    5. http://www.sap-img.com/financial/sap-fi-transaction-code-list-2.htm
    6. http://www.sap-img.com/financial/some-important-tcodes-for-fi-gl-ar-ap-asset.htm
    Hope this helps.
    Ranjit

  • 2958-J6Y performance

    hi guys,
    I just bought a Lenovo 3000 G550 2958-J6Y, and made a clean win7ultimate 64b install.
    I am having serious performance issues in what I consider key spots:
    1. Windows graphics: in Aero mode, windows and menus show up slowly, like a bad Vista install.
    2. OpenGL games running below average fps... (I'm aware the video card isn't that great, but still.)
    I think the problem (if there is a problem) is the video driver. Win7 downloaded and installed it.
    Any suggestions before I move to XP 64-bit? Is this a good move? I didn't notice a full set of drivers for XP 64-bit on the drivers page: http://consumersupport.lenovo.com/en/DriversDownloads/drivers_list.aspx?CategoryID=600154
    Many thanks in advance. The machine itself works like a charm, but I need the right OS.
    greetings

    It didn't come with a card reader...
    I tried to install the Bluetooth driver, but no device was detected by the software... what gives?!
    Anyway, I'm pretty happy with the PC. It works fine with games, movies, and music, although Windows Media Center is a disappointment...
    I think Win7 doesn't work too well with the pagefile, since I hear a lot of HD noise.
    I hope the full set of drivers will be complete soon; I think I need the chipset drivers for Win7 64-bit...
    thanks duck!

  • Tracking All Transactions on an Instance

    Hi All,
    We currently have a project requirement where we need to report all the transactions run by a (rather large) set of users over a certain period of time (e.g. last week).
    The system in consideration is an ECC 6.0 instance.
    My question is: what is the best-practice method to perform this type of activity?
    Is it by using SAP standard transactions (e.g. SM20, STAD, ST03, etc) or are there programs that can elegantly provide the desired data in the desired format?
    We did review the SAP standard transactions and found them lacking in one way or another (e.g. STAD does not allow us to filter by user, ST03 requires us to manually click on each user to identify the transactions run by them, etc.). At this point, the ABAP code route appears attractive to us.
    Any IC and/or tips in the matter would be greatly appreciated <b><REMOVED BY MODERATOR></b>.
    Thanks.
    Scott
    Message was edited by:
            Alvaro Tejada Galindo

    I work for IBM and found the answer in house.  Here it is:
    The standard ways of determining transaction usage would require you to activate statistics or the system-wide logging, which can potentially consume huge amounts of disk space besides causing performance issues. Also check whether ERP 6.0 supports specific transactions meant for auditing certain users; I do recall the security team talking about the same here, but this needs to be checked out.
    Alternatively, you can use the enhancement S38MREP1 and the function module exit component within it to code your logic to track transaction usage; this gets called at the start of each transaction, including development transactions. You can restrict your code to look at a specific set of transactions or just custom Z-transactions alone.
    Within the exit, code logic that populates a custom table (preferably in update task) where you can track the caller(s) of the concerned transactions (a specific list of callers if needed), the date of the call, the number of times the transaction was called, IP addresses, etc. We had used the same to analyze the active transactions being used in a 4.6C system for upgrade purposes and had no issues whatsoever with performance.
    Additionally you may need to develop a simple ALV report to report off of this table(s).
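The bookkeeping the exit performs boils down to incrementing a counter keyed by transaction, user, caller address, and date. A language-neutral sketch of that logic (in Python rather than ABAP; the dict stands in for the custom tracking table, and all names here are invented for illustration):

```python
from collections import defaultdict
from datetime import date

# Hypothetical stand-in for the custom tracking table: key -> call count.
usage = defaultdict(int)

def track_call(tcode: str, user: str, ip: str, on: date) -> None:
    """What the exit would record at the start of each tracked transaction."""
    usage[(tcode, user, ip, on)] += 1

today = date(2024, 1, 1)
track_call("ZREPORT1", "SCOTT", "10.0.0.5", today)
track_call("ZREPORT1", "SCOTT", "10.0.0.5", today)  # same key: count goes to 2
track_call("VA01", "ALICE", "10.0.0.9", today)
print(usage[("ZREPORT1", "SCOTT", "10.0.0.5", today)])  # 2
```

The ALV report mentioned above would then just read this table back, grouped however the auditors need it.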

  • Libraries work flow managed referenced

    I'm a photographer with a library that is pushing 1 TB. I don't see a performance issue so much as that the backups I run are painful, even the vaults. I back up to two other network drives as well, so I could be out of commission for a day.
    What are the pros doing out there? Are you using referenced files or managed masters? Do you keep one big library or smaller contextual libs, and why? Are there any really good tutorials on heavy workflow? I have decided to use three different managed libraries, one each for Archival, Current, and Professional. I can then run Vaults on all three separately on an external drive.
    I am curious about other folks' experience and the workflows they've settled on.
    JJ

    IMO
    • Multiple Libraries are self-defeating to the concept of an images database because they prevent global searches like keyword searches. Only for very specific and unusual reasons (like specific-client security of some kind) should there be more than one Library. Creating multiple Libraries for organizational or drive space purposes is misguided thinking.
    • Large Managed Libraries are a bad idea for the reasons you are experiencing. They can be done well, but only by true mass-storage experts with sophisticated large-drive setups.
    • The Library on an internal drive with Masters/originals referenced on external drives is preferable for all but a small Library.
    From a previous post of mine:
    The Library with its Previews lives on the internal drive and is always accessible. Masters live on external drives. The Library is backed up via Vaults and originals are backed up to redundant locations using the Finder before import into Aperture.
    Personally I have images managed on the internal SSD until editing is complete then convert to Referenced-Masters.
    Database Physics
    Aperture is designed to bite into small chunks at a time, so TBs of data do not bother the app itself. However, handling super-large batches of data like 1.5 TB on consumer hardware tends to be problematic.
    Slower speeds seem to exacerbate handling large data chunks.
    IMO referenced Masters make far more sense than building huge managed-Masters Libraries. With referenced Masters one has no need to copy a 1.5 TB sized file. I find that even on a 2011 MBP, copying 5-15 GB batches of RAW/JPEG files fails with some frequency, enough so that I always verify the copy. Failures copying 1500 GB to a drive as a single file should be expected, based on my experience.
    • Hard disk speed. Drives slow as they fill so making a drive more full (which managed Masters always does) will slow down drive operation.
    • Database size. Larger databases are by definition more prone to "issues" than smaller databases are.
    • Vaults. Larger Library means larger Vaults, and Vaults are an incremental repetitive backup process, so again larger Vaults are by definition more prone to "issues" than smaller Vaults are. One-time backup of Referenced Masters (each file small, unlike a huge managed-Masters DB) is neither incremental nor ongoing; which is by definition a more stable process.
    Managed-Masters Libraries can work, but they cannot avoid the basic database physics.
    Note that whether managed or referenced, original images should be separately backed up prior to import into Aperture or any other images management application. IMO after backing up each batch of original images importing that batch into Aperture as a new Project by reference makes by far the most sense. Building a huge managed Library or splitting into multiple smaller Libraries is less logical.
    HTH
    -Allen

  • PATCH STACK 11  TO STACK 15

    Dear all
    At present, our production server (SAP ERP 2004) is on Support Package Stack 11 and the development server is on Support Package Stack 12.
    We want to go to Support Package Stack 15.
    Please advise what the major precautions are.
    Is the support pack update necessary?
    Venkat

    Dear Markus,
    Can there be any performance issues if I apply patches from Stack 11 to Stack 15?
    Thanks in advance.
    With regards,
    Venkat

  • Creating Move out

    Hi Experts,
    I have a query about a CRM solution designed for the deregulated utility industry. We know that the contract is created in CRM, which uses a Product MDT in the back-end ISU system to perform the Move-In.
    My question is: when we do CONTRACT END in CRM, does it use the MDT, or does it directly perform the ISU MOVE OUT?
    I have tested with a data set and found that when the contract is terminated in CRM, the MOVE OUT happens automatically in the ISU system.
    This will help me decide where the enhancement associated with MOVE OUT should live: in CRM or in ISU.
    Regards
    Tarun Mishra

    This earlier post was created for the same issue.
    Error in Move Out | SCN
    Sorry, here's the full path as requested...
    -> Financial Accounting
    -> Contract Accounts Receivable and Payable
    -> Basic Functions
    -> Customer Contacts
    -> Define Configuration Determination
    Hope this can solve your issue.
    Regards
    Olivia

  • TRACE SESSION X TRACE USER (11gr2)

    Hi,
    We are facing performance issues with SAP BI loads on a Windows 2008 Oracle database (11.2.0.2). There is a huge chain of jobs that runs on the database and takes a long time to execute. Let's suppose that this chain contains 500 jobs and takes more or less 15 to 20 hours. But sometimes this time increases to 30, 35, or 40 hours.
    So, we need to understand which steps of the chain take more time, and one of the activities we are planning is to trace the whole execution and identify the waits.
    The problem is:
    . If there were a "father" process that calls the others, it would be easy, since we could set the trace in that session.
    Now,
    How can we guarantee tracing of the whole chain, since each step is a separate session opened on the database? Does anybody know the usage or viability of tracing the user as a whole? Any other options that could help us with this issue?
    Question: even if we set a trace on a "father" session, and that session calls/opens other sessions, would the trace contain the instructions for all the sessions or just the father? Does the trace see the relationship and trace everything started in parallel?
    Please disregard tracing the whole database; that is not useful for this situation.
    I appreciate any help.

    Hi,
    this does not seem to be a security-related issue. While the scheduler may know the notion of 'chains', the sql_trace facility has no understanding of inheritance, so to gather trace info, build it into the jobs so they can set event 10046 in their respective sessions. Also, if you generate an AWR report for the duration of the jobs, you may be able to pinpoint certain SQL statements using more resources; compare with an AWR report for the 'normal' situation,
    greetings,
    Harm ten Napel
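Building the trace into each job, as Harm suggests, can be as simple as having every job enable event 10046 on its own session at startup. A sketch that just assembles the statements a job would issue through its DB interface (the tracefile identifier is an arbitrary label, invented here, that makes each job's trace file easy to find):

```python
def trace_on_statements(job_name: str, level: int = 12) -> list[str]:
    """Statements a job would run at startup to trace its own session.

    Level 12 includes both wait events and bind values in the trace.
    """
    return [
        # Tag the trace file name with the job's label so it can be located later.
        f"ALTER SESSION SET tracefile_identifier = '{job_name}'",
        # Enable extended SQL trace for this session only.
        f"ALTER SESSION SET EVENTS '10046 trace name context forever, level {level}'",
    ]

# Statement a job would run on completion to stop tracing.
TRACE_OFF = "ALTER SESSION SET EVENTS '10046 trace name context off'"

stmts = trace_on_statements("BI_LOAD_STEP_042")
print(stmts[1])
```

Because each job session writes its own identified trace file, the per-step timings can then be compared without tracing the whole database.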

  • Modify a SELECT Query on ISU DB tables to improve performance

    Hi Experts,
    I have a SELECT query in a Program which is hitting 6 DB tables by means of 5 inner joins.
    The outcome is that the program takes an exceptionally long time to execute, the SELECT statement being the main time consumer.
    Need your expertise on how to split the Query without affecting functionality -
    The Query :
    SELECT fkkvkp~gpart eabl~ablbelnr eabl~adat eabl~istablart
      FROM eabl
      INNER JOIN eablg  ON eablg~ablbelnr = eabl~ablbelnr
      INNER JOIN egerh  ON egerh~equnr    = eabl~equnr
      INNER JOIN eastl  ON eastl~logiknr  = egerh~logiknr
      INNER JOIN ever   ON ever~anlage    = eastl~anlage
      INNER JOIN fkkvkp ON fkkvkp~vkont   = ever~vkonto
      INTO TABLE itab
      WHERE eabl~adat GT [date which is (sy-datum - 3 years)]
    Thanks in advance,
    PD

    Hi Prajakt
    There are a couple of issues with the code provided by Aviansh:
    1) Higher Memory consumption by extensive use of internal tables (possible shortdump TSV_NEW_PAGE_ALLOC_FAILED)
    2) In many instances multiple SELECT ... FOR ALL ENTRIES... are not faster than a single JOIN statement
    3) In the given code the timeslice tables are limited to records active today, which is not the same as your select (taking into account that you select for the last three years, you probably want historical meter/installation relationships as well*)
    4) Use of sorted/hashed internal tables instead of standard ones could also improve the runtime (in case you stick to all the internal tables)
    Did you create an index on EABL including columns MANDT, ADAT?
    Did you check the execution plan of your original JOIN Select statement?
    Yep
    Jürgen
    You should review your selection, because you probably want the business partner that was linked to the meter reading at the time of ADAT, while your select doesn't take the specific contract / device installation at the time of ADAT into account.
    Example: your meter reading is from 16.02.2010.
    Meter 00001 was in Installation 3000001 between 01.02.2010 and 23.08.2010
    Meter 00002 was in Installation 3000001 between 24.08.2010 and 31.12.9999
    Installation 3000001 was linked to Account 4000001 between 01.01.2010 and 23.01.2011
    Installation 3000001 was linked to Account 4000002 between 24.01.2011 and 31.12.9999
    This means your select returns four lines, and you probably want only one.
    To achieve that you have to limit all timeslices to the date of EABL-ADAT (the selects from EGERH, EASTL, and EVER).
    Update:
    Coming back to point one and the memory consumption:
    What are you planning to do with the output of the select statement?
    Did you get a shortdump TSV_NEW_PAGE_ALLOC_FAILED with three years of meter reading history?
    Or have you never run on production-like volumes yet?
    Dependent on this you might want to redesign your program anyway.
    Edited by: sattlerj on Jun 24, 2011 10:38 AM
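The timeslice restriction sattlerj describes can be illustrated outside ABAP: a slice is relevant only if the reading date falls inside its validity interval. A sketch using the example dates above (Python standing in for the WHERE conditions on EGERH/EASTL/EVER):

```python
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # SAP's conventional "valid forever" end date

def active_on(valid_from: date, valid_to: date, key_date: date) -> bool:
    """A timeslice applies to a reading if its interval covers the reading date."""
    return valid_from <= key_date <= valid_to

adat = date(2010, 2, 16)  # the meter reading date (EABL-ADAT)
meters = [
    ("00001", date(2010, 2, 1), date(2010, 8, 23)),
    ("00002", date(2010, 8, 24), HIGH_DATE),
]
accounts = [
    ("4000001", date(2010, 1, 1), date(2011, 1, 23)),
    ("4000002", date(2011, 1, 24), HIGH_DATE),
]
meter = [m for m, f, t in meters if active_on(f, t, adat)]
account = [a for a, f, t in accounts if active_on(f, t, adat)]
print(meter, account)  # ['00001'] ['4000001'] — one combination instead of four
```

In the real SELECT this becomes an extra pair of range conditions (valid-from <= ADAT <= valid-to) on each timeslice table's join.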

  • Vista & OSX wifi performance on MacBook Pro

    MacBook Pro 2.4Ghz, 4Gb ram, OSX 10.5.2, Airport Extreme 802.11n, Airport Express 802.11n extending network
    Noticing dreadful wifi performance while running OSX (trace below) compared to performance when running Vista (Boot Camp) on the same laptop. In both traces I'm pinging the DNS server for the connection. As a result, web browsing in OSX is awful compared to Vista. Signal strength is also much better under Vista (96%) than OSX (56%). I'm 4 feet from the Airport Express, which extends the Extreme base station in another room.
    Any idea what could be causing this? This has been going on for a while, to the extent that I had the Airport card replaced under warranty today. Presumably this must be an OSX issue, as dual-booting into Vista effectively rules out hardware issues.
    Apple OSX 10.5.2
    PING 212.23.3.100 (212.23.3.100): 56 data bytes
    64 bytes from 212.23.3.100: icmp_seq=0 ttl=58 time=516.876 ms
    64 bytes from 212.23.3.100: icmp_seq=1 ttl=58 time=233.578 ms
    64 bytes from 212.23.3.100: icmp_seq=4 ttl=58 time=1420.669 ms
    64 bytes from 212.23.3.100: icmp_seq=5 ttl=58 time=420.449 ms
    64 bytes from 212.23.3.100: icmp_seq=6 ttl=58 time=29.720 ms
    64 bytes from 212.23.3.100: icmp_seq=7 ttl=58 time=3307.350 ms
    64 bytes from 212.23.3.100: icmp_seq=8 ttl=58 time=2307.465 ms
    64 bytes from 212.23.3.100: icmp_seq=10 ttl=58 time=336.797 ms
    64 bytes from 212.23.3.100: icmp_seq=11 ttl=58 time=32.153 ms
    64 bytes from 212.23.3.100: icmp_seq=13 ttl=58 time=2258.254 ms
    64 bytes from 212.23.3.100: icmp_seq=14 ttl=58 time=1258.072 ms
    64 bytes from 212.23.3.100: icmp_seq=15 ttl=58 time=257.886 ms
    64 bytes from 212.23.3.100: icmp_seq=16 ttl=58 time=31.105 ms
    64 bytes from 212.23.3.100: icmp_seq=17 ttl=58 time=3110.678 ms
    64 bytes from 212.23.3.100: icmp_seq=18 ttl=58 time=2110.769 ms
    64 bytes from 212.23.3.100: icmp_seq=19 ttl=58 time=1139.461 ms
    64 bytes from 212.23.3.100: icmp_seq=20 ttl=58 time=142.393 ms
    64 bytes from 212.23.3.100: icmp_seq=21 ttl=58 time=29.261 ms
    64 bytes from 212.23.3.100: icmp_seq=22 ttl=58 time=3275.200 ms
    64 bytes from 212.23.3.100: icmp_seq=23 ttl=58 time=2275.256 ms
    64 bytes from 212.23.3.100: icmp_seq=24 ttl=58 time=1314.971 ms
    64 bytes from 212.23.3.100: icmp_seq=25 ttl=58 time=316.191 ms
    64 bytes from 212.23.3.100: icmp_seq=26 ttl=58 time=29.871 ms
    64 bytes from 212.23.3.100: icmp_seq=27 ttl=58 time=3097.048 ms
    64 bytes from 212.23.3.100: icmp_seq=28 ttl=58 time=2096.986 ms
    64 bytes from 212.23.3.100: icmp_seq=29 ttl=58 time=1126.190 ms
    64 bytes from 212.23.3.100: icmp_seq=30 ttl=58 time=127.525 ms
    64 bytes from 212.23.3.100: icmp_seq=31 ttl=58 time=28.869 ms
    64 bytes from 212.23.3.100: icmp_seq=33 ttl=58 time=2425.877 ms
    64 bytes from 212.23.3.100: icmp_seq=34 ttl=58 time=1425.730 ms
    64 bytes from 212.23.3.100: icmp_seq=35 ttl=58 time=1001.442 ms
    64 bytes from 212.23.3.100: icmp_seq=36 ttl=58 time=30.232 ms
    64 bytes from 212.23.3.100: icmp_seq=38 ttl=58 time=2441.496 ms
    64 bytes from 212.23.3.100: icmp_seq=39 ttl=58 time=1441.439 ms
    64 bytes from 212.23.3.100: icmp_seq=40 ttl=58 time=441.249 ms
    64 bytes from 212.23.3.100: icmp_seq=41 ttl=58 time=30.432 ms
    64 bytes from 212.23.3.100: icmp_seq=42 ttl=58 time=3287.105 ms
    64 bytes from 212.23.3.100: icmp_seq=43 ttl=58 time=2287.152 ms
    64 bytes from 212.23.3.100: icmp_seq=44 ttl=58 time=1316.517 ms
    64 bytes from 212.23.3.100: icmp_seq=45 ttl=58 time=318.959 ms
    64 bytes from 212.23.3.100: icmp_seq=46 ttl=58 time=29.508 ms ^C
    --- 212.23.3.100 ping statistics ---
    49 packets transmitted, 41 packets received, 16% packet loss
    round-trip min/avg/max/stddev = 28.869/1197.761/3307.350/1107.757 ms
    iStumbler showing 52%/37% signal strength - Airport Extreme-N/Airport Express-N
    Vista Ultimate - 94% signal strength
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=3525ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=30ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=30ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=31ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=28ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=35ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Reply from 212.23.3.100: bytes=32 time=29ms TTL=58
    Ping statistics for 212.23.3.100:
    Packets: Sent = 50, Received = 49, Lost = 1 (2% loss),
    Approximate round trip times in milli-seconds:
    Minimum = 28ms, Maximum = 3525ms, Average = 164ms
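The summary lines from both traces are easy to verify; a sketch of the packet-loss arithmetic ping performs, checked against the figures above:

```python
def loss_percent(sent: int, received: int) -> int:
    """Packet loss as ping reports it (truncated to a whole percent)."""
    return int((sent - received) * 100 / sent)

# OS X trace: 49 transmitted, 41 received -> 16% loss
osx_loss = loss_percent(49, 41)
# Vista trace: 50 sent, 49 received -> 2% loss
vista_loss = loss_percent(50, 49)
print(osx_loss, vista_loss)  # 16 2
```

The 16% loss plus multi-second round trips under OS X, against 2% loss and ~29 ms under Vista on the same hardware, is what makes the software-side explanation convincing.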

    Mac OS X (10.6.3)
    Use Software Update or the Mac OS X 10.6.8 combo update to update your OS. Also, update everything SU has to offer for your computer. When done, repair permissions and restart your computer.
    You should get your wifi/airport back after doing the above.

  • Business scenario..for ISU and integration with R/3 modules..

    Scenario: 
    1. The client is providing utility services to customers and also to its own employees.
    The service is also consumed by the employees, who are treated as contract partners in ISU. The constraint is that the utility bill has to be deducted monthly from their payroll, using a wage type in their payroll, which is in SAP R/3.
    2. Similarly, some of the contract partners are also vendors to the client (the vendor and the contract partner are the same). The utility bill is offset against the payments made by the client to the vendor.
    Questions:
    1. How can we deduct the utility bill of the employees from their payroll? Is there any integration of ISU with HR?
    2. Also, how can we clear the receivables in FICA, since data can be pushed from FICA to FICO through reconciliation keys but not vice versa?
    3. How can the utility bill be deducted from the vendor's account, given there is no standard link between the vendor master and the contract partner in ISU?
    Can anybody suggest a standard solution for these scenarios, or any workarounds?
    Early solutions are highly appreciated.
    Thanks and Regards

    Hello,
    Please look into my answers below:
    Questions:
    1. How can we deduct the utility bill of the employees from their payroll? Is there any integration of ISU with HR?
    i. There is no integration between ISU and HR. What you need to do is create a separate account determination ID <say, HR> and use this ID in the contract account to identify that a particular contract account is employee-related.
    ii. Now, at the end of the day or month, collect the outstanding balances of all the employee contracts in the form of a flat file, and use a BDC program to update the data in the HR table PA0014 <please confirm with your HR colleague too> so that their pay would be = actual payroll - amount due.
    iii. After the above step, make sure that you run FPY1 <the payment run program> to clear all employee open items. You can have a separate payment method for employees and attach the same in the contract account.
    iv. You can create a separate 'Z' table to map employee ID to contract number, so that creating the flat file in step 1.ii above becomes easier.
    2. Also, how can we clear the receivables in FICA, since data can be pushed from FICA to FICO through reconciliation keys but not vice versa?
    i. Please refer to point 1.iv above. You need to run the FPY1 payment run to clear employee receivables in FICA.
    3. How can the utility bill be deducted from the vendor's account, given there is no standard link between the vendor master and the contract partner in ISU?
    i. There is no integration between ISU and AP. What you need to do is create a separate account determination ID <say, AP> and use this ID in the contract account to identify that a particular contract account is vendor-related.
    ii. Now, at the end of the day or month, collect the outstanding balances of all the vendors in the form of a flat file, and use a BDC program to update the data in the AP table BSIK so that the amount to be collected from the vendor = actual amount - amount due.
    iii. After the above step, make sure that you run FPY1 <the payment run program> to clear all vendor open items. You can have a separate payment method for vendor contract accounts and attach the same in the contract account.
    iv. You can create a separate 'Z' table to map vendor number to contract number, so that creating the flat file in step 3.ii above becomes easier.
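Steps 1.ii/1.iv and their vendor counterparts amount to joining outstanding balances against a mapping table and emitting one delimited line per account for the BDC upload. A minimal sketch of that join; the 'Z' table here is just a dict, and every ID and amount is invented for illustration:

```python
# Hypothetical 'Z' mapping table: contract account -> employee ID.
z_map = {"CA1001": "EMP07", "CA1002": "EMP12"}

# Hypothetical end-of-month outstanding balances per employee contract account.
balances = {"CA1001": 85.50, "CA1002": 40.00}

def flat_file_lines(z_map: dict, balances: dict) -> list[str]:
    """One delimited line per employee, ready for the BDC program to consume."""
    return [
        f"{z_map[account]};{account};{amount:.2f}"
        for account, amount in sorted(balances.items())
    ]

lines = flat_file_lines(z_map, balances)
print(lines[0])  # EMP07;CA1001;85.50
```

The real program would, of course, read the balances from FICA and the mapping from the custom Z table rather than from literals.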
    Additional points to be considered
    1.     Check if at all separate clearing rules are needed for Employee- Contract accounts, Vendor-Contract Accounts & other contract accounts.
    2. Verify whether the FM 'COMPUTE_CONTRACT ACCOUNT_BALANCES' is useful in obtaining the balances of contract accounts <I do not have an active SAP ISU system with me, else I would have provided the correct FM>.
    3.     There may be performance issues based on the volumes of the data.
    Rgds
    Rajendra

  • CRB Integration: CRM-ISU

    Since there seems to be a new post every day asking for the basic setup of ISU integration with CRM, here is one thread which provides links to the documents in the Service Marketplace and elsewhere.
    Most of these documents are directly available at the Utilities page on the Service Marketplace https://service.sap.com/utilities
    Choose 'SAP for Utilities - Product Information' then 'SAP CRM for Utilities' and lastly click on 'Cookbooks & Guidelines' as shown below
    New 12-15-2013
    CRM for Utilities Overview
    https://service.sap.com/~sapidb/002007974700000213002013E/CRM_Utilities_2013.pdf
    New 03-27-2013
    WebClient UI Style Guidelines - a great resource for understanding how to develop user interfaces in the ICWC
    http://www.sapdesignguild.org/resources/uiguidelines.asp
    New 06-14-2012
    Best Practices - Setting Up a Best Run Interaction Center with SAP for Utilities
    https://service.sap.com/~sapdownload/011000358700000669512012E/SettingupaBestRunIC.pdf
    New - 05-14-2013
    The Utilities Check Cockpit
    This link only works for internal SAP folks, sorry.  https://service.sap.com/~sapdownload/011000358700001040272011E    
    Here is link for the help for the check cockpit. Utilities Check Cockpit - Functions for the Utilities Industry - SAP Library
    New - 11-14-2011
    Link: [Defining Products and Packages|https://service.sap.com/~sapdownload/011000358700001236052011E/Pakete_Definition_e.pdf]
    Link: [Selling products and packages|https://service.sap.com/~sapdownload/011000358700001235842011E/Pakete_Verkauf_e.pdf]
    Link: [Products and Product Packages|https://service.sap.com/~sapdownload/011000358700001424402010E/Final-Produktpakete_EN.pdf]
    Link: [Error handling with replication of contracts and TMD|https://service.sap.com/~sapdownload/011000358700004455162003E/Errorhandling_IS_U_CRM_e.pdf]
    Link: [Using ISU Master Data Templates|https://service.sap.com/~sapdownload/011000358700000623432005E/Cookbook_MDT_e_140507.pdf]
    Link: [ISU Setup Load Guide for Business Agreements/Contract Accounts|https://service.sap.com/~sapdownload/011000358700001088262009E/SDG_busagrmnt_contrtacc_e.pdf]
    Link: [ISU Setup Load Guide for BP relationships|https://service.sap.com/~sapdownload/011000358700000347572009E/SLG_GP_rel_Partner_roles_e.pdf]
    Link: [ISU specific Business Partner Load Setup|https://service.sap.com/~sapdownload/011000358700001064192007E/SLG_GP_ISU_V2.pdf]
    Link: [ISU-CRM Contract Integration - Contract Management|https://service.sap.com/~sapdownload/011000358700001143442009E/ContrIntegr_021009_e.pdf]
    Link: [ISU-CRM Marketing Integration|https://service.sap.com/~sapdownload/011000358700000503312010E/Cookbook_Marketing_2007_e2.pdf]
    Link: [Master Agreements in Sales Processing for C&I|https://service.sap.com/~sapdownload/011000358700000159352009E/Rahmenvertraege_CRM2007_e.pdf]
    Link: [Quotation Calculation in Sales Processing for C&I|https://service.sap.come/~sapdownload/011000358700000483592009E/Kalkulation_CRM70_Gesamt_e.pdf]
    Link: [Replication of BP Contacts to CRM Activities, Interaction Records|https://service.sap.com/~sapdownload/011000358700000646422009E/Replication_BP_Contacts.pdf]
    Link: [Replication of Technical Objects - Connection Object, Premise, Point of Delivery|https://service.sap.com/~sapdownload/011000358700001988852008E/TechnicalObjets_KW_261108.pdf]
    Link: [Archiving ISU/CRM Objects|https://service.sap.com/~sapdownload/011000358700000732942010E/Utilities_CRM_Archiving.pdf]
    Link: [Utilities Process Framework|https://service.sap.com/~sapdownload/011000358700001113322009E/Utilities_Prozess_FW_en.pdf]
    Link: [Replication of ISU Price data|https://service.sap.com/~sapdownload/011000358700000042352009E/MDG_Preisreplikation_e.pdf]
    Link: [Usage of ISU Consumption Profiles|https://service.sap.com/~sapdownload/011000358700000065322009E/Profile_e_final.pdf]
    Link: [SAP Help - CRM with SAP for Utilities|http://help.sap.com/saphelp_crm700_ehp01/helpdata/en/46/66235fc77c453a8696f76c66e8014c/frameset.htm]
    Link: [Performance of Initial Download from ERP into CRM for ISU|https://service.sap.com/~sapdownload/011000358700000101082010E/Perf_Initial_Download.pdf]

    Hey Bill, thanks for the links.  Question - is the Utilities Check Cockpit available externally?  I try the link but receive an authorization issue using my new S-id
    I am trying the following link:
    https://service.sap.com/~sapdownload/011000358700001040272011E
    Thanks,
    James

  • SAP SD Interface with SAP ISU FICA

    Hi all,
    I want to know about billing the customer for the energy and the non-energy charges separately.
    Energy charges: as per consumption, we bill and invoice them in SAP ISU, and the open item is posted in SAP ISU FICA.
    For the non-energy charges, if I bill the customer in SAP SD and then want to post that into SAP ISU FICA, what kind of configuration do I have to perform?
    Also, when we create the Business Partner in SAP ISU, it is created in SAP SD as well.
    Regards,
    Himanshu

    Hi,
    Please search the SDN IS-U section; some related threads:
    Integrating SAP SD with FICA
    Billing - SD and IS-U
    SD + FICA Integration
    sap sd and sap isu integration
    Re: Posting documents from SD to FICA

  • SAP ISU extract structure

    How do I append an extract structure on the ISU side?
    Can anyone help me with this?

    If the fields of the DataSource are not sufficient, you can use the customer append structure to insert additional fields into the structures of the installation, the installation contract, and the billing documents. This also applies to user-defined fields: when such a field has the same name as an existing field, it is automatically assigned the value of that field.
    Additionally, you can use the exit BWESTA01, which enables you to carry out modifications while data records are being formatted. This is a higher-performance enhancement than the standard function module for DataSources.
    http://help.sap.com/saphelp_nw70/helpdata/en/87/c299397f3d11d5b3430050dadf08a4/frameset.htm
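    The same-name rule above can be illustrated with a small conceptual sketch (hypothetical Python, not the actual ABAP mechanism): append fields whose names match a field in the source record are filled automatically, while unmatched fields must be supplied by the exit (e.g. BWESTA01). The field names below are made up for illustration.

```python
def fill_append_fields(source_record, append_fields):
    """Model of the automatic same-name assignment: each field of the
    customer append structure that shares a name with a source field
    receives that field's value; unmatched fields stay empty and must
    be filled in the exit (e.g. BWESTA01)."""
    return {name: source_record.get(name) for name in append_fields}

# Hypothetical extract record and append-structure field list
source = {"VERTRAG": "4711", "ANLAGE": "0815", "SPARTE": "01"}
extract = fill_append_fields(source, ["ANLAGE", "ZZ_CUSTOM"])
# ANLAGE is copied automatically; ZZ_CUSTOM is left empty for the exit
```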

  • SAP ISU- Business Partner

    Hi All,
    I have a couple of queries more pertaining to SAP ISU Business Partner which are as below:
    1. Can we delete an existing Business Partner? If yes, how do we do it, or is it only marked for deletion? In that case, can we see a report listing all existing BPs and the BPs marked for deletion?
    2. Can we lock an existing BP? For example, if a BP was created but never used for any purpose, can we lock it so that nobody can use it accidentally?
    Look forward to your responses.
    Thanks
    Amitav Otta

    Hi Amitav,
    You can delete temporary business partners that are no longer needed, or business partners that were loaded into a system by mistake, without archiving these data records beforehand.
    In this way the volume of data is reduced and performance is improved.
    There are two ways to delete business partners from your database.
    1. One-step procedure
    Using the transaction Deletion of Business Partners (BUPA_DEL), you can remove business partners from your system:
    a) that you select by business partner number
    b) that have been given a deletion or archiving flag
    c) that have the system status "deletable"
    You can also carry out a test run.
    2. Two-step procedure
    a) In preparation for the delete operation, first determine the data records to be deleted and set the relevant system status using the transaction Set Deletion or Archiving Flag (BUPA_PRE_DA).
    b) Then carry out the actual delete operation using the transaction Deletion of Business Partners (BUPA_DEL), with the "deletable" flag set as the parameter.
    In both cases, the system first checks whether the business partners selected for deletion are still actively used in the system. These checks are defined in BDT event DELE1; you can define additional checks there.
    Note that business partner data records can only be deleted as a whole; it is not possible to delete only certain data of a business partner.
    You can also delete user-defined tables; to do this, define function modules for BDT event DELE2.
    Both the business partners removed from the system and those that are still in use and therefore cannot be deleted are recorded in an application log.
    To evaluate the log, use the transaction Evaluate Application Log (SLG1) with the object BDT_DATAARCHIVING and the sub-object CA_BUPA.
    cheers
    kp
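    The two-step procedure described above can be modeled conceptually as follows (hypothetical Python, not an SAP API; in the real system the usage checks run in BDT event DELE1 and the results go to the SLG1 application log). The partner numbers are made up for illustration.

```python
# Conceptual model of the two-step BP deletion flow:
# step 1 (BUPA_PRE_DA) flags partners that pass the usage checks,
# step 2 (BUPA_DEL) deletes only the flagged partners and logs the rest.

def set_deletion_flag(partners, in_use):
    """Step 1: flag for deletion every partner not actively used."""
    return {bp for bp in partners if bp not in in_use}

def delete_flagged(partners, flagged, log):
    """Step 2: remove flagged partners; record every outcome in the log."""
    kept = []
    for bp in partners:
        if bp in flagged:
            log.append((bp, "deleted"))
        else:
            kept.append(bp)
            log.append((bp, "kept: still in use or not flagged"))
    return kept

log = []
partners = ["BP001", "BP002", "BP003"]
flagged = set_deletion_flag(partners, in_use={"BP002"})
remaining = delete_flagged(partners, flagged, log)
# BP002 survives because the usage check blocked its deletion flag
```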
