Good partition-count for 360GB cache

Coherence Version: 3.6.1.3
JRE Version: 6u22
Platform: Solaris 10
We have an application that uses a 360GB Coherence cache across 60 cache nodes. What is a good partition-count configuration? The partition-count we are currently using is 3607, but we are seeing some issues with query functionality. I checked the Oracle docs to see whether this partition-count is the problem, but they are not much help. They make two points:
1. The number of partitions should be a prime number and sufficiently large such that a given partition is expected to be no larger than 50MB in size
2. They give a table which says that for 100GB the partition-count should be 8191
If point #1 is taken on its own, a partition-count of 8191 looks higher than necessary for 100GB.
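For reference, here is the arithmetic behind both guidelines as a minimal sketch (the 50MB target and the 8191 table value are from the docs quoted above; the nodes-squared rule of thumb comes up again further down this page):

// Minimal sketch of the two sizing rules; plain arithmetic, not Coherence API.
public class PartitionCountSketch {
    public static void main(String[] args) {
        long cacheBytes = 360L * 1024 * 1024 * 1024; // 360 GB of primary data
        long targetBytes = 50L * 1024 * 1024;        // <= 50 MB per partition

        // Rule 1: enough partitions that none is expected to exceed ~50 MB.
        System.out.println(cacheBytes / targetBytes); // 7372 -> next prime per the docs' table is 8191

        // Rule of thumb: first prime above the square of the storage node count.
        int nodes = 60;
        System.out.println(nodes * nodes);            // 3600 -> next prime is 3607
    }
}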
Edited by: coh on Apr 16, 2012 11:54 AM

Hi
Whilst NJ is right and GC should be tuned, you should also look at why you have so much GC in the first place; the chances are that the queries are producing the high GC rather than the GC affecting your queries. We had a situation recently where the general performance of our system had halved, and this turned out to be a simple POF extractor filter query that was running quite frequently and was not using indexes. As this was in effect a full scan of one of our biggest caches, a lot of garbage was generated. In our case the GCs were small as the garbage was in the new generation, but even so this had quite an impact on the system.
What sort of queries are you doing? Are they using reflection extractors or POF extractors? Do you have indexes?
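To illustrate the difference, here is a hedged sketch (the cache name, POF property index, and method name are assumptions for illustration) of querying with an indexed POF extractor, which avoids deserializing every entry during the scan:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.ValueExtractor;
import com.tangosol.util.extractor.PofExtractor;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.EqualsFilter;

public class QueryExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("trades"); // hypothetical cache

        // A POF extractor reads the value straight out of the serialized form;
        // POF property index 3 is an assumption for this example.
        ValueExtractor bySymbol = new PofExtractor(String.class, 3);

        // Without an index this filter is a full scan that allocates garbage on
        // every evaluation; with an index most of that work (and garbage) goes away.
        cache.addIndex(bySymbol, false, null);
        cache.entrySet(new EqualsFilter(bySymbol, "ORCL"));

        // A reflection extractor, by contrast, must deserialize each entry to
        // invoke the method, which is where much of the query garbage comes from.
        ValueExtractor reflective = new ReflectionExtractor("getSymbol");
        cache.entrySet(new EqualsFilter(reflective, "ORCL"));
    }
}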
user738616 wrote:
With your configuration, you are looking close to 50MB/partition and a total of 3-4 GB data transfer assuming only one of the nodes was running stop-the-world GC.

NJ, I don't quite understand what you are on about here: a stop-the-world GC is not a cause of partition transfer. Partition distribution will only occur when nodes join or leave the cluster. If you are seeing partition transfer during queries then you may have killed one of the nodes, or you caused a GC that was so long that the node left the cluster. Long GC pauses will cause partition transfer to take longer, as the nodes that need to participate in the distribution are paused.
JK

Similar Messages

  • Setting partition count globally

    Hi,
    I have the following questions about distributed services
    1. I understand that the recommended value for partition count for distributed caches is the first prime number higher than the square of the number of JVMs. Is it possible to set this value globally for all the distributed services?
    2. Are there any thumb rules for determining thread-count for my distributed services (for compute intensive and write-behind services)?
    3. I have a large number of write-behind caches using the same distributed service. Is there any advantage in creating multiple distributed services with the same configuration and splitting the caches between these services?
    Thanks
    Sairam
    Edited by: SairamR on Aug 31, 2009 11:31 PM

    Hi Magnus,
    A few reasons for running multiple services are
    - separation of code for testability (individual services can be more isolated from each other and could be developed by different teams, too)
    - separation of services which would compete for resources, by having different layers of nodes, one layer storage-enabled for one service, another for another service
    - separation of service threads due to reentrancy problems (cache store or other code running on the service thread of one service could this way use the other service)
    - separating of data to a subset of nodes
    You can't override the partition count on a global basis out-of-the-box, but there are several ways to do it
    - either by changing the operational configuration file (tangosol-coherence.xml) itself, adding a system-property attribute to the partition-count element in the distributed cache service's service-scheme, or simply changing its default value to your liking
    - or possibly (this may not work) by defining an operational-config override file that overrides the partition-count element in the distributed cache service's service-scheme
    - or by having a common ancestor distributed-scheme for all the distributed cache services which defines the override with the system-property attribute, as in the sketch below
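    A hedged sketch of that last option (the property name coherence.partitions and the scheme names are assumptions; this mirrors the scheme-ref pattern shown in the cache-config thread further down this page):

    <!-- Parent scheme: every distributed scheme inherits the partition-count,
         which can then be overridden globally, e.g. -Dcoherence.partitions=8191 -->
    <distributed-scheme>
      <scheme-name>base-distributed-scheme</scheme-name>
      <service-name>DistributedCache</service-name>
      <partition-count system-property="coherence.partitions">257</partition-count>
      <autostart>true</autostart>
    </distributed-scheme>
    <distributed-scheme>
      <scheme-name>my-cache-scheme</scheme-name>
      <scheme-ref>base-distributed-scheme</scheme-ref>
    </distributed-scheme>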
    Best regards,
    Robert

  • What is the difference between partition-count and the number of caches?

    What is the difference between partition-count and the number of caches in Coherence? Are they same?

    Those are totally orthogonal concepts.
    For more, look at this thread where I answered your other related questions and explain this, too:
    Where can I find the accurate definitions of these terms?
    Best regards,
    Robert

  • Count for every partition

    Hi,
    How can I get the row count for every partition? I want:
    partition_name count(*)
    The table is partitioned daily by a date column.
    Ex:
    ALTER TABLE tableName ADD PARTITION tableName||today VALUES LESS THAN (TO_DATE('tomorrow','DD-MM-YYYY'))
    Regards,
    Gicu

    Hello,
    if you analyze your table regularly and you only want an approximate count of rows, you can use:
    select partition_name, num_rows
    from dba_tab_partitions
    where table_name = 'TBL_TEST'
    Else, dynamic SQL:
    declare
        v_partition VARCHAR2(30);
        v_count     NUMBER;
    begin
        FOR x IN (SELECT table_owner, partition_name
                    FROM dba_tab_partitions
                   WHERE table_name = 'TBL_TEST')
        LOOP
            EXECUTE IMMEDIATE 'SELECT '''||x.partition_name||''', count(*) FROM '
                              ||x.table_owner||'.TBL_TEST PARTITION ("'||x.partition_name||'")'
                INTO v_partition, v_count;
            dbms_output.put_line('Partition '||v_partition||' count '||v_count);
        END LOOP;
    end;
    Regards
    Dmytro Dekhtyaryuk

  • Partitioning Disk for OS X Server 10.5.4 - A good Idea?

    Hi,
    Is it a good idea to partition the disk for a clean install of OS X Server (10.5), putting the system on one partition and other files (data of groups and users) onto a second partition?
    If yes, please what size of partition should I consider for OS X Server to breathe freely?
    TIA

    If you can't get another drive in there (highly recommended) then yes, I'd recommend that.
    The problem is, the system keeps some data on the system partition (mail, for example, is stored in /var/spool/imap), so if you truly want to separate the data you will need to move this folder (with the mail server off) and change the setting.
    I'd say about 20GB would be reasonable; that is enough for the system and anything you might add to it, and plenty for things like the collaboration tools, which use so little space as to be not worth moving.
    Whatever you do, back up your stuff, even if it's just Time Machine (which in my experience is better than nothing, even in the 'advanced' setup).
    Does that help?
    James

  • How to get total number of result count for particular key on cluster

    Hi-
    My application's requirement is that the client side needs only a limited number of results for a 'Search Key' out of the total records found in the cluster. I also need the 'total result count' for that key across the cluster.
    To get the subset of records I'm using an IndexAwareFilter and returning only a limited set from each individual node. Though I get the total number of records present on each individual node, it is not possible to return this count to the client from the IndexAwareFilter (the filter returns only a Binary set).
    Is there any way I can get this number (the total result size) on the client side without returning the whole chunk of data?
    Thanks in advance.
    Prashant

    user11100190 wrote:
    Hi,
    Thanks for suggesting a solution, it works well.
    But apart from the count (cardinality), the client also expects the actual results. In this case, it seems that the filter will be executed twice (once for counting, then once again for generating the actual result set).
    Actually, we need to perform paging. In order to achieve paging in an efficient manner, we need the filter to return only PAGESIZE records and also return the total 'count' that meets the criteria.
    If you want to do paging, you can use the LimitFilter class.
    If you want to have paging AND the total number of results, then at the moment you have to use two passes if you want to use out-of-the-box features, because LimitFilter does not return the total number of results (which, by the way, may change between two page retrievals).
    What we currently do is, the filter puts the total count in a static variable but returns only the first N records. The aggregator then clubs this info into a single list and returns it to the client. (The list returned by the aggregator contains a special entry representing the count.)
    This is not really a good idea, because if you have more than one user doing this operation then you will have problems storing more than one value in a single static variable if you used a cache service with a thread-pool (thread-count set larger than one).
    We assume that the aggregator will execute immediately after the filter on the same node; this way the aggregator will always read the count set by the filter.
    You can't assume this if you have multiple client threads doing the same kind of filtering operation and you have a thread-pool configured for the cache service.
    Please tell us if our approach will always work, and whether it will be efficient compared to using the Count class, which requires executing the filter twice.
    No it won't, if you use a thread-pool. Also, it might happen that Coherence executes the filtering and the aggregation from the same client thread multiple times on the same node, if some partitions were newly moved to a node which already executed the filtering+aggregation once. I don't know of anything which would even prevent this being executed on a separate thread concurrently.
    The following solution may work, but I can't fully recommend it as it may leak memory depending on how exactly the filtering and aggregation are implemented (if it is possible that a filtering pass is done but the corresponding aggregation is not executed on the node because some partitions moved away).
    When sending the cache.aggregate(Filter, EntryAggregator) call you should specify a unique key for each such filtering operation to both the filter and the aggregator.
    On the storage node you should have a static HashMap.
    The filter should do the following two steps while being synchronized on the HashMap:
    1. Ensure that a ConcurrentLinkedQueue object exists in the HashMap keyed by that unique key, and
    2. Enqueue the total number count it wants to pass to the aggregator into that queue.
    The parallel aggregator should do the following two steps while being synchronized on the HashMap:
    1. Dequeue a single element from the queue, and
    2. If the queue is now empty, remove it from the HashMap.
    The parallel aggregator should then return the dequeued number as a partial total count as part of the partial result.
    The client side of the parallel aware aggregator should sum the total counts in the partial result.
    Since the enqueueing and dequeueing may be interleaved from multiple threads, it may be possible that the partial total count returned in a result does not correspond to the data in the partial result, so you should not base anything on that assumption.
    Once again, that approach may leak memory based on how Coherence is internally implemented, so I can't recommend this approach but it may work.
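    As a minimal sketch of that shared-count handoff (the class, method names, and request-key scheme are assumptions, not Coherence API, and the memory-leak caveat above applies):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Holds per-request partial counts on each storage node, handed from the
    // custom filter to the parallel aggregator as described above.
    public class CountRegistry {
        private static final Map<String, Queue<Integer>> COUNTS =
                new HashMap<String, Queue<Integer>>();

        // Called by the filter: ensure a queue exists for this request key and
        // enqueue the local match count, synchronized on the map.
        public static void publish(String requestKey, int localCount) {
            synchronized (COUNTS) {
                Queue<Integer> q = COUNTS.get(requestKey);
                if (q == null) {
                    q = new ConcurrentLinkedQueue<Integer>();
                    COUNTS.put(requestKey, q);
                }
                q.add(Integer.valueOf(localCount));
            }
        }

        // Called by the parallel aggregator: dequeue one partial count and drop
        // the queue once empty. Returns 0 if nothing was published (for example,
        // if partitions moved between the filtering and aggregation passes).
        public static int consume(String requestKey) {
            synchronized (COUNTS) {
                Queue<Integer> q = COUNTS.get(requestKey);
                if (q == null) {
                    return 0;
                }
                Integer n = q.poll();
                if (q.isEmpty()) {
                    COUNTS.remove(requestKey);
                }
                return n == null ? 0 : n.intValue();
            }
        }
    }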
    Another thought is that since returning entire cached values from an aggregation is more expensive than filtering (you have to deserialize and reserialize objects), you may still be better off by running a separate count and filter pass from the client, since for that you may not need to deserialize entries at all, so the cost on the server may be lower.
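    For comparison, the two-pass variant is straightforward; a hedged sketch (the cache name and filter are assumptions for illustration):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.aggregator.Count;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.LimitFilter;
    import java.util.Set;

    public class PagingExample {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("orders");      // hypothetical cache
            Filter criteria = new EqualsFilter("getStatus", "OPEN"); // hypothetical filter

            // Pass 1: total match count, computed in parallel on the storage
            // nodes without returning the matching entries themselves.
            Integer total = (Integer) cache.aggregate(criteria, new Count());

            // Pass 2: fetch one page of matches at a time.
            LimitFilter page = new LimitFilter(criteria, 20); // 20 entries per page
            Set firstPage = cache.entrySet(page);
            page.nextPage();
            Set secondPage = cache.entrySet(page);

            System.out.println(total + " matches; first page: " + firstPage.size());
        }
    }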
    Best regards,
    Robert

  • Compacting the cache-config.xml for multiple cache-store

    Hi,
    I have a cache-config.xml that has various ReadWriteBackingMaps with different CacheLoader implementations. I was wondering about the best way to compact this XML using the scheme-ref tag, as all I really need is schemes with different cache stores. E.g. I have an InstrumentCacheStore and a CurrencyCacheStore, which invoke different CacheLoaders; they are both distributed caches.
    I thought the below would work, but it doesn't.. :( when loading a currency, the InstrumentCacheStore gets invoked.
    is there a way to compact this XML? Else, for 6 different cache loaders that I have, do I have to specify the whole distributed-scheme again and again?
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>instrument-*</cache-name>
          <scheme-name>distributed-instrument-scheme</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>currency-*</cache-name>
          <scheme-name>distributed-currency-scheme</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributed-instrument-scheme</scheme-name>
          <scheme-ref>distributed-scheme</scheme-ref>
        </distributed-scheme>
        <distributed-scheme>
          <scheme-name>distributed-currency-scheme</scheme-name>
          <scheme-ref>distributed-scheme</scheme-ref>
          <!-- THIS DOES NOT OVERRIDE THE DEFAULT distributed-scheme? -->
          <cachestore-scheme>
            <class-scheme>
              <class-name>coherence.cachestore.CurrencyCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
        </distributed-scheme>
        <distributed-scheme>
          <scheme-name>distributed-scheme</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme>
                  <scheme-ref>LocalSizeLimited</scheme-ref>
                </local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>coherence.cachestore.InstrumentCacheStore</class-name>
                </class-scheme>
              </cachestore-scheme>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
          <partition-count>5557</partition-count>
          <backup-count>1</backup-count>
          <thread-count>10</thread-count>
          <autostart>true</autostart>
        </distributed-scheme>
        <local-scheme>
          <scheme-name>LocalSizeLimited</scheme-name>
          <high-units>500000000</high-units>
          <low-units>10000</low-units>
          <unit-calculator>BINARY</unit-calculator>
        </local-scheme>
      </caching-schemes>
    </cache-config>

    There are two possible ways to sort this out
    1. The cache configuration for the distributed-currency-scheme shown in the original post is wrong and does not correctly override the cache store; it should look like this:
    <distributed-scheme>
      <scheme-name>distributed-currency-scheme</scheme-name>
      <scheme-ref>distributed-scheme</scheme-ref>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>LocalSizeLimited</scheme-ref>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>coherence.examples.CurrencyCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    2. You can use a single scheme and parameterise it like this:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>instrument-*</cache-name>
          <scheme-name>distributed-scheme</scheme-name>
          <init-params>
            <init-param>
              <param-name>cache-store-class-name</param-name>
              <param-value>coherence.examples.InstrumentCacheStore</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
        <cache-mapping>
          <cache-name>currency-*</cache-name>
          <scheme-name>distributed-scheme</scheme-name>
          <init-params>
            <init-param>
              <param-name>cache-store-class-name</param-name>
              <param-value>coherence.examples.CurrencyCacheStore</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributed-scheme</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme>
                  <scheme-ref>LocalSizeLimited</scheme-ref>
                </local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>{cache-store-class-name}</class-name>
                </class-scheme>
              </cachestore-scheme>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
          <partition-count>5557</partition-count>
          <backup-count>1</backup-count>
          <thread-count>10</thread-count>
          <autostart>true</autostart>
        </distributed-scheme>
        <local-scheme>
          <scheme-name>LocalSizeLimited</scheme-name>
          <high-units>500000000</high-units>
          <low-units>10000</low-units>
          <unit-calculator>BINARY</unit-calculator>
        </local-scheme>
      </caching-schemes>
    </cache-config>
    Parameter names from the init-params part of each cache mapping can be used inside curly brackets in the cache scheme part.
    Hope that helps,
    JK

  • Distinct count for multiple fact tables in the same cube

    I'm fairly new to working with SSAS, but have been working with DW environments for many years.
    I have a cube which has 4 fact tables.  The central fact table is Encounter and then I also have Visit, Procedure and Medication.  Visit, Procedure and Medication all join to Encounter on Encounter Key.  The relationship between Encounter
    and Procedure and Encounter and Medication are both an optional 1 to 1.  The relationship between Encounter and Visit is an optional 1 to many.
    Each of the fact tables join to the Patient dimension on the Patient Key.  The users are looking for a distinct count of patients in all 4 fact tables.  
    What is the best way to accomplish this so that my cube does not take all day to process? Please let me know if you need any more information about my cube in order to answer this.
    Thanks for the help,
    Andy

    Hi Andy,
    Each distinct count measure causes an ORDER BY clause in the SELECT sent to the relational data source during processing. In SSAS 2005 or later, a new measure group is created for each distinct count measure (a design strategy for improving performance).
    Besides, please take a look at the following distinct count optimization techniques:
    Create Customized Aggregations
    Define a Processing Plan
    Create Partitions of Equal Size
    Use Partitions Comprised of a Distinct Range of Integers
    Distribute the Hash of Your UserIDs
    Modulo Function
    Hash Function
    Choose a Partitioning Strategy
    For more detail information, please refer to the article below:
    Analysis Services Distinct Count Optimization:
    http://www.microsoft.com/en-us/download/details.aspx?id=891
    In addition, here is a good article about SSAS Best Practices for your reference:
    http://technet.microsoft.com/en-us/library/cc966525.aspx
    Hope this helps.
    Elvis Long
    TechNet Community Support

  • What is a good cleanup tool for Mavericks?

    What is a good cleanup tool for Mavericks?

    Kappy's Personal Suggestions About Mac Maintenance
    For disk repairs use Disk Utility. For situations DU cannot handle, the best third-party utility is Disk Warrior; DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption. Drive Genius provides additional tools not found in Disk Warrior for defragmentation of older drives, disk repair, disk scans, formatting, partitioning, disk copy, and benchmarking.
    Four outstanding sources of information on Mac maintenance are:
    1. OS X Maintenance - MacAttorney.
    2. Mac maintenance Quick Assist
    3. Maintaining Mac OS X
    4. Mac Maintenance Guide
    Periodic Maintenance
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) See Mac OS X- About background maintenance tasks. If you are running Leopard or later these tasks are run automatically, so there is no need to use any third-party software to force running these tasks.
    If you are using a pre-Leopard version of OS X, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep.  Dependence upon third-party utilities to run the periodic maintenance scripts was significantly reduced after Tiger.  (These utilities have limited or no functionality with Snow Leopard, Lion, or Mountain Lion and should not be installed.)
    Defragmentation
    OS X automatically defragments files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive except when trying to install Boot Camp on a fragmented drive.
    Malware Protection
    As for malware protection there are few if any such animals affecting OS X. Starting with Lion, Apple has included built-in malware protection that is automatically updated as necessary. To assure proper protection, update your system software when Apple releases new OS X updates for your computer.
    Helpful Links Regarding Malware Protection:
    1. Mac Malware Guide.
    2. Detecting and avoiding malware and spyware
    3. Macintosh Virus Guide
    For general anti-virus protection I recommend only using ClamXav, but it is not necessary if you are keeping your computer's operating system software up to date. You should avoid any other third-party software advertised as providing anti-malware/virus protection. They are not required and could cause the performance of your computer to drop.
    Cache Clearing
    I recommend downloading a utility such as TinkerTool System, OnyX 2.4.3, Mountain Lion Cache Cleaner 7.0.9, Maintenance 1.6.8, or Cocktail 5.1.1 that you can use for periodic maintenance such as removing old log files and archives, clearing caches, etc. Corrupted cache files can cause slowness, kernel panics, and other issues. Although this is not a frequent nor a recurring problem, when it does happen there are tools such as those above to fix the problem.
    If you are using Snow Leopard or earlier, then for emergency cleaning install the freeware utility Applejack.  If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the command line.  Note that AppleJack 1.5 is required for Leopard. AppleJack 1.6 is compatible with Snow Leopard. (AppleJack works with Snow Leopard or earlier.)
    Installing System Updates or Upgrades
    Repair the hard drive and permissions beforehand.
    Update your backups in case an update goes bad.
    Backup and Restore
    Having a backup and restore strategy is one of the most important things you can do to maintain your computer. Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. You can never have too many backups. Don't rely on just one. Make several using different backup utilities. My personal recommendations are (order is not significant):
         1. Carbon Copy Cloner
         2. Get Backup
         3. Deja Vu
         4. SuperDuper!
         5. Synk Pro
         6. Tri-Backup
    Visit The XLab FAQs and read the FAQs on maintenance and backup and restore.
    Always have a current backup before performing any system updates or upgrades.
    Final Suggestions
    Be sure you have an adequate amount of RAM installed for the number of applications you run concurrently. Be sure you leave a minimum of 10% of the hard drive's capacity or 20 GBs, whichever is greater, as free space. Avoid installing utilities that rely on Haxies, SIMBL, or that alter the OS appearance, add features you will rarely if ever need, etc. The more extras you install the greater the probability of having problems. If you install software be sure you know how to uninstall it. Avoid installing multiple new software at the same time. Install one at a time and use it for a while to be sure it's compatible.
    Additional reading may be found in:    
    1. Mac OS X speed FAQ
    2. Speeding up Macs
    3. Macintosh OS X Routine Maintenance
    4. Essential Mac Maintenance: Get set up
    5. Essential Mac Maintenance: Rev up your routines
    6. Five Mac maintenance myths
    7. How to Speed up Macs
    8. Myths of required versus not required maintenance for Mac OS X
    Referenced software can be found at CNet Downloads or MacUpdate.
    Most if not all maintenance is for troubleshooting problems. If your computer is running OK, then there isn't really a thing you need to do except repair the hard drive and permissions before installing any new system updates.

  • Partition Count

    Got a couple of questions on the below thread!
    Capacity: 150 GB of actual data, so ~3 times that, 450GB of total memory needed.
    Out of this
    150 GB = CacheName1 (application1, 50GB) + CacheName2 for a specific application (application2, 100GB)
    Question -
    a) When I configure the partition count in the distributed scheme, should CacheName1 and CacheName2 have separate partition counts in their local-scheme? (correct me if I'm wrong)
    Partition-count calculation:
    50 GB actual data: (150GB x 1024) MB / 50 MB = 3072 ~= 3080 (3079 prime number + 1)
    100 GB actual data: (300GB x 1024) MB / 50 MB = 6144 ~= 6152 (6151 prime number + 1)
    b) Is there any way to set an operational parameter that reflects and restricts CacheName1 to 150GB and CacheName2 to 300GB?
    c) What is the role of high-units? (Currently I have 2GB for each of CacheName1 and CacheName2 in the local scheme, with all the JVMs running on 6GB.)
    My understanding is that high-units restricts an individual cache on a node, e.g. CacheName1 cannot exceed 2GB on that node?
    d) Does each cache name have some footprint in each JVM, and does high-units play any role in the allocation of the cache items?

    Hi
    a) The partition count is preferably a prime number, so do not add +1; the caches may have different partition counts but aren't required to.
    b) Not directly, since the capacity is dependent on the number of machines and their respective heap sizes. If you had a stable environment you could use high-units to do something similar, by setting it to 300 (or 150) * 1073741824 / number of storage-enabled JVMs.
    c) Correct, high-units is what sets the local capacity of a certain scheme.
    d) If the hashCode() implementation has an even enough distribution it should generate an equal footprint across the machines in the cluster. However, if you have data affinity or an uneven distribution in the hashCode() implementation, some partitions may become heavier than others, and if you are unlucky those could end up on the same machine.
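    As a worked example of (b), this minimal sketch assumes 60 storage-enabled JVMs (the thread does not state the node count); the result still has to fit inside each JVM's heap:

    // Hedged arithmetic for answer (b); the JVM count is an assumption.
    long capBytes = 300L * 1073741824L;      // 300 GB cap for CacheName2
    int storageJvms = 60;                    // assumed cluster size
    long highUnits = capBytes / storageJvms; // = 5368709120 bytes (~5 GB) per JVM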
    Thanks
    /Charlie Helin - Coherence Dev Team

  • Get the Count for each row

    I'm trying to get the count for each row, totalled for each month
    Something like this
    Hardware     |      Jan
    Monitors       |       5
    Processors   |      137
    Printers        |      57
    etc........
    How can I write a query for this. I can get the Hardware column but don't know how to get the next column.

    If you could provide more data, like sample input DML statements, it would be wonderful.
    My assumption is that you need a pivot. Here is an article on basic PIVOT:
    http://sqlsaga.com/sql-server/how-to-use-pivot-to-transform-rows-into-columns-in-sql-server/
    Something like this, maybe:
    DECLARE @Input TABLE
    (
        Hardware VARCHAR(20),
        [Date] VARCHAR(20)
    );
    INSERT INTO @Input VALUES('Monitor', '01/01/2014'), ('CPU', '01/01/2014'), ('Monitor', '01/03/2014')
    , ('ABC', '01/01/2014'), ('Monitor', '02/01/2014');
    WITH CTE AS
    (
        SELECT Hardware, LEFT(DATENAME(M, [Date]), 3) AS [MonthName] FROM @Input
    )
    SELECT *
    FROM
    (
        SELECT Hardware, [MonthName], COUNT(Hardware) AS [Count] FROM CTE GROUP BY Hardware, [MonthName]
    ) a
    PIVOT (MAX([Count]) FOR [MonthName] IN ([Jan], [Feb])) pvt;
    Please mark as answer, if this has helped you solve the issue.
    Good Luck :) .. visit www.sqlsaga.com for more t-sql code snippets and BI related how to articles.

  • 1st gen ipod Shuffle not updating play count for songs

    I've noticed since I updated to the iTunes version 7.2.035 that plugging in and updating the library with my iPod shuffle no longer updates the play count for the songs I've listened to. Is there a known fix for this? Anyone else experiencing the same problem with their 1st gen ipod shuffle (512)?

    Well, I have that problem too. My shuffle holds about 125 songs or so, yet I've noticed there are always a good 20 or so songs that won't play, no matter how many times I load them onto the device. I have had that problem since I inherited it from a friend.
    My new problem is that every time I plug my iPod into iTunes, it no longer updates the playcount for songs. In other words, if I load a song that I've heard 3 times, hear it twice during the day, and replug in my iPod at night, it used to update that song's playcount in the library to 5, but now they still read 3. It's also failing to update the "date last played" field.
    I use those to help make sure that I don't load the same songs over and over again by creating a custom playlist that will comb out songs played in the last few days and have a very high play count. Since these are no longer updating, I've noticed that I keep arriving at work only to find my iPod has the same or similar songs as the day before, and dude, if I wanted that, I'd listen to the radio!

  • Error - E 036 No goods receipt possible for purchase order

    Good Day,
    I am not able to do the goods receipt through BAPI BAPI_GOODSMVT_CREATE, but I am able to create the GR through MIGO. I am getting the below error through the BAPI:
    E 036 No goods receipt possible for purchase order  4500000563 0010.
    I went through SDN, but I could not find any solution for this. Please help me in this regard.
    Cheers,
    sravan

    Hi,
    Maybe this example can help you.
    http://www.sap-img.com/abap/bapi-goodsmvt-create-to-post-goods-movement.htm
    Regards,
    Harish

  • Best partitioning strategy for OS 8.6?

    Hello,
    I have two hard drives on my beige 233 MHz G3 minitower (rev 1):
    4 GIG SCSI
    80 GIG ATA
    I need to install OS 8.6. I was considering using both drives but I do not know what I should put on each drive.
    Should I install OS 8.6 onto the 4 GIG drive or the 80 GIG drive?
    Option 1:
    4 GIG: OS 8.6
    80 GIG: applications
    Option 2:
    4 GIG drive: don't use it at all
    80 GIG drive: OS 8.6 and the applications
    Option 3 (using partitioning):
    4 GIG drive: don't use it at all
    80 GIG drive: First partition (15 GIGS): OS 8.6 and the applications. Second partition: (65 GIGs) for data.
    Unfortunately 4 GIGs is too small for all my applications and so that is why I was thinking of not even using the 4 GIG drive and instead putting everything onto the 80 GIG drive. I have heard it is a good idea to keep the system and apps on the same volume - I am not sure if this is always true however.
    Any thoughts?
    Thanks!
    The other issue is that I might have to further partition the 80 GIG drive into an additional partition for OSX in case I also need to install OSX because I read somewhere that on a beige G3, OSX must be installed in a partition that is no larger than 8 GIGs and it must be the first partition.
    This scenario would look like this:
    Option 4 (using more partitioning):
    4 GIG: don't use it at all
    80 GIG:
    First partition (7.9 GIGS): OS X and OS X applications
    Second partition (15 GIGS): OS 8.6 and the applications
    Third partition: (57 GIGs) for data.
    (I created a separate related post under older hardware/beige G3/usage regarding how to boot between OS 8.6 and OS X, in case anyone is interested.)

    Thanks Don and Jim,
    I carefully considered all your comments and here's what I plan to do:
    It is important for me to indicate that, for this beige G3, I will use OSX primarily (OS 10.1.5) and therefore I would like its partition to be as large as possible. I read this will benefit OSX's swap file needs. However the OS must be within the 7.7 GIG limit for this particular mac (beige G3). I can therefore conclude that I should make the first partition exactly 7.7 GIGs and use it only for OSX.
    Jim, you say that the 8 GIG limit for the startup volume applies to all OSes not just OSX and so that is a critical point for me. I will need to be certain about this. Are you sure OS 8.6 isn't bootable if it is in another partition beyond the first 8 GIGS on a beige G3? You see, I was thinking of installing OS 8.6 on a second partition of the 80 GIG drive like this:
    partition 1 (7.7 GIGs): OSX
    partition 2: (15 GIGs) : OS 8.6 and its applications
    Based on your comment (the 8 GIG limit for the startup volume applies to all OSes) this will not work - the OS 8.6 partition will never boot because it is not within the first 8 GIGs. I had thought that rule only applied to OSX.
    And if so, the best solution is to put OS 8.6 on the 4 GIG drive instead.
    Here is my final solution:
    4 GIG drive:
    OS 8.6 (plus maybe photoshop)
    80 GIG drive:
    partition 1 (7.7 GIGs): OSX (10.1.5 is the version I have)
    partition 2: (about 67 GIGs) : all the OSX applications, all the OS 8.6 applications and all my data
    partition 3 (5 GIGS) : scratch disk just for running photoshop in OS 8.6 (I may or may not create this last partition - still thinking about it)
    I might consider putting one or two of my OS 8.6 applications on the 4 GIG drive - the higher end applications such as photoshop which I think might fit on the 4 GIG drive with the OS 8.6.
    As for the 5 GIG scratch partition on the 80 GIG drive, I am still considering if I need to do this or not because I would rarely use it - only for the few times I might run photoshop in OS 8.6. Maybe I should keep this partition anyways for something else just in case.
    Other less important notes:
    I might, one day, be able to use OS 9 instead of OS 8.6 in these examples, in which case I will just substitute 9 for 8.6 in my examples noted above. However, for now I am reserving OS 9 for another computer I have. As for Classic, I don't need Classic support; I would rather just boot directly into 9. I have used Classic before and I found it to be slow.
    Thanks Jim and Don for your help.
    If anyone sees anything wrong with this setup please let me know!

  • Partitioning Tips for Most Effective Usage

    Hi guys,
    I'm running on Snow Leopard with 4GB DDR3. I'm hoping I'll be able to get some advice here on how I should fully utilise my MBP's hard drive.
    I recently bought a new 500GB Samsung HDD, but I've yet to install it cos I'm still pondering what type of partitioning scheme I should stick to. I use my MBP as a workhorse: I do photography, design with Photoshop, video editing, recording and gaming (with Windows). From time to time I work with large transfers of files in and out, like we're talking GBs here. There are also 2 other people in my family who use this computer, but just from time to time, to transfer pictures and sync their iPhones etc. I'm aware that too many partitions can also slow down the system, but this time round I'm really certain I'd wanna partition my Mac after a tragic crash of my OS X partition HDD. It wasn't the HD's fault tho, as my Bootcamp partition survived.
    Anyway, how many partitions is too many, and how should I partition my new drive to use it effectively? I've been reading quite a bit around the internet, and so far from what I've gathered I'm thinking of the following scheme:
    1) Primary Boot ROM - System and Applications (150GB)
    From my experience some apps like Final Cut Studio and Logic Studio 9 can take up as much as 56 GB each, Adobe CS5 takes up quite a bit too.
    2) Emergency Boot - For emergencies (15GB)
    System files and essentials like Alsoft Disk Warrior 4 and Data Recovery 3 in the event of nasty crashes for damage control/rescue
    3) Windows Partition - For Windows 7 and Games (100GB)
    Games like Modern Warfare 2 and Fallout 3 can take up to 15GB etc,
    4) Data Files (Remaining).
    What do ya'll think?
    More Questions I'd like to raise are:
    1) Would you think it would be better to create a partition just for installing hugeass apps (ala Logic and FC), and does anyone know if they can be installed on the non-boot partitions? E.g. if I'm using my primary startup partition, can I install Logic on another partition which doesn't have Mac OS X on it?
    2) Should my video, audio and photography workfiles be in separate partitions, or would it be more advisable to just keep them together?
    3) Should there be a partition just for temporary file storage, like if I'm moving 50GB of data?
    4) How about video capturing? Recording sessions and post-production project files? Should they be in partitions of their own?
    5) I've read about scratch/swap partitions; what are they and are they advisable to have? Especially cos the stuff I do is pretty resource intensive.
    6) Should the different users be on different partitions?
    I guess that's about all the questions on my mind for now..
    Would greatly appreciate your help before i plan out and partition
    Thanks in advance!

    @Kappy,
    I'm sorry! Nono, don't get me wrong, I'm not shutting out your advice. I'm just in a dilemma, as there seem to be two opposing camps: people who swear by partitioning and those against it. I read that a lot of people in the media industry, i.e. sound engineers who do recording on the go and designers who do huge projects, highly recommend the practice of partitioning, as huge amounts of time are spent on each project, so they'd rather be safe than sorry.
    On the other hand, people here are saying there's no need to do so/it sounds illogical.. I'm just wondering why.. I mean, I understand it's gonna be a hassle and all, but is it not advantageous to do so, especially in times of adversity?
    About rEFIt, I understand that the point of the article was aimed at being a tutorial on creating a multiple-booting computer. However, one of the steps pointed out was that we could create multiple partitions before installing Windows in just one partition. If that can be done, wouldn't it mean that instead of creating multiple boot partitions, I can create storage partitions as well, by selecting the appropriate kind of disk formats, which technically bypasses the limit of Bootcamp's 2-partition-only policy? Do you think that would be possible? PS: I'm not looking to install Windows on an external drive..
    About the emergency disk,
    I fully agree with you that the ideal is to have it on an external drive, which I definitely have been practicing. However, cos of my recent crash, I figured that data recovery from my external USB hard drive did indeed help, but was quite a slow process (yea.. I know FireWire's the fastest option! heh), but I'm just wondering, any idea if booting from a good partition would be faster than booting from an external HD/USB stick?
    "OS X is structured to expect data/documents to be stored in their appropriate folders on the startup volume." This I do agree with; do you think there are ways to re-route them?
    "Video, Audio and photography workfiles might best be kept on another hard drive, preferably FW800 for speed." Yea, this I definitely agree would be ideal; however, I bring my Mac out for live recordings sometimes using a FireWire interface. My MBP has only 1 FireWire port tho, so an ext FireWire HD wouldn't be an option.. and if I can avoid bringing along an external drive, that would be great too.
    Your tip on backing up with a dedicated large external HD I do agree with fully and couldn't agree more! In fact that's what I'm practicing. The RAID box is a wonderful idea actually! So thanks! (:
    @Michael Black
    Thanks for your answers to #5 and #6.. !
    Everyone, thanks for your responses so far! (:
